
Front cover

Fibre Channel Storage for AIX on Power Systems II:
Configuration & Management

(Course code QV572)

Student Notebook
ERC 1.0

UNIX Software Service Enablement



Trademarks
IBM and the IBM logo are registered trademarks of International Business Machines
Corporation.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
AIX, BladeCenter, DB2, DS4000, DS6000, DS8000, FlashCopy, Power Systems, POWER, PowerHA, PowerVM, pSeries, Redbooks, System p, System Storage, xSeries
Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the
United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
VMware and the VMware "boxes" logo and design, Virtual SMP and VMotion are registered
trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other
jurisdictions.
Other product and service names might be trademarks of IBM or other companies.

May 2011 edition


The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis without any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will result elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Copyright International Business Machines Corporation 2011.


This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions set forth in the GSA ADP Schedule Contract with IBM Corp.

Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Course description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Unit 1. SAN with Power Systems Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
1.1. IBM Storage Product Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Topic 1: IBM Storage Product Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-6
SAN in a Power Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
AIX Storage Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Booting from SAN Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Virtual I/O Server and SAN LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
IBM BladeCenter and SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
SAN Performance Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
SAN Problem Determination with Power Servers . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
IBM Disk Storage Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
QV572 Lab System LUN Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
USSE Lab Fibre Channel Switch Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
DS4000 Model 4300 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
DS6000 Model 6800 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-18
N Series Model 3400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-19
1.2. AIX SAN Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-21
Topic 2: AIX SAN Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
AIX Command Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-23
Listing Devices - lsdev Command (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-24
Listing Devices - lsdev Command (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-25
Listing Devices - lscfg Command (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
Listing Devices - lscfg Command (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-27
Listing Device Attributes - lsattr Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-28
Listing LUN Paths - lspath Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-29
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-30
Exercise 1: Introduction to Lab Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-31
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-32

Unit 2. SAN Boot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
What is SAN Boot? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Benefits of Booting from SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Disadvantages of SAN Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
SAN Boot Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
How to Place AIX on a SAN Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
alt_disk_copy Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8


SMS Menu Selections - Main Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-9


SMS Menu Selections - Boot Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-10
SMS Menu Selections - Select Device Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-11
SMS Menu Selections - Select Media Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-12
SMS Menu Selections - Select Media Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-13
SMS Menu Selections - Select Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-14
Boot Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-15
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-16
Exercise 2: SAN Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-17
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-18

Unit 3. Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-3
3.1. Virtual I/O Server Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-5
Topic 1: Virtual I/O Server Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-6
Logical Partitioning Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-7
Virtual I/O Server - Physical Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-8
Virtual I/O Server - Virtual Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-9
Accessing Your VIO Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-10
Discovering Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-11
Identify Installed Physical Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-12
Identify Installed Disk Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-13
Add Virtual SCSI Channel Adapter (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-14
Add Virtual SCSI Channel Adapter (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-15
Add Virtual SCSI Channel Adapter (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-16
Identify Installed Virtual Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-17
Identify Device Mapping - lsmap Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-18
Identify WWN Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-19
3.2. Node Port ID Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-21
Topic 2: Node Port ID Virtualization (NPIV) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-22
Virtual Fibre Channel Adapter (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-23
Virtual Fibre Channel Adapter (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-24
Virtual Fibre Channel Adapter (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-25
Virtual I/O Server - Virtual Paths - Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-26
Add Virtual Fibre Channel Adapter (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-27
Add Virtual Fibre Channel Adapter (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-28
Virtual Fibre Channel DLPAR Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-29
Map Virtual Server FibreChannel Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-30
View Fibre Channel Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-31
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-32
Exercise 3: Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-33
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-34

Unit 4. BladeCenter SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-2
4.1. BladeCenter Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-3
Topic 1: BladeCenter Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-4
BladeCenter Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-5


BladeCenter Architecture Components . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6


Power Blades - Current Offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Fibre Channel and IBM BladeCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Accessing IBM BladeCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Monitors - Hardware Vital Product Data (VPD) . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Blade Configuration - Open Fabric Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
I/O Module Tasks - Admin/Power/Restart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
I/O Module Tasks - Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
I/O Module Tasks - Firmware Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
4.2. Integrated Virtualization Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Topic 2: Integrated Virtualization Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Operating System Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Integrated Virtualization Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
View/Modify Partitions - Physical Adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
View/Modify Partitions - Storage - Virtual FC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
View Virtual Fibre Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
Physical Device Information - AIX CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Virtual Device Information - AIX CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
NPIV Addressing - Switch View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Exercise 4: BladeCenter Demonstration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-27

Unit 5. SAN Performance Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Trade-offs and Performance Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Performance Metrics and Baseline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Determine the Type of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6
AIX I/O Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
LVM Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Queueing I/Os . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Disk Drive Queue Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
Changing the Disk Queue Depth - Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Monitoring Disk Queues with iostat -D. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Virtual SCSI Queue Depth Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
FC Disk Adapter Tuning Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Viewing FC Disk Adapter Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
The filemon Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
filemon - Most Active Files Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
filemon - Most Active LV and PV Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
filemon - Detailed File Stats Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
filemon - Detailed PV Stats Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Monitoring Adapter I/O Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
Monitoring System Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22
Testing Sequential Throughput with time and dd. . . . . . . . . . . . . . . . . . . . . . . . . 5-23
Testing Throughput with ndisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Example Using ndisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25


Tuning FC Disk and Adapter Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-26


Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-27
Exercise 5: SAN Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-28
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-29

Unit 6. Problem Determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-1


Unit Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-2
Fibre Channel Problem Determination Process . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-3
Before a Problem Occurs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-4
When a Problem Occurs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-5
Problem Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-6
Disk and Path Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-7
Path Selection Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-8
Path Priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-9
The iostat Path Utilization Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-10
Health Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-11
FC Path Management: Status and Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-12
Disk Reservation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-13
Detecting a SCSI Reserve Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-14
Switch Attach Fibre Channel Adapter Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . .6-15
Fast I/O Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-16
Dynamic Tracking of Fibre Channel Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-17
FCP_ERR10 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-18
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-19
Exercise 6: Problem Determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-20
Unit Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-21

Appendix A. Checkpoint solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1


Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM and the IBM logo are registered trademarks of International Business Machines
Corporation.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
AIX, BladeCenter, DB2, DS4000, DS6000, DS8000, FlashCopy, Power Systems, POWER, PowerHA, PowerVM, pSeries, Redbooks, System p, System Storage, xSeries
Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the
United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
VMware and the VMware "boxes" logo and design, Virtual SMP and VMotion are registered
trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other
jurisdictions.
Other product and service names might be trademarks of IBM or other companies.


Course description


Fibre Channel Storage for AIX on Power Systems II: Configuration &
Management

Purpose
This course is designed to provide enhanced knowledge of various
SAN-related configurations and SAN management practices with
AIX-based Power Systems. Hands-on exercises will reinforce the lecture
and give students practice configuring, managing, and performing
common operations relating to Fibre Channel based storage
subsystems, Virtual I/O Server configuration, SAN boot
considerations, and IBM Power Blades. The lab environment consists
of Power Systems servers (such as POWER6 520 servers), IBM Fibre
Channel switches, and IBM storage servers (such as the DS4000,
DS6000, and N series n3400).

Audience
The audience for this training includes AIX technical support
individuals, AIX developers, AIX system administrators, system
architects and engineers, product engineers, and post-sales support
teams.

Prerequisites
Students attending this course are expected to have knowledge of AIX
SAN operations. These skills can be obtained by attending the
following course:
AHQV334 - PowerVM Virtual I/O Server Configuration
AHQV571 - Fibre Channel Storage for AIX on Power Systems

Objectives
After completing this course, you should be able to:
Describe common interaction of AIX within a Fibre Channel (FC)
environment
Interpret AIX boot strategies using SAN
Select key FC related performance characteristics to monitor
Use AIX commands for performance monitoring


Contrast IBM Power Blades to IBM Power Systems servers in a SAN environment
Navigate Virtual I/O Server FC devices
Initialize virtual FC resources
Define a basic strategy for FC problem determination (PD) under AIX
Provide suggested solutions to common FC SAN problems under AIX

x SANPD Copyright IBM Corp. 2011


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V5.4.0.3
Student Notebook

Agenda
Block 1
Welcome
Unit 1 - SAN Overview
Exercise 1 - Introduction to Lab Environment
Unit 2 - SAN Boot
Exercise 2 - SAN Boot Operations

Block 2
Unit 3 - Virtual I/O Server
Exercise 3 - Virtual I/O Server Operations
Unit 4 - BladeCenter SAN
Exercise 4 - BladeCenter Demonstration

Block 3
Unit 5 - SAN Performance Monitoring
Exercise 5 - SAN Performance Monitoring

Block 4
Unit 6 - Problem Determination
Exercise 6 - Problem Determination


Text highlighting
The following text highlighting conventions are used throughout this book:
Bold Identifies file names, file paths, directories, user names,
principals, menu paths and menu selections. Also identifies
graphical objects such as buttons, labels and icons that the
user selects.
Italics Identifies links to web sites and publication titles, is used where a
word or phrase is meant to stand out from the surrounding text,
and identifies parameters whose actual names or values are to
be supplied by the user.
Monospace Identifies attributes, variables, file listings, SMIT menus, code
examples and command output that you would see displayed
on a terminal, and messages from the system.
Monospace bold Identifies commands, subroutines, daemons, and text the user
would type.


Unit 1. SAN with Power Systems Overview

What this unit is about


This unit describes how AIX interacts with various IBM storage
servers. We will discuss basic AIX commands that are useful in
managing storage devices.

What you should be able to do


After completing this unit, you should be able to:
Describe IBM SAN product offerings
Discuss basic aspects of SAN interaction with Power systems
Describe course lab environment
Use AIX commands to identify system resources

How you will check your progress


Accountability:
Review Questions
Machine exercises

References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
http://www.snia.org
Storage Network Industry Association (SNIA)
http://www.fiberchannel.org
Fibre Channel Industry Association (FCIA)
REDP-4517 Harnessing the SAN to Create a Smarter
Infrastructure Redbook
SG24-6050 Practical Guide for SAN with pSeries Redbook
SG24-5470 Introduction to Storage Area Networks Redbook


SG24-7114 Introduction to Storage Infrastructure Simplification Redbook
SG24-6116 Implementing an IBM b-type SAN with 8 Gbps Directors and Switches


Unit Objectives
After completing this unit, you should be able to:
Describe IBM SAN product offerings
Discuss basic aspects of SAN interaction with Power systems
Describe course lab environment
Use AIX commands to identify system resources


Figure 1-1. Unit Objectives QV5721.0

Notes:


1.1. IBM Storage Product Review


Topic 1: IBM Storage Product Review


After completing this topic, you should be able to:
Describe IBM SAN product offerings
Discuss basic aspects of SAN interaction with
Virtual I/O server
IBM BladeCenter
Booting AIX from SAN
Basic performance monitoring
AIX troubleshooting steps
Describe course lab environment


Figure 1-2. Topic 1: IBM Storage Product Review QV5721.0

Notes:


SAN in a Power Environment

AIX supports a variety of storage devices and technologies, either connected locally or via a network

A Storage Area Network (SAN) is no longer something only found in large data centers

AIX works with a wide variety of storage servers offered by IBM, as well as other vendors' products

Figure 1-3. SAN in a Power Environment QV5721.0

Notes:
Our discussion throughout this course will deal with Storage Area Network (SAN) devices,
specifically disk devices and how they attach to AIX. We will look at IBM product offerings
and see how AIX interacts with these storage servers. SAN is not a new concept; it has
been available for many years. What is changing is that the cost of running a SAN
continues to fall, making it available to many more customers. When you add the
virtualization capabilities of Power systems, SAN becomes a valuable asset to just about
any AIX installation.

Copyright IBM Corp. 2011 Unit 1. SAN with Power Systems Overview 1-7
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook

AIX Storage Paths

(Diagram: the three categories of AIX storage access. Local attachment over the system bus (SCSI, SATA, SAS, USB). Fibre attachment from a Fibre Channel HBA to a SAN, SVC, or storage server (FC, SATA, FATA disks), including virtual SCSI and virtual Fibre Channel through a VIOS. Network attachment over 10 Gb Ethernet via FCoE, iSCSI to an Enhanced Ethernet gateway or NAS appliance, and iSCSI or NFS to a NAS server or appliance.)

Figure 1-4. AIX Storage Paths QV5721.0

Notes:
Where once we only worried about locally attached storage devices, today's data center
can be a complex, interconnected web of devices. While AIX will recognize each storage
device as an hdisk, the actual storage device can be connected in a number of ways. The
visual above summarizes how Power systems access storage resources.
Our discussion will focus on the top portion of this visual, as we look at AIX and VIO server
configurations with a SAN.


Booting from SAN Disk

By moving rootvg to a SAN LUN, you create flexibility in how you manage a system:
No local disk required
Mobility of rootvg
Replication of the AIX image

Figure 1-5. Booting from SAN Disk QV5721.0

Notes:
If you have multiple AIX instances installed, booting from a SAN device can provide
flexibility and reliability. You gain flexibility by enabling migration or duplication of the AIX
image throughout the SAN. Reliability is attained via the RAID configuration of the storage
server. We will discuss SAN boot in unit 2.


Virtual I/O Server and SAN LUNs

You will find Fibre Channel connected to Virtual I/O servers via physical and virtual paths
Utilize dedicated or virtual HBAs
Support for AIX, IBM i, and Linux instances

(Diagram: AIX, IBM i, and Linux client LPARs, each with a virtual Fibre Channel adapter (VFCA), mapped through the hypervisor to ports on an NPIV-capable Fibre Channel HBA owned by the VIOS, which connects to an NPIV-capable SAN switch.)

Figure 1-6. Virtual I/O Server and SAN LUNs QV5721.0

Notes:
As we mentioned previously, SAN becomes an important factor when we consider
virtualization of Power systems. This may include the VIO server owning the Fibre Channel
attached devices, or the virtualization of the Fibre Channel HBA. In either case, we need to
better understand how the VIO server interacts with Fibre Channel devices.
We will discuss the VIO server in unit 3.
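On the VIO server itself, the physical-to-virtual mapping can be listed from the padmin shell with the lsmap command (covered in unit 3). A minimal sketch; the flags are standard VIOS options, but the output depends entirely on your configuration:

$ lsmap -all            (list virtual SCSI server adapters and their backing devices)
$ lsmap -all -npiv      (list virtual Fibre Channel (NPIV) mappings)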


IBM BladeCenter and SAN

The combination of Power Blades and SAN storage provides a flexible solution to deploy AIX:
Take advantage of Power Virtualization
Deploy as few or as many blades as needed, with an easy upgrade path
Support for both physical and virtual Fibre Channel

(Diagram: Power Blades in a BladeCenter chassis, connected to a LAN and to SAN-attached storage.)

Figure 1-7. IBM BladeCenter and SAN QV5721.0

Notes:
IBM BladeCenter provides an excellent platform for running AIX, as well as many other
operating systems. Management of IBM BladeCenter resources, while not complex, does
present a few differences compared to a standalone Power Systems server.
We will discuss IBM BladeCenter in unit 4.


SAN Performance Considerations

Performance tuning has changed over time:
From simple locally attached disk performance issues
Toward evaluation of more complex interactions and configurations, including Fibre Channel and storage arrays

I/Os are queued at several layers in the I/O stack, including FC adapters

Figure 1-8. SAN Performance Considerations QV5721.0

Notes:
We will discuss AIX SAN performance issues in unit 5.


SAN Problem Determination with Power Servers

Identifying which path(s) should be used
Health checking of paths
Who has access to a SAN disk?
Fast failure: how long to wait before assuming a link is down
Dynamic tracking: dynamically moving an FC path between the fabric switch and the storage subsystem

Figure 1-9. SAN Problem Determination with Power Servers QV5721.0

Notes:
We will discuss AIX problem determination steps in unit 6.


IBM Disk Storage Products

High end and enterprise disk systems:
DS8000, XIV, DS6000, IBM Scale Out Network Attached Storage (SONAS)

Midrange and high performance computing systems:
Storwize V7000, DS5000 family, DCS9900

Entry level systems:
DS3000 family

Network storage appliances:
N Series

Figure 1-10. IBM Disk Storage Products QV5721.0

Notes:
IBM offers a wide variety of storage products, from small scale entry level systems to
enterprise-class large scale products. We will have an opportunity to work with some of
these during our exercises. For a current listing of products, please refer to
http://www.ibm.com/systems/storage/disk/.


QV572 Lab System LUN Distribution

(Diagram: the lab Power system hosting a VIOS LPAR and an AIX LPAR, each with an FC HBA and internal hdisks, with LUNs assigned from DS4300, DS6800, and n3400 storage servers.)

VIOS installed on 1 internal disk
AIX installed on 1 vscsi disk
Both LPARs are assigned 1 Fibre Channel adapter
7 LUNs assigned to each LPAR:
2 from DS4300 - both dedicated - 1 RAID 0 (16 GB), 1 RAID 5 (16 GB)
3 from DS6800 - 2 dedicated disks (20 GB), 1 shared (30 GB), all RAID 10
2 from n3400 - 2 dedicated disks (25 GB each), all RAID 4

Figure 1-11. QV572 Lab System LUN Distribution QV5721.0

Notes:
This visual represents our lab environment. As you can see, we have assigned a number
of LUNs to each of your lab systems, so you will have an opportunity to interact with a
number of technologies and configuration options.


USSE Lab Fibre Channel Switch Paths

When configuring a fabric, there are a number of ways to interconnect devices:
Physical or logical boundaries, zoning separation, core-to-edge, virtual fabrics

Our lab has 3 generations of IBM Fibre Channel switches in production:
2109-F16 (2 Gb)
2005-B32 (4 Gb)
2498-B24 (8 Gb)

Host port 0 - Side A
Host port 1 - Side B

Figure 1-12. USSE Lab Fibre Channel Switch Paths QV5721.0

Notes:
Between our Power system and storage server is the network fabric. When a storage fabric
is configured, there are two basic structures that can be followed, either a physical or
logical separation. A logical separation provides for the most flexibility, as all switches
interconnect, and you can adjust paths in a number of different ways. A physical
separation, as we use in our lab, does limit flexibility, but does give full separation between
paths. In either case, you need to have solid documentation, and make sure to label your
cables!


DS4000 Model 4300


Designed for:
Small to medium size business

Heritage:
Manufactured by LSI

Architecture:
2 Gb bus topology
Common management interface with DS3000 and DS5000


Figure 1-13. DS4000 Model 4300 QV5721.0

Notes:
The first of our storage servers is also the oldest. These servers were very popular 10
years ago, providing customers a modular approach to building a SAN. The base unit
contains 14 disks and can be attached to multiple expansion units as demands change.


DS6000 Model 6800


Designed for:
Medium to large enterprise
Both open systems and Mainframe servers

Heritage:
Designed and manufactured by IBM

Architecture:
2 Gb switch topology


Figure 1-14. DS6000 Model 6800 QV5721.0

Notes:
Our next storage server is our only device designed and manufactured by IBM. The
DS6000 was intended as an enterprise-class storage server that could also attach to open
systems, which includes AIX. A customer running AIX alongside a mainframe could look
to the DS6000 as a platform that could attach to both systems. It is also designed in a
modular fashion, enabling a customer to purchase a base unit and attach additional
expansion modules as needed.


N Series Model 3400


Designed for:
Small to large organizations
Easy upgrade path to increase capacity

Heritage:
Manufactured by NetApp

Architecture:
4 Gb switch topology


Figure 1-15. N Series Model 3400 QV5721.0

Notes:
Our last storage server is the most current in our lab. Introduced in 2010, the n3400 fits into
the entry-level tier of the N Series storage server family. The N Series product family provides
for Fibre Channel attachment as well as Ethernet attachment, and allows a customer to
grow an installation to very large capacity without the need to learn a new management
platform.


1.2. AIX SAN Operations


Topic 2: AIX SAN Operations


After completing this topic, you should be able to:
Use AIX commands to identify system resources
lscfg
lsdev
lsattr
lspath


Figure 1-16. Topic 2: AIX SAN Operations QV5721.0

Notes:


AIX Command Operations

Standard AIX commands such as lsdev or lscfg can be valuable tools in managing SAN-backed storage devices

Some storage servers, such as the DS4000, do utilize device-specific commands

Third-party storage products may require additional drivers or tools to be installed for full functionality

Figure 1-17. AIX Command Operations QV5721.0

Notes:
While there are some special commands available for specific storage server products,
most of your work will involve standard AIX commands. Using common commands such as
lsdev or lscfg, you can gather critical information about devices backed by SAN storage
servers.


Listing Devices - lsdev Command (1 of 2)

Displays devices in the system and their characteristics

# lsdev -C
. . .                                         (all known devices appear)
fcnet0 Defined   00-08-01 Fibre Channel Network Protocol Device
fcnet1 Defined   00-09-01 Fibre Channel Network Protocol Device
fcs0   Available 00-08    FC Adapter
fcs1   Available 00-09    FC Adapter
fcs2   Available C5-T1    Virtual Fibre Channel Client Adapter
fscsi0 Available 00-08-02 FC SCSI I/O Controller Protocol Device
fscsi1 Available 00-09-02 FC SCSI I/O Controller Protocol Device
fscsi2 Available C5-T1-01 FC SCSI I/O Controller Protocol Device
. . .

Figure 1-18. Listing Devices - lsdev Command (1 of 2) QV5721.0

Notes:
The lsdev command can be utilized to display the available adapters. In the visual, there
are three: fcs0, fcs1, and fcs2. The first two are physical adapters, while fcs2 is a
virtual adapter. You can also see that while the physical adapters have an fcnet device
configured under them, the virtual adapter does not.
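lsdev can also list the child devices of a given adapter with the -p (parent) flag, which is a quick way to confirm which fscsi protocol device sits on which fcs adapter. A minimal sketch using the adapter names from the visual above; your location codes will differ:

# lsdev -p fcs0
fcnet0 Defined   00-08-01 Fibre Channel Network Protocol Device
fscsi0 Available 00-08-02 FC SCSI I/O Controller Protocol Device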


Listing Devices - lsdev Command (2 of 2)

To list only a specific device type, like disks, add the -c argument

# lsdev -Cc disk
hdisk0  Available          Virtual SCSI Disk Drive
hdisk1  Available 00-08-02 MPIO Other FC SCSI Disk Drive   (unknown origin)
hdisk2  Available 00-08-02 MPIO Other FC SCSI Disk Drive
hdisk3  Available 00-08-02 MPIO Other FC SCSI Disk Drive
hdisk4  Available 00-08-02 MPIO Other FC SCSI Disk Drive
hdisk5  Available 00-08-02 MPIO Other FC SCSI Disk Drive
hdisk6  Available 00-08-02 MPIO Other DS4K Array Disk
hdisk7  Available 00-08-02 MPIO Other DS4K Array Disk
hdisk8  Available          Virtual SCSI Disk Drive         (DS4000 backed LUNs)
hdisk9  Available          Virtual SCSI Disk Drive
hdisk10 Available          Virtual SCSI Disk Drive
hdisk11 Available          Virtual SCSI Disk Drive
hdisk12 Available          Virtual SCSI Disk Drive

Figure 1-19. Listing Devices - lsdev Command (2 of 2) QV5721.0

Notes:


Listing Devices - lscfg Command (1 of 2)

Gather general, or specific, information about devices

# lscfg | grep fcs                     (search for Fibre Channel HBAs)
+ fcs2    U8203.E4A.0666BF2-V2-C5-T1   Virtual Fibre Channel Client Adapter
+ fcs0    U789C.001.DQDC383-P1-C4-T1   FC Adapter
+ fcs1    U789C.001.DQDC383-P1-C4-T2   FC Adapter

# lscfg -vl fcs0 | grep Net            (identify the WWPN of a port)
      Network Address.............10000000C96704E4

# lscfg -vl hdisk5                     (list detailed information about a device)
  hdisk5  U789C.001.DQDC383-P1-C4-T1-W500507630E01FC30-L4011406200000000
          MPIO Other FC SCSI Disk Drive
        Manufacturer................IBM
        Machine Type and Model......1750500    (a DS6000-backed device)
        . . .

Use the getconf command to get the size of an unassigned disk:

# getconf DISK_SIZE /dev/hdisk5
20480

Figure 1-20. Listing Devices - lscfg Command (1 of 2) QV5721.0

Notes:


Listing Devices - lscfg Command (2 of 2)

hdisk to LUN or volume definition

# lscfg -l hdisk0
hdisk0 U8203.E4A.0666BF2-V2-C2-T1-L8100000000000000 Virtual SCSI Disk Drive
       (the L... value is the LUN)

# lscfg -l hdisk1
hdisk1 U789C.001.DQDC383-P1-C4-T1-W500A0982880D5ECF-L0 MPIO Other FC SCSI Disk Drive
       (the W... value is the WWPN of the storage port)

# lscfg -l hdisk3
hdisk3 U789C.001.DQDC383-P1-C4-T1-W500507630E01FC30-L4010409100000000 MPIO Other FC SCSI Disk Drive
       (the L... value encodes the DS6000 volume number)

# lscfg -l hdisk6
hdisk6 U789C.001.DQDC383-P1-C4-T1-W200C00A0B80FA822-L0 MPIO Other DS4K Array Disk

Figure 1-21. Listing Devices - lscfg Command (2 of 2) QV5721.0

Notes:
You should be able to identify the LUN assignment from the storage subsystem to AIX
using the lscfg command. This visual shows four hdisk devices; hdisk0 is a virtual SCSI
disk provided by a VIO server, while the other three are provided by storage servers. One
of the hdisk devices (hdisk3) is provided by a DS6000 server, and this device shows a
LUN ID of 4. This is because the DS6000 groups individual logical drives (volumes) into
volume groups and assigns them together, so the LUN 4 address represents this volume
group.
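The same identifiers are also exposed as device attributes on many MPIO FC disks. A minimal sketch, assuming the hdisk3 values from the visual above; attribute names can vary by device driver:

# lsattr -El hdisk3 -a ww_name -a lun_id
ww_name 0x500507630e01fc30 FC World Wide Name     False
lun_id  0x4010409100000000 Logical Unit Number ID False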


Listing Device Attributes - lsattr Command

# lsattr -El fcs0                      (port)
bus_intr_lvl 273        Bus interrupt level False
bus_io_addr  0xff800    Bus I/O address     False
bus_mem_addr 0xfff7e000 Bus memory address  False
init_link    al         INIT Link flags     True
. . .

# lsattr -El fscsi0                    (initiator)
attach       switch    How this adapter is CONNECTED         False
dyntrk       yes       Dynamic Tracking of FC Devices        True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
. . .

# lsattr -El hdisk5                    (hdisk)
PCM           PCM/friend/fcpother Path Control Module              False
algorithm     fail_over           Algorithm                        True
clr_q         no                  Device CLEARS its Queue on error True
dist_err_pcnt 0                   Distributed Error Percentage     True
dist_tw_width 50                  Distributed Error Sample Time    True
hcheck_cmd    test_unit_rdy       Health Check Command             True
. . .

Figure 1-22. Listing Device Attributes - lsattr Command QV5721.0

Notes:
Using the lsattr command, you can identify the attributes of various resources under AIX.
This visual shows the three key elements of the storage stack within the Power server:
fcsX, fscsiX, and hdiskX. The final column reports whether the attribute can be modified
by the user (True).
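Attributes flagged True can be changed with the chdev command. A minimal sketch; the attribute and value shown are only an example, and -P defers the change until the device is next reconfigured (useful when the device is busy):

# chdev -l fscsi0 -a fc_err_recov=fast_fail -P
fscsi0 changed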


Listing LUN Paths - lspath Command

Identify I/O path(s) to SAN-backed hdisk(s)

# lspath                               (list all path definitions)
. . .
Enabled hdisk0  vscsi0
Enabled hdisk1  fscsi0
Enabled hdisk1  fscsi1
Enabled hdisk10 vscsi1
. . .

# lspath -l hdisk5 -F'status name path_id parent connection'
                                       (list information about a specific hdisk)
Enabled hdisk5 0 fscsi0 500507630e01fc30,4011406200000000
Enabled hdisk5 1 fscsi0 500507630e03fc30,4011406200000000
Enabled hdisk5 2 fscsi1 500507630e81fc30,4011406200000000
Enabled hdisk5 3 fscsi1 500507630e83fc30,4011406200000000

# lspath -s failed                     (list failed path(s))
Failed hdisk1 fscsi0
Failed hdisk2 fscsi0

Figure 1-23. Listing LUN Paths - lspath Command QV5721.0

Notes:
The lspath command will show the known paths between AIX and the storage server. If a
link has encountered a problem, it will be placed in the Failed state. The WWPN of the
storage subsystem port providing a disk can also be identified with the lspath
command.
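Once the underlying link problem has been corrected, a failed path does not always recover on its own. A minimal sketch of re-enabling it manually with chpath, assuming the failed hdisk1 path from the visual above:

# chpath -l hdisk1 -p fscsi0 -s enable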


Checkpoint
1. IBM offers a network storage solution via the ________
product family.
2. The storage servers in our lab support the following top
speeds:
DS4300 _______
DS6800 _______
n3400 _______

3. To find a LUN value, use the ____________ command.


4. A quick way to show device information is via the
____________ command.
5. The __________ command can be used to identify device
attributes.


Figure 1-24. Checkpoint QV5721.0

Notes:


Exercise 1: Introduction to Lab Environment


In this exercise, you will:
Access your assigned lab system
Use AIX commands to identify assigned LUN resources


Figure 1-25. Exercise 1: Introduction to Lab Environment QV5721.0

Notes:


Unit Summary
IBM offers a wide range of disk storage systems that work
with AIX Power Systems
A storage network can be set up via physical or logical boundaries
Standard AIX commands such as lscfg, lsdev, or lsattr
play a vital role in SAN device management


Figure 1-26. Unit Summary QV5721.0

Notes:

Unit 2. SAN Boot

What this unit is about


This unit describes the benefits of booting AIX from a SAN disk, and
the methods to configure this process.

What you should be able to do


After completing this unit, you should be able to:
Interpret AIX boot strategies using SAN
Identify resources required to complete a SAN boot
Configure a SAN attached hdisk as a boot device

How you will check your progress


Accountability:
Review Questions
Machine exercises

References

http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
http://www.ibm.com/developerworks/wikis/display/WikiPtype/AIXV53SANBoot
AIXV53SANBoot
http://www.wmduszyk.com/?p=3730&langswitch_lang=en
AIX and SAN Boot? Sure!
ftp://index.storsys.ibm.com/subsystem/aix/2.1.0.3/rd_sddpcm_aix.txt
SDDPCM Readme


Unit Objectives
After completing this unit, you should be able to:
Interpret AIX boot strategies using SAN
Identify resources required to complete a SAN boot
Configure a SAN attached hdisk as a boot device


Figure 2-1. Unit Objectives QV5721.0

Notes:


What is SAN Boot?

Instead of placing AIX on a locally attached disk, install to a SAN LUN

Take advantage of the features provided by the storage server you install AIX to

Figure 2-2. What is SAN Boot? QV5721.0

Notes:
In 2002, IBM introduced support for the ability to boot the pSeries and xSeries systems
directly from SAN-based storage. In most cases, the computer (xSeries and pSeries), the
SAN HBA, the SAN switches and the storage arrays must conform to the latest firmware
levels. Once the firmware is current, the process of booting from the SAN is quite simple.
Boot from SAN - otherwise known as remote boot or root boot - refers to the server
configuration where the server OS is installed on a LUN that doesn't reside inside the
server chassis. Boot from SAN utilizes drives located in a disk-storage subsystem that are
connected via an HBA located in the server chassis.
Of course, different SAN storage servers call virtual disks by various names. For
consistency, we will refer to this process as booting from a SAN disk for the remainder of
this discussion.


Benefits of Booting from SAN


Better I/O performance due to caching and striping across
multiple spindles

Availability with built in RAID protection

Ability to easily redeploy disk

Ability to FlashCopy the rootvg for backup


Figure 2-3. Benefits of Booting from SAN QV5721.0

Notes:
Better I/O performance due to caching and striping across multiple spindles
- Depending on the storage server, a LUN can be spread over many physical
volumes, something you would probably not do with a locally attached disk for
rootvg. Todays storage server provides high levels of cache to improve
performance even further.
Availability with built in RAID protection
- While it is possible to configure a storage server without some level of RAID
protection, it is highly unlikely a customer would choose to do this.
Ability to easily redeploy disk
- Because the disk is not attached to a specific server, you can redeploy the AIX
instance by re-assigning the LUN to another platform. Of course this will bring into
consideration various configuration issues, but if you have a basic AIX rootvg
configured, it can be easily moved.
Ability to FlashCopy the rootvg for backup
- If your storage server provides advanced backup functions, you can maintain a
backup strategy outside of the AIX tools.


Disadvantages of SAN Boot


SAN problems can cause loss of access to rootvg
Potential loss of system dump and diagnosis if loss of access to SAN is caused by a kernel bug
Potential issues with changes to multipath I/O code


Figure 2-4. Disadvantages of SAN Boot QV5721.0

Notes:
There is almost always an opposite effect to a process. While booting from a SAN disk may
make perfect sense, there could be issues to consider. Of course, measuring the validity of
these will bring to mind other questions. For example, if you are unable to access the SAN
it is true you will not be able to boot your system. However, if your data is also located on
the SAN you face even larger issues. Also, configuring a dump space is a consideration,
though when is the last time you needed to analyze a dump report?
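As a quick check of your dump configuration, the standard AIX sysdumpdev command shows where a dump would be written and estimates its size; the output below is illustrative only, and your device names will differ:

# sysdumpdev -l
primary              /dev/lg_dumplv
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
# sysdumpdev -e
0453-041 Estimated dump size in bytes: 434110464

If the primary dump device is itself a SAN logical volume, weigh whether a dump could be captured at all when the failure involves the SAN paths.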


SAN Boot Environment


Power server
FC HBA
Fabric devices
LAN
Storage server


Figure 2-5. SAN Boot Environment QV5721.0

Notes:
To successfully boot from a SAN disk, each of these should be considered. Are there
firmware considerations? Is the fabric zoned correctly to allow for your device?


How to Place AIX on a SAN Resource


Install AIX directly to a SAN volume during installation

Copy existing rootvg to a SAN volume

Copy a rootvg image from one SAN volume to another


Figure 2-6. How to Place AIX on a SAN Resource QV5721.0

Notes:
There is no requirement to install AIX directly to a SAN disk for this boot process to work. If
you already have AIX installed to a local disk, you can move rootvg to a SAN disk using the
alt_disk_copy command.


alt_disk_copy Command
Create a copy of current rootvg

# alt_disk_copy -d hdisk4
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
. . .
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk4 blv=hd5

# lspv
hdisk0          000db98104bd16c5          rootvg          active
hdisk1          000db98150f86937          altinst_rootvg
. . .


Figure 2-7. alt_disk_copy Command QV5721.0

Notes:
The visual above shows an example of performing the alt_disk_copy command to create
a copy of rootvg found on hdisk0 onto hdisk1. What is not shown in this example is the
process of confirming that hdisk1 is in fact the SAN disk you ultimately want to boot from.
This process requires you to reboot the system, which will make hdisk1 the new rootvg,
and hdisk0 old_rootvg.
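Before the reboot, it is worth confirming that the target disk really is the SAN LUN and that the boot list points at it. A minimal sketch, assuming hdisk1 is the copy target (the FC location code, containing a W... WWN, confirms a SAN-attached disk; the values shown are illustrative):

# lscfg -vl hdisk1 | head -1
  hdisk1   U789C.001.DQD3055-P1-C1-T1-W200C00A0B80FA822-L0   MPIO Other FC SCSI Disk Drive
# bootlist -m normal -o
hdisk1 blv=hd5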


SMS Menu Selections - Main Menu

Our starting point for the boot process
PowerPC Firmware
Version EL350_103
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options

-------------------------------------------------------------------------------
Navigation Keys:

X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:


Figure 2-8. SMS Menu Selections - Main Menu QV5721.0

Notes:
The next series of visuals deals with the process of selecting a SAN disk to boot from.
Our first menu is the primary SMS menu, and we will select option #5 Select Boot
Options.


SMS Menu Selections - Boot Device

Select option #1, Select Install/Boot Device
PowerPC Firmware
Version EL350_103
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup <OFF>

-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen     X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:


Figure 2-9. SMS Menu Selections - Boot Device QV5721.0

Notes:
Our next menu will select the boot device.


SMS Menu Selections - Select Device Type

While a SAN is a network, we will select Hard Drive
PowerPC Firmware
Version EL350_103
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices

-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen     X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:


Figure 2-10. SMS Menu Selections - Select Device Type QV5721.0

Notes:
From this menu we select option #5. A SAN disk will appear to a Power system the same
as a locally attached disk.


SMS Menu Selections - Select Media Type


You can choose option #3 SAN, or option #9 List All Devices
PowerPC Firmware
Version EL350_103
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Media Type
1. SCSI
2. SSA
3. SAN
4. SAS
5. SATA
6. USB
7. IDE
8. ISA
9. List All Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen     X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:


Figure 2-11. SMS Menu Selections - Select Media Type QV5721.0

Notes:
The quickest way to find a SAN disk to boot from is actually option #9, since it will look for
all bootable devices. For discussion purposes we will select option #3, so we can
see the additional SMS menu screens.


SMS Menu Selections - Select Media Adapter

Our example is simple (a single HBA); your environment may be more complex
PowerPC Firmware
Version EL350_103
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Media Adapter
1. U789C.001.DQD3055-P1-C1-T1    /pci@800000020000204/fibre-channel@0
2. U789C.001.DQD3055-P1-C1-T2    /pci@800000020000204/fibre-channel@0,1
3. List all devices

-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen     X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:


Figure 2-12. SMS Menu Selections - Select Media Adapter QV5721.0

Notes:
This menu shows available FC HBAs installed on your system. This example shows a
single dual-ported adapter in slot C1.


SMS Menu Selections - Select Device


What if more than one rootvg is found?
PowerPC Firmware
Version EL350_103
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device
Device  Current  Device
Number  Position Name
1.         -     SCSI 24 GB Harddisk, part=2 (AIX 7.1.0)
                 ( loc=U8203.E4A.10DB981-V3-C1-T1-L8100000000000000 )
2.         -     SCSI 9 GB FC Harddisk, part=2 (AIX 7.1.0)
                 ( loc=U789C.001.DQD3055-P1-C1-T1-W500a0982980d5ecf-L1000000000000 )
3.         1     SCSI 14 GB FC Harddisk, part=2 (AIX 7.1.0)
                 ( loc=U789C.001.DQD3055-P1-C1-T1-W200c00a0b80fa822-L0 )

-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen     X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:


Figure 2-13. SMS Menu Selections - Select Device QV5721.0

Notes:
Any disk that the Power system firmware detects to have a boot record will be identified in
this menu. In our example we see a virtual SCSI disk (device #1), and we also see two
other disks. These additional disks are attached via SAN, though there is no direct
indication of this on the screen. We know this because the location codes provided contain the
WWNs of two different SAN storage servers. If we have documented our SAN correctly, we
can find the correct device.


Boot Message
Identify where the boot device is
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
(... the AIX boot splash fills the screen with rows of "IBM" ...)
|
Elapsed time since release of system processors: 63293 mins 1 secs

-------------------------------------------------------------------------------
Welcome to AIX.
boot image timestamp: 20:00 02/03
The current time and date: 20:11:18 02/03/2011
processor count: 2; memory size: 1024MB; kernel size: 35064581
boot device: /pci@800000020000204/fibre-channel@0/disk@200c00a0b80fa822
-------------------------------------------------------------------------------


Figure 2-14. Boot Message QV5721.0

Notes:
The AIX splash screen will show you the source of AIX. In our example above we are
booting from a device at address
/pci@800000020000204/fibre-channel@0/disk@200c00a0b80fa822.
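Once AIX is up, you can cross-check the boot source from the command line; a small sketch, assuming the SAN LUN was configured as hdisk1:

# bootinfo -b
hdisk1
# bootlist -m normal -o
hdisk1 blv=hd5

bootinfo -b reports the disk the system last booted from, and bootlist -m normal -o shows the current normal-mode boot list.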


Checkpoint
1. In what two ways can AIX be installed on a SAN disk?
____________________________________________________
____________________________________________________

2. To select a SAN disk to boot from, utilize the SMS menu
system to select a boot device from either:
____________________________________________________
____________________________________________________

3. True/False: There can be more than one rootvg source
from SAN.


Figure 2-15. Checkpoint QV5721.0

Notes:


Exercise 2: SAN Boot


In this exercise, you will:
Create a rootvg image backup to a SAN LUN
Boot the LPAR from a new boot source


Figure 2-16. Exercise 2: SAN Boot QV5721.0

Notes:


Unit Summary
Booting AIX from a SAN disk provides much flexibility to a
customer
Ability to easily copy and deploy AIX instances
Multiple AIX instances can exist and be selected from at boot time
You can either install AIX directly to a SAN disk, or copy an existing
AIX image to a SAN disk


Figure 2-17. Unit Summary QV5721.0

Notes:

Unit 3. Virtual I/O Server

What this unit is about


This unit continues the discussion of virtual storage devices. Students
will learn how to create SAN-backed virtual disks. In addition, students
will learn about the implementation of virtual Fibre Channel adapters
and the concept of N-Port ID Virtualization (NPIV).

What you should be able to do


After completing this unit, you should be able to:
Discuss VIO server architecture as it relates to Fibre Channel
Describe how to configure virtual Fibre Channel adapters
Utilize VIO server commands to identify Fibre Channel resources
Differentiate between physical and virtual Fibre Channel resources

How you will check your progress


Accountability:
Review Questions
Machine exercises

References
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp
Virtual I/O Server product documentation from the IBM
Systems Hardware Information Center
http://techsupport.services.ibm.com/server/vios
Virtual I/O Server Support
http://www.ibm.com/developerworks/systems/articles/DLPARchecklist.html
Dynamic LPAR tips and checklists
SG24-7940 PowerVM Virtualization on IBM System p Introduction
and Configuration


SG24-7590 PowerVM Virtualization on IBM System p Managing and Monitoring


Unit Objectives
After completing this unit, you should be able to:
Discuss VIO server architecture as it relates to Fibre Channel
Describe how to configure virtual Fibre Channel adapters
Utilize VIO server commands to identify Fibre Channel
resources
Differentiate between physical and virtual Fibre Channel
resources


Figure 3-1. Unit Objectives QV5721.0

Notes:



3.1. Virtual I/O Server Overview


Topic 1: Virtual I/O Server Overview


After completing this topic, you should be able to:
Discuss VIO server architecture as it relates to Fibre Channel
Utilize VIO server commands to identify Fibre Channel
resources
Create a virtual disk using an fcs device under a VIO server


Figure 3-2. Topic 1: Virtual I/O Server Overview QV5721.0

Notes:



Logical Partitioning Review


Logical Partitions enable multiple OS instances
Physical resources, such as processors, are virtualized

(Diagram: AIX, IBM i, Linux, and VIOS partitions, each with virtual processors (VP), dispatched by the Hypervisor onto a shared processor pool of physical cores in the hardware)


Figure 3-3. Logical Partitioning Review QV5721.0

Notes:
IBM Power servers utilize a hypervisor to create virtual resources to various operating
systems. As the visual example above shows, the hypervisor enables core functions, like
processing, to be virtualized.
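From any AIX LPAR you can see this virtualization at work with the standard lparstat command; the values below are illustrative only:

# lparstat -i | head -6
Node Name                       : lpar1
Partition Name                  : LPAR1
Partition Number                : 2
Type                            : Shared-SMT
Mode                            : Uncapped
Entitled Capacity               : 0.50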


Virtual I/O Server - Physical Paths


Initial method of providing virtual storage to LPARs required
virtual SCSI devices

(Diagram: AIX, IBM i, and Linux client partitions with virtual SCSI adapters (VSA) connected through the Hypervisor to server VSAs in the VIOS, which owns the physical SCSI / Fibre Channel HBA in the hardware)


Figure 3-4. Virtual I/O Server - Physical Paths QV5721.0

Notes:
To virtualize storage resources, a Virtual I/O (VIO) server is utilized. This specialized
operating system's sole purpose is to provide a virtualization path for storage and
networking within a Power server.
The visual above shows an example of configuring virtual SCSI paths from the VIO server
to three different logical partitions (LPARs). In this example, we have assigned either a
SCSI or Fibre Channel HBA to the VIO server, which then owns the physical disks. These
physical devices can then be assigned either whole or in part as virtual resources to LPARs.


Virtual I/O Server - Virtual Paths


With addition of virtual Fibre Channel devices, we can now
provide additional virtual storage

(Diagram: AIX, IBM i, and Linux partitions with virtual Fibre Channel adapters (VFCA) mapped through the Hypervisor and the VIOS to virtual ports Pa, Pb, and Pc on an NPIV-capable Fibre Channel HBA, which connects to an NPIV-capable SAN switch)


Figure 3-5. Virtual I/O Server - Virtual Paths QV5721.0

Notes:
Another method of providing storage resources via the VIO server is through virtual Fibre
Channel (vFC) devices. This method moves the storage device ownership to the LPAR by
assigning a virtual HBA to the LPAR, instead of sharing a disk owned by the VIO server.
We will discuss virtual Fibre Channel in greater detail in the next topic.


Accessing Your VIO Server


Utilize GUI to access your VIO server

Access CLI for text-based operations

login as: padmin
padmin@9.47.87.85's password:
Last unsuccessful login: Mon Mar 14 13:49:23 PDT 2011 on /dev/vty0
Last login: Mon Mar 14 13:55:24 PDT 2011 on /dev/vty0
$ lsdev | grep fcs
fcs0    Available   8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1    Available   8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)


Figure 3-6. Accessing Your VIO Server QV5721.0

Notes:
You can manage your VIO server from either a GUI, or CLI. Each method provides its own
advantages and disadvantages.In the case of the CLI, you have the VIO server interface,
or you can access an AIX shell via the oem_setup_env command.
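For example, dropping from the restricted padmin shell to the AIX root shell and back looks like this:

$ oem_setup_env
# cfgmgr            <- standard AIX commands, such as cfgmgr, are now available
# exit
$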


Discovering Devices
From the VIO server prompt, use the cfgdev command
Similar to the cfgmgr command under AIX

$ cfgdev

Some error messages may contain invalid information
for the Virtual I/O Server environment.

Method error (/etc/methods/cfgpcmui -l dac0 ):
        0514-082 The requested function could only be performed for some
        of the specified paths.
. . .


Figure 3-7. Discovering Devices QV5721.0

Notes:
Before you can use a Fibre Channel resource, you need to verify it is configured. If there is
no fcs device, you need to discover the resource. This can be done with the cfgdev
command. You can also access the AIX shell to run the cfgmgr command.
While this process happens when a system is booted, with many generations of Fibre
Channel HBAs the laser is deactivated if no devices are found downstream. Running the
cfgdev command will re-activate the laser, and attempt to re-acquire any storage servers
on the network.


Identify Installed Physical Devices


The first step is to identify what physical devices are
configured and available to your VIO server LPAR

Fibre Channel devices are named the same as under AIX

$ lsdev -type adapter
name    status      description
. . .
fcs0    Available   8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)   <- device currently active
fcs1    Available   8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs2    Defined     4Gb FC PCI Express Adapter (df1000fe)                     <- device previously known to this system
fcs3    Defined     4Gb FC PCI Express Adapter (df1000fe)
. . .


Figure 3-8. Identify Installed Physical Devices QV5721.0

Notes:
The lsdev command, similar to its AIX counterpart, will provide you a list of known devices. If a
device shows as Available, you can utilize the device. If a device shows as Defined, it
was known once, but is not currently responding to requests.
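If an adapter is Defined only because its hardware was permanently removed, you can clean up the stale definition; a hedged sketch (only do this when you are sure the device is gone for good, since -recursive also deletes child devices, and the output shown is illustrative):

$ rmdev -dev fcs2 -recursive
fcs2 deleted
$ cfgdev

Running cfgdev afterwards rediscovers anything that is actually present.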


Identify Installed Disk Resources


Specify disk as part of lsdev command to gather physical
and virtual disk information
$ lsdev -type disk
name      status      description
hdisk0    Available   SAS Disk Drive
. . .
hdisk7    Available   MPIO Other FC SCSI Disk Drive
name      status      description
vtscsi0   Available   Virtual Target Device - Disk

$ lsdev -dev hdisk7 -vpd
hdisk7    U789C.001.DQDC383-P1-C1-T1-W200D00A0B80FA822-L50000000000000   <- LUN
          MPIO Other DS4K Array Disk

    Manufacturer................IBM
    Machine Type and Model......1722-600                                 <- machine type
    ROS Level and ID............30393134
    Serial Number...............
    Device Specific.(Z0)........0000053245004032


Figure 3-9. Identify Installed Disk Resources QV5721.0

Notes:
Using the lsdev command can also be used to show known storage devices. The output
will show both physical and virtual devices.
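To separate the two, the -virtual flag can be combined with -type, just as the adapter listing later in this topic does; for instance:

$ lsdev -virtual -type disk
name      status      description
vtscsi0   Available   Virtual Target Device - Disk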


Add Virtual SCSI Channel Adapter (1 of 3)


To attach a Fibre Channel device owned by the VIO server,
create a virtual SCSI (vSCSI) device on both the VIO server
LPAR and the AIX LPAR

Using the GUI

Using the CLI from the HMC


$ chsyscfg -r prof -m <MANAGED SYSTEM> -i "name=normal, lpar_name=LPAR1,
  virtual_scsi_adapters=<DRC INDEX>"


Figure 3-10. Add Virtual SCSI Channel Adapter (1 of 3) QV5721.0

Notes:


Add Virtual SCSI Channel Adapter (2 of 3)


Add the virtual SCSI adapters in a client/server pair (a server
adapter on the VIO server and a client adapter on the LPAR)


Figure 3-11. Add Virtual SCSI Channel Adapter (2 of 3) QV5721.0

Notes:


Add Virtual SCSI Channel Adapter (3 of 3)


Resulting client SCSI channel adapter configuration

Use the mkvdev command on the VIO server to create the
device, then lsdev -c adapter on the client LPAR

$ mkvdev -f -vdev hdiskn -vadapter vhostn


Figure 3-12. Add Virtual SCSI Channel Adapter (3 of 3) QV5721.0

Notes:
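As a worked sketch, suppose hdisk7 is the SAN LUN seen earlier and vhost1 is the server adapter created for your client LPAR (both names are examples, as is the output); the optional -dev flag gives the virtual target device a recognizable name:

$ mkvdev -vdev hdisk7 -vadapter vhost1 -dev lpar1_vd0
lpar1_vd0 Available

On the client LPAR, run cfgmgr and the new disk appears as an ordinary hdisk:

# cfgmgr
# lsdev -Cc disk
hdisk0 Available  Virtual SCSI Disk Drive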


Identify Installed Virtual Devices


Once devices are configured to your VIO server, you can
identify resources names and status via the lsdev command

$ lsdev -virtual -type adapter
name       status      description
vasi0      Available   Virtual Asynchronous Services Interface (VASI)
vbsd0      Available   Virtual Block Storage Device (VBSD)
vfchost0   Defined     Virtual FC Server Adapter
vfchost1   Available   Virtual FC Server Adapter
vhost0     Available   Virtual SCSI Server Adapter
vhost1     Available   Virtual SCSI Server Adapter
vsa0       Available   LPAR Virtual Serial Adapter


Figure 3-13. Identify Installed Virtual Devices QV5721.0

Notes:


Identify Device Mapping - lsmap Command


$ lsmap -all -type disk -field vtd backing      <- list all virtual disks and their backing devices
VTD               vtscsi0
Backing device    hdisk1

VTD               vtscsi1
Backing device    hdisk2
. . .

$ lsmap -vadapter vhost1                        <- list backing devices for a given virtual path
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ----------------
vhost1          U8203.E4A.0666BF2-V1-C12                     0x00000002

VTD               vtscsi2
Status            Available
LUN               0x8100000000000000
Backing device    hdisk10
Physloc           U789C.001.DQDC383-P1-C1-T1-W500A0982980D5ECF-L10000000000000
Mirrored          false
. . .


Figure 3-14. Identify Device Mapping - lsmap Command QV5721.0

Notes:
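The reverse operation, removing a mapping, is done with the rmvdev command; a brief sketch, assuming vtscsi1 is a virtual target device you no longer need (output illustrative):

$ rmvdev -vtd vtscsi1
vtscsi1 deleted

The backing hdisk itself is untouched and can be remapped elsewhere.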


Identify WWN Information


To assign a LUN to your VIO server, you will need the WWPN

$ lsdev -dev fcs0 -vpd | grep Network
Network Address.............10000000C9998B2C      <- WWPN for assigning to the storage server

Be aware of zoning within the SAN
Some software requires zoning to be in place, so even though your
device may appear to be working correctly, it could be outside of the zone


Figure 3-15. Identify WWN Information QV5721.0

Notes:
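The same information is available for an FC adapter on an AIX LPAR with the standard lscfg command; the address shown is illustrative:

# lscfg -vl fcs0 | grep Network
        Network Address.............10000000C9998B2C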


3.2. Node Port ID Virtualization


Topic 2: Node Port ID Virtualization (NPIV)


After completing this topic, you should be able to:
Define the concepts of virtual Fibre Channel
Describe how to configure virtual Fibre Channel adapters


Figure 3-16. Topic 2: Node Port ID Virtualization (NPIV) QV5721.0

Notes:


Virtual Fibre Channel Adapter (1 of 3)


Virtual Fibre Channel adapters support the use of N-Port ID Virtualization (NPIV)
NPIV is a standardized method for virtualizing a physical Fibre Channel port
Requires support in the physical Fibre Channel adapter and SAN switch
Enables LPARs to have virtual Fibre Channel Host Bus Adapters (HBAs), each with a dedicated worldwide port name (WWPN)
This gives each virtual Fibre Channel HBA a unique SAN identity similar to that of a dedicated physical HBA


Figure 3-17. Virtual Fibre Channel Adapter (1 of 3) QV5721.0

Notes:
This industry standard method allows multiple initiators to share a single physical port,
easing hardware requirements for Storage Area Networks. In the PowerVM case, the VIOS
partition will have the actual physical connection.
The host bus adapter (HBA) is the name used for the Fibre Channel adapter.
A worldwide port name (WWPN) is the unique identifier for a port on a Fibre Channel
fabric. Think of it as being similar to a Media Access Control (MAC) address for Ethernet
ports.


Virtual Fibre Channel Adapter (2 of 3)


With virtual SCSI, the storage is virtualized
The client operating system cannot distinguish between different types
of backing storage (SAN, SAS, etc.) because the VIOS uses SCSI
emulation
With NPIV, the Fibre Channel Port (FCP) is virtualized
The VIO server serving NPIV acts as a pass-through, providing an FCP
connection from the client to the SAN
The result is that the client partition operating system has a unique
identity to the SAN and storage can be zoned to it, just as if it had a
dedicated physical HBA
The client can see storage zoned for its WWPN and recognize
different types of physical storage
Like virtual SCSI, virtual Fibre Channel is implemented with a
physical adapter and a virtual server adapter on the VIOS,
and a virtual client adapter on the client

Figure 3-18. Virtual Fibre Channel Adapter (2 of 3) QV5721.0

Notes:
With NPIV, the VIOS's role is fundamentally different. The VIOS facilitates adapter sharing
only; there is no device level abstraction or emulation. Rather than a storage virtualizer, the
VIOS serving NPIV is a pass-through, providing a Fibre Channel Port (FCP) connection
from the client to the SAN.
If you use VSCSI disks, you need to use the VIOS to create each VSCSI VTD and map
each to the vhost adapter for each client. With NPIV, the SAN can zone storage to the client
LPAR's WWPN. You do not need to create the individual VTDs on the VIOS for each virtual
disk. This reduces the amount of VIOS management needed.
Just like virtual SCSI, when configuring the virtual Fibre Channel adapters you specify the
VIOS partition name and virtual adapter ID in the client configuration, and you can specify
the client information when configuring the VIOS server adapter.


Virtual Fibre Channel Adapter (3 of 3)


Requires:
POWER6 (or above) processor-based servers
The physical Fibre Channel adapter and SAN switch must support
NPIV
VIOS 2.1.0.10 (fix pack 20.1) or later
Update to VIOS V2.1, then install this fix pack
HMC V7R3.4.0 (Service Pack 0) or later
Supported client operating systems:
AIX version 6.1 Technology Level 2, or later
AIX 5.3 Technology Level 9
IBM i version 6.1.1, or later
SUSE Linux Enterprise Server (SLES) 11, or later
NPIV enhancements in VIOS 2.1.2.0
Support for dynamic remapping of server virtual Fibre Channel (vfc#)
adapter to physical FC port without loss of connectivity between the
VIOS and the client


Figure 3-19. Virtual Fibre Channel Adapter (3 of 3) QV5721.0

Notes:
Not all Fibre Channel adapters and SAN switches support NPIV. NPIV-capable switches
present the virtual WWPN to other SAN switches and devices as if they represent physical
FC adapter endpoints. Additional SAN switches and disk/tape devices don't need to be
NPIV aware.
The dynamic remapping feature is for ease of maintenance. For example, if you need to
replace an adapter, you can dynamically remap the virtual server adapter(s) to another
physical adapter without interrupting the client I/O operations.
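As a sketch of such a maintenance action, assuming vfchost0 is currently mapped to the failing port fcs0 and fcs1 is the replacement port (names and output illustrative):

$ vfcmap -vadapter vfchost0 -fcp fcs1
vfchost0 changed

The client's virtual adapter keeps its WWPNs, so no SAN rezoning is needed.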


Virtual I/O Server - Virtual Paths - Review


Configuration of virtual devices on both VIO server and LPAR

(Diagram: AIX, IBM i, and Linux partitions with virtual Fibre Channel adapters (VFCA) mapped through the Hypervisor and the VIOS to virtual ports Pa, Pb, and Pc on an NPIV-capable Fibre Channel HBA, which connects to an NPIV-capable SAN switch)


Figure 3-20. Virtual I/O Server - Virtual Paths - Review QV5721.0

Notes:


Add Virtual Fibre Channel Adapter (1 of 2)


Add the virtual Fibre Channel adapters in a client/server pair,
just like virtual SCSI adapters


Figure 3-21. Add Virtual Fibre Channel Adapter (1 of 2) QV5721.0

Notes:
The visual above shows the HMC GUI to create the virtual Fibre Channel adapter. Just like
virtual SCSI adapters, it can be created in the Create LPAR wizard, in a partition's profile,
or dynamically.
Notice that just like for VSCSI adapters, the client must specify the VIOS partition name
and adapter ID. The VIOS virtual adapter will also point to the client virtual adapter. There
is always a one-to-one relationship between client adapters and server adapters.
Multiple virtual server adapters can be created for a single physical Fibre Channel port.


Add Virtual Fibre Channel Adapter (2 of 2)


Resulting client Fibre Channel adapter configuration

Adapter listing from lsdev -c adapter in the client LPAR


Figure 3-22. Add Virtual Fibre Channel Adapter (2 of 2) QV5721.0

Notes:
View added virtual Fibre Channel adapter
The virtual Fibre Channel adapter uses a virtual adapter slot like other virtual adapters and
its properties can be seen in the partition properties or the partition profile properties.
Notice the fcs device naming convention in the lsdev output.


Virtual Fibre Channel DLPAR Operations


Each time a virtual Fibre Channel adapter is configured, the HMC obtains a new, non-reusable pair of WWPNs from the Hypervisor
WWPNs obtained by DLPAR add operations for virtual Fibre Channel adapters that are not saved to a profile will be discarded if the partition is shut down or if the adapters are removed
For continued access to the storage, save the partition configuration in a new profile so that the WWPNs are saved


Figure 3-23. Virtual Fibre Channel DLPAR Operations QV5721.0

Notes:
WWPNs are generated based on the range of names available for use based on a prefix in
the vital product data on the managed system. This 6 digit prefix comes with the purchase
of the managed system and includes the ability to use 32,000 pairs of WWPNs. (If you run
out of WWPNs, you must obtain an activation code for an additional set of 32,000 pairs.)
When adding a virtual Fibre Channel adapter, a pair of WWPNs is generated and assigned.
If you add the adapter with a DLPAR operation and later you shut down the partition, the
WWPN pair that was assigned cannot be reused. A new WWPN pair will be assigned if you add
the virtual Fibre Channel adapter back. To avoid wasting WWPNs, use the Save Current
Configuration task to save the current configuration after a DLPAR operation in which a
virtual Fibre Channel adapter is added. This creates a new profile and you will have to give
it a unique name. Later, you can rename the profiles if desired or delete profiles that are no
longer needed.
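From the client side, you can confirm which WWPN a virtual adapter is presenting with the standard AIX fcstat command; the values shown are illustrative:

# fcstat fcs0 | grep -i world
World Wide Node Name: 0xC05076012AF00000
World Wide Port Name: 0xC05076012AF00001

This is the WWPN the SAN administrator must zone, and the one that would be wasted if the profile is not saved.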


Map Virtual Server Fibre Channel Adapter


List the available NPIV capable ports with the lsnports
command
$ lsnports
name physloc fabric tports aports swwpns awwpns
fcs0 U789D.001.DQDMLMP-P1-C1-T1 1 64 64 2048 2047
fcs1 U789D.001.DQDMLMP-P1-C1-T2 1 64 64 2048 2047

Map a virtual FC server adapter to a port on the physical HBA with the vfcmap command
Example where vfchost0 is the virtual server adapter name and fcs0
is the physical Fibre Channel port:
$ vfcmap -vadapter vfchost0 -fcp fcs0

Or, use the Configuration -> Virtual Resources -> Virtual Storage Management
task to map an FC server adapter to an FC port in the HMC GUI

Figure 3-24. Map Virtual Server FibreChannel Adapter QV5721.0

Notes:
The lsnports command will list NPIV-capable ports on your system. The output
columns have the following meanings:
- fabric shows whether the port has fabric support (1)
- tports is total number of NPIV ports
- aports is number of available NPIV ports
- swwpns is total number of target worldwide port names supported
- awwpns is the number of target worldwide port names available
If no parameter is specified after the -fcp flag in the vfcmap command, the command
un-maps the virtual Fibre Channel adapter from the physical Fibre Channel port. For
example: $ vfcmap -vadapter vfchost0 -fcp


View Fibre Channel Mapping


Use the lsmap -all -npiv command for NPIV mapping information
$ lsmap -all -npiv
Name      Physloc                    ClntID ClntName    ClntOS
========= ========================== ====== =========== ======
vfchost1  U8203.E4A.10D4461-V2-C14   15     LPAR1       AIX

Status:LOGGED_IN
FC name:fcs0              FC loc code:U789C.001.DQD1760-P1-C2-T1
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs1      VFC client DRC:U8203.E4A.10D4461-V15-C6-T1


Figure 3-25. View Fibre Channel Mapping QV5721.0

Notes:
To view NPIV virtual mapping, use the lsmap command.
The -npiv flag to lsmap is used to display the server binding information between the
virtual Fibre Channel and the physical Fibre Channel adapter. It is also used to display
client adapter attributes that are being sent to the server adapter.
The VFC client DRC field is the virtual Fibre Channel client's Dynamic Reconfiguration
Connection (DRC) identification.


Checkpoint
1. The VIO server command ____________ will configure
attached devices and make them available.
2. To list all configured disks under the VIO server, use the
command ____________.
3. True/False: To use NPIV, you must use an 8 Gb Fibre
Channel HBA assigned to the VIO server.
4. When you use NPIV, the virtual Fibre Channel HBA is
provided a unique ____________ from the Hypervisor.
5. To view available NPIV ports, use the ____________
command.


Figure 3-26. Checkpoint QV5721.0

Notes:


Exercise 3: Virtual I/O Server


In this exercise, you will:
Identify installed and configured Fibre Channel resources on a VIO
server
Configure a vSCSI device using a LUN as a backing device
Make the new vSCSI device available to your AIX LPAR

Optional exercise steps (based on equipment availability)


Create a vFCP device


Figure 3-27. Exercise 3: Virtual I/O Server QV5721.0

Notes:


Unit Summary
The VIO server provides virtual resources, including Fibre
Channel devices, to logical partitions
The configuration of Fibre Channel devices within a VIO
server is no different than under AIX
You can use VIO server commands, or AIX commands to
configure and manipulate Fibre Channel devices
Node Port ID Virtualization (NPIV) enables virtual Fibre
Channel adapters for logical partitions


Figure 3-28. Unit Summary QV5721.0

Notes:

Unit 4. BladeCenter SAN

What this unit is about


This unit describes the IBM BladeCenter environment, specifically how
Fibre Channel devices are utilized within this architecture.

What you should be able to do


After completing this unit, you should be able to:
Describe IBM BladeCenter architecture
Define IBM BladeCenter SAN I/O structure
Navigate Integrated Virtualization Manager
- Identify physical and virtual fibre channel devices

How you will check your progress


Accountability:
Review Questions
Machine exercises

References

http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
http://www-03.ibm.com/systems/bladecenter/hardware/servers/
BladeCenter Servers
http://www-03.ibm.com/systems/bladecenter/hardware/openfabric/fibrechannel.html
BladeCenter Open Fabric


Unit Objectives
After completing this unit, you should be able to:
Describe IBM BladeCenter architecture
Define IBM BladeCenter SAN I/O structure
Navigate Integrated Virtualization Manager
Identify physical and virtual Fibre Channel devices


Figure 4-1. Unit Objectives QV5721.0

Notes:

4.1. BladeCenter Environment


Topic 1: BladeCenter Environment


After completing this topic, you should be able to:
Describe IBM BladeCenter architecture
Define IBM BladeCenter SAN I/O structure


Figure 4-2. Topic 1: BladeCenter Environment QV5721.0

Notes:


BladeCenter Environment
BladeCenter S
BladeCenter H
BladeCenter E
BladeCenter T
BladeCenter HT


Figure 4-3. BladeCenter Environment QV5721.0

Notes:
Beginning with the original BladeCenter chassis (the BC-E), the IBM BladeCenter product
family has offered customers a small-footprint solution that supports multiple processor and
I/O architectures, as well as multiple operating systems.
- BladeCenter S (BC-S) - Small office solution. Supports up to 12 locally attached disk
drives, enabling a small form factor server farm.
- BladeCenter E (BC-E) - First generation Blade Server platform.
- BladeCenter T (BC-T) - BC-E chassis modified for Telecomm industry.
- BladeCenter H (BC-H) - Second generation chassis. Supports all Blade Servers.
- BladeCenter HT (BC-HT) - BC-H chassis modified for Telecomm industry.


BladeCenter Architecture Components


BladeServer
Blade, Expansion card

BladeCenter
Chassis, I/O Module

(Diagram: a Blade with its Expansion card, installed in a BladeCenter chassis with an I/O Module, connecting to external networks of switches, hosts, routers, gateways, etc.)


Figure 4-4. BladeCenter Architecture Components QV5721.0

Notes:
To access a Fibre Channel network from a Power Blade, you will need two key
components. The first is an expansion card that is installed into the Power Blade. The
second component is an I/O Module that is installed in the BladeCenter chassis. Once you
have these in place, you can configure an I/O path.


Power Blades - Current Offerings


Power 6
JS12
JS23
JS43

Power 7
PS700
PS701
PS702


Figure 4-5. Power Blades - Current Offerings QV5721.0

Notes:
The Power Blades available at the time of this writing are shown in this visual. They are
offered in both Power 6 and Power 7 processor models, in configurations ranging from 2 to
32 cores. The visual above shows both single and dual slot models (32-core models require 2 slots).


Fibre Channel and IBM BladeCenter


Expansion cards
4 Gb
Emulex (CIOv)
QLogic (CIOv)
8 Gb
QLogic (CIOv)
QLogic (CFFh) Combo card

I/O Modules
Switch
Brocade 8 Gb 10 and 20 port
Cisco 4 Gb 10 and 20 port
QLogic 8 Gb 20 port
Pass-thru
QLogic


Figure 4-6. Fibre Channel and IBM BladeCenter QV5721.0

Notes:
You can order a Power Blade with either a 4 Gb or 8 Gb expansion card. Expansion cards
are available in two different types, and each type activates a different type of I/O Module.
Fibre Channel I/O Modules are either a switch module (which can connect directly to a
host or to other switches) or a pass-thru module (which can connect directly to a host in a
point-to-point topology, or to a switch to provide a fabric topology).
To get MPIO functionality to a Blade server you need two I/O Modules.
Note: The CIOv form factor connects to primary I/O Modules, and the CFFh form factor
connects to high-speed I/O Modules.
For more information, refer to:
http://www-03.ibm.com/systems/bladecenter/hardware/openfabric/fibrechannel.html


Accessing IBM BladeCenter


BladeCenter operations are performed via the Management
Module (MM or AMM)
Access is gained via GUI or CLI


Figure 4-7. Accessing IBM BladeCenter QV5721.0

Notes:
This visual depicts the GUI logon screen to an Advanced Management Module (AMM) of a
BladeCenter chassis.


Monitors - Hardware Vital Product Data (VPD)


Vital Product Data (VPD) provides important information
about your installed resources
Inventory: including Part Number, FRU number, UUID
Ports: WWN


Figure 4-8. Monitors - Hardware Vital Product Data (VPD) QV5721.0

Notes:
You can gather Vital Product Data (VPD) about your Power blade via the Monitors ->
Hardware VPD menu option.


Blade Configuration - Open Fabric Manager


Ability to pre-configure over 11,000 LAN and SAN
connections once for each blade server
Supports up to 256 chassis and up to 3,584 blade servers
from a single Advanced Management Module
Works with all Ethernet, Fibre Channel and SAS switch
modules; across all chassis and most x86 and Power
processor-based blade servers
Web-based user interface based on IBM Systems Director 6.2
for easy server and switch set-up, deployment and
management
Provides automated I/O failover to standby blades


Figure 4-9. Blade Configuration - Open Fabric Manager QV5721.0

Notes:
BladeCenter Open Fabric Manager is part of a comprehensive management solution for
IBM BladeCenter. It simplifies blade administration and provides SAN/LAN management,
including virtualized I/O: the simplification of I/O addressing and failover. BladeCenter
Open Fabric Manager is the management tool that makes it simple to get the most from
your I/O. The suite runs on the Advanced Management Module, so you get a single
interface for both server administration and SAN/LAN administration. Open Fabric
Manager is suitable for environments from SMB to enterprise. It works with all BladeCenter
Ethernet and Fibre Channel switches and fabrics (Cisco, Nortel, Brocade, and QLogic) and
can help reduce the time it takes you to deploy servers, data, and storage to minutes or
hours instead of days or weeks.


I/O Module Tasks - Admin/Power/Restart


I/O Modules provide data paths from the Blade to outside of the chassis
Each module is managed independently


Figure 4-10. I/O Module Tasks - Admin/Power/Restart QV5721.0

Notes:
You can deactivate external ports on an I/O Module via the Advanced Setup menu option.
This is an easy trap to fall into, so when troubleshooting we always make sure we have not
mistakenly turned the external ports off!
Note: The menu option deals with external ports. Remember that the I/O Module also
has internal ports, since it connects to any expansion card installed in up to 14 Blade
Servers. There is no option to turn those ports off.


I/O Module Tasks - Configuration


Advanced configuration allows for configuration options such
as direct IP access


Figure 4-11. I/O Module Tasks - Configuration QV5721.0

Notes:
You can access an I/O Module for maintenance or configuration either via the AMM
interface, or in some cases directly. In the case of switch modules, you can configure an IP
address for the module, and then access it directly.


I/O Module Tasks - Firmware Update


Some I/O Modules have a firmware component
Firmware levels can play a role in device operation


Figure 4-12. I/O Module Tasks - Firmware Update QV5721.0

Notes:
If firmware can be updated on an I/O Module, you may need to perform this process
occasionally. Some Fibre Channel modules will require firmware updates.


4.2. Integrated Virtualization Manager


Topic 2: Integrated Virtualization Manager


After completing this topic, you should be able to:
Navigate Integrated Virtualization Manager
Identify physical and virtual Fibre Channel devices


Figure 4-13. Topic 2: Integrated Virtualization Manager QV5721.0

Notes:


Operating System Options


AIX
VIO Server
Integrated Virtualization Manager
IBM i
Linux


Figure 4-14. Operating System Options QV5721.0

Notes:
The same operating systems you are familiar with on stand-alone Power Servers also work
on Power Blades.


Integrated Virtualization Manager


To leverage virtualization capabilities of a Power server, the
VIO Server code is loaded directly to the Blade
The result is Integrated Virtualization Manager (IVM)


Figure 4-15. Integrated Virtualization Manager QV5721.0

Notes:
Blade servers do not attach to an HMC platform. Since we still need a way to leverage the
virtualization capability of the server, we load the VIO Server operating system directly on
the Power Blade (VIO Server can be loaded directly on a stand-alone Power server as
well, though this is common only on the smallest servers). Once loaded, you can configure
LPARs and load AIX, Linux, or IBM i.


View/Modify Partitions - Physical Adapters


Partition properties will show physical device attributes


Figure 4-16. View/Modify Partitions - Physical Adapters QV5721.0

Notes:
In this visual we see the properties of a physical Fibre Channel adapter. This is actually the
expansion card we looked at in the previous topic. The I/O Module is downstream from the
Blade Server, so it does not appear in this configuration.


View/Modify Partitions - Storage - Virtual FC


Virtual Fibre Channel properties will show WWN information,
and connection path to physical adapter


Figure 4-17. View/Modify Partitions - Storage - Virtual FC QV5721.0

Notes:
If you configure NPIV, and assign a virtual fcs device to a logical partition, you can view the
properties from the Storage tab. In this visual, you can see there are two WWPN groups,
though only one is assigned to an physical adapter.


View Virtual Fibre Channel


Outside of partition properties, you can also gather
configuration data from the menu option View Virtual Fibre
Channel


Figure 4-18. View Virtual Fibre Channel QV5721.0

Notes:
Our previous visual showed the properties of a specific LPAR, and the virtual WWPNs
assigned. You can gather this information via another main menu option. In this visual we
see the View Virtual Fibre Channel option, and the sub-menu it provides.
Note: There is no menu option to see the LUN devices that are configured to these
WWPNs. To gather that information you will need to access a terminal window and use
the appropriate CLI tool, as sketched below.
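For example, from the VIO Server restricted shell, the lsmap command can list the NPIV
mappings of the virtual Fibre Channel adapters. This is a minimal sketch; the device names
and location codes shown are illustrative and the output is abbreviated:

$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName  ClntOS
------------- ---------------------------------- ------ --------- -------
vfchost0      U8406.70Y.06CAEBA-V1-C5                 2 lpar1     AIX

Status:LOGGED_IN
FC name:fcs0                   FC loc code:U78A5.001.WIHA986-P1-C11-T1
...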


Physical Device Information - AIX CLI


The same information is provided as with a stand-alone
Power server
# lscfg -vl fcs0
fcs0   U78A5.001.WIHA986-P1-C11-T1  8Gb PCIe FC Blade Expansion Card (df1000f1df1024f1)

      Part Number.................46M6142
      Serial Number...............11S46M6142YK50200338BS
      EC Level....................A
      Customer Card ID Number.....2B3A
      Manufacturer................001B
      FRU Number..................46M6138
      Device Specific.(ZM)........3
      Network Address.............10000000C9923334
      . . .
      Hardware Location Code......U78A5.001.WIHA986-P1-C11-T1


Figure 4-19. Physical Device Information - AIX CLI QV5721.0

Notes:
If you have loaded AIX to your Power Blade, you will not need to learn new commands to
gather information. In this visual we see the standard lscfg command, and output you
should be familiar with.
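To see which fcs devices exist before probing one with lscfg, you can first list the
adapters. A quick sketch; the location codes and description strings will vary with your
card:

# lsdev -Cc adapter | grep fcs
fcs0 Available 01-00 8Gb PCIe FC Blade Expansion Card
fcs1 Available 01-01 8Gb PCIe FC Blade Expansion Card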


Virtual Device Information - AIX CLI


Less information is provided than for a physical resource, though
the critical information, the WWN, is still there
# lscfg -vl fcs0
fcs0   U8406.70Y.06CAEBA-V2-C5-T1  Virtual Fibre Channel Client Adapter

      Network Address.............C05076037D160000
      ROS Level and ID............
      . . .
      Device Specific.(Z4)........
      Device Specific.(Z5)........
      Device Specific.(Z6)........
      Device Specific.(Z7)........
      Device Specific.(Z8)........C05076037D160000
      Device Specific.(Z9)........
      Hardware Location Code......U8406.70Y.06CAEBA-V2-C5-T1


Figure 4-20. Virtual Device Information - AIX CLI QV5721.0

Notes:
When looking at a virtual fcs device, the big difference we see is that far less information
is provided. What we care most about, though, is the WWPN (the Network Address).
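If all you need is the WWPN, for example to hand it to a SAN administrator for zoning, you
can filter the output. A minimal sketch:

# lscfg -vl fcs0 | grep "Network Address"
      Network Address.............C05076037D160000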


NPIV Addressing - Switch View


When NPIV is enabled, the down-stream switch must support
the functionality
Show.Port> status
Port  State     Type    Attached WWN             Beaconing  Reason
----  --------  ------  -----------------------  ---------  ------
0     Online    fPort   20:06:00:A0:B8:12:AE:2C  Disabled
. . .
16    No Light  gxPort  None                     Disabled
17    Online    fPort   MULTIPLE (NPIV)          Disabled
. . .

Show.Port> node 17
Port  BB Crdt  RxFldSz  COS  Port Name                Node Name
----  -------  -------  ---  -----------------------  -----------------------
17    20       2048     2-3  10:00:00:00:C9:92:33:34  20:00:00:00:C9:92:33:34
17    20       2048     2-3  C0:50:76:03:7D:16:00:00  C0:50:76:03:7D:16:00:00


Figure 4-21. NPIV Addressing - Switch View QV5721.0

Notes:
This visual shows an example of a fabric switch, and how the NPIV setting is enabled. In
this example, port 17 is NPIV enabled. When we look at port 17 specifically, we see not
only the physical adapter WWPNs, but also the virtual WWPNs.


Checkpoint
1. True/False: You need to install an expansion card for Fibre
Channel connectivity to a Blade Server.
2. What two I/O module options are available to provide
Fibre Channel access to a Blade Server?
___________________________________________
___________________________________________

3. Under IVM, what LPAR owns the physical adapter when
creating virtual Fibre Channel devices? _____________
4. True/False: You can see virtual disks (LUNs) that are
assigned to a LPAR via the IVM GUI.


Figure 4-22. Checkpoint QV5721.0

Notes:


Exercise 4: BladeCenter Demonstration


In this demonstration, you will see:
BladeCenter management techniques
Integrated Virtualization Manager operations


Figure 4-23. Exercise 4: BladeCenter Demonstration QV5721.0

Notes:


Unit Summary
IBM BladeCenter provides an excellent platform for many
technologies, including Fibre Channel
From a Power Blade, you can take advantage of today's Fibre
Channel options, like 8 Gb speed and NPIV functionality


Figure 4-24. Unit Summary QV5721.0

Notes:



Unit 5. SAN Performance Monitoring

What this unit is about


This unit focuses on SAN-related aspects of AIX performance, utilizing
AIX commands to gather and analyze system activity.
This unit is not "fibre only". Instead it gives the opportunity to perform
general AIX steps against FC devices side by side with vSCSI devices
(and in the case of VIO, direct-attached devices).

What you should be able to do


After completing this unit, you should be able to:
Identify the I/O layers where queuing is handled
View and change a FC disk and FC disk adapters tuning attributes
Monitor and tune the queue depth of disks and disk adapters
Identify the filemon reports that display I/O activity
Test I/O throughput using:
- The time and dd commands
- The ndisk program (part of the nstress package)

How you will check your progress


Accountability:
Review Questions
Machine exercises

References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
TD105745 IBM Technical Document: AIX Disk Queue Depth
Tuning for Performance


Unit Objectives
After completing this unit, you should be able to:
Identify the I/O layers where queuing is handled
View and change FC disk and FC disk adapter tuning
attributes
Monitor and tune the queue depth of disks and disk adapters
Identify the filemon reports that display I/O activity
Test I/O throughput using:
The time and dd commands
The ndisk program (part of the nstress package)


Figure 5-1. Unit Objectives QV5721.0

Notes:


Introduction
Performance tuning has changed over time
From simple locally attached disk performance issues
Toward evaluation of more complex interactions and configurations,
including Fibre Channel and storage arrays
Virtual storage on the AIX client can be manipulated using LVM just
like a normal physical disk
Caution: The virtual disk on the client may already be a logical
volume on the server
LVM features such as mirroring and striping may be implemented on
the client
Performance considerations using dedicated storage still apply to
using virtual storage, such as spreading out hot logical volumes
The client needs to know what types of backing storage it is using to
make informed decisions


Figure 5-2. Introduction QV5721.0

Notes:
Storage from a storage array or VIO server can be manipulated using the Logical Volume
Manager (LVM) just like a physical volume. The client can use these devices like any
other physically connected hdisk device for boot, swap, mirror, or any other supported AIX
feature.
Performance considerations from dedicated storage are still applicable when using virtual
storage, such as spreading hot logical volumes across multiple disks on multiple adapters
so that parallel access is possible.


Trade-offs and Performance Approach


Trade-offs must be considered, such as
Cost versus performance
Conflicting performance requirements
Load balancing (speed) versus redundancy
Ease or cost of tuning and reconfiguring

In all cases, use a methodical approach for tuning


1. Understand the factors which can affect performance and
collect baseline system statistics
2. Measure the current performance of the server
3. Identify a performance bottleneck
4. Change the component which is causing the bottleneck
5. Measure the new performance of the system to check for
improvement

Figure 5-3. Trade-offs and Performance Approach QV5721.0

Notes:
There are many trade-offs related to performance tuning that should be considered. The
key is to ensure there is a balance between them.
The trade-offs include:
Cost versus performance: In some situations, the only way to improve performance is
by using more or faster hardware.
Conflicting performance requirements: There may be conflicting performance
requirements between applications running in multiple LPARs.
Speed versus functionality: Resources added for redundancy or availability may
adversely affect performance. In addition, resources added to improve one
performance bottleneck (example: more Ethernet adapters) may adversely affect
another area (example: consumes more CPU).
Baseline values provide data for comparison later when performance tuning is needed.
Collections over time may show trends to determine when future tuning may be needed.


Performance Metrics and Baseline


Performance is measured through analysis tools
Metrics that are measured include
CPU utilization
Memory utilization and paging
Disk I/O
Network I/O
Each metric can be subdivided into finer details
Create a baseline measurement on a representative workload to
compare against in the future


Figure 5-4. Performance Metrics and Baseline QV5721.0

Notes:
CPU utilization can be split into %user, %system, %idle, and %IOwait. Other CPU metrics
can include the length of the run queues, process/thread dispatches, interrupts, and lock
contention statistics.
Memory metrics include virtual memory paging statistics, file paging statistics, and cache
and TLB miss rates.
Disk metrics include disk throughput (kilobytes read/written), disk transactions
(transactions per second), disk adapter statistics, disk queues (if the device driver and tools
support them), and elapsed time caused by various disk latencies. The type of disk access,
random versus sequential, can also have a big impact on response times.
Network metrics include network adapter throughput, protocol statistics, transmission
statistics, network memory utilization, and much more.
You should create a baseline measurement when your system is running well and under a
normal load. This will give you a guideline to compare against when your system seems to
have performance problems.


Determine the Type of the Problem


Determine the type of the problem
Is it a functional problem or purely a performance problem?
Is it a trend or a sudden issue?
Is the problem only at certain times?
What do you do when someone reports a performance problem?
Know the nature of the problem
Gather data and compare against the baseline
Use AIX tools
Use PerfPMR
Document statistics regularly to spot trends for capacity
planning
Document statistics during high workloads


Figure 5-5. Determine the Type of the Problem QV5721.0

Notes:
A functional problem is when the application, hardware or network is not behaving properly.
A performance problem is when the functions are being achieved but the performance is
slow. Sometimes functional problems lead to performance problems. In these cases, rather
than tune the system, it is more important to determine the root cause of the problem and
fix it.
It is quite common for support personnel to receive a problem report that says only that
someone has a performance problem on the system, along with some data to analyze.
That limited information is not enough to accurately determine the nature of a
performance problem.
It is important to collect a variety of data that show statistics regarding the various system
components. In order to make this easy, a set of tools supplied in a package called
PerfPMR is available on a public ftp site. The following URL can be used to download your
version using a web browser:
ftp://ftp.software.ibm.com/aix/tools/perftools/perfpmr


AIX I/O Stack

Application
Logical File System (JFS, JFS2, NFS, other; raw LVs and raw disks bypass the file system)
VMM
LVM
Multi-path I/O Driver (optional)
Disk Device Drivers
Adapter Device Drivers
Disk Subsystem (optional)
Disk


Figure 5-6. AIX I/O Stack QV5721.0

Notes:
Application memory area caches data to avoid I/O.
NFS caches file attributes and has a cached file system for NFS clients.
JFS and JFS2 cache use extra system RAM. JFS uses persistent pages for cache and
JFS2 uses client pages for cache.
Queues exist for both adapters and disks.
Adapter device drivers use Direct Memory Access (DMA) for I/O.
Disk subsystems have read and write caches.
Disks have memory to store commands and data.
I/O operations can be coalesced into fewer I/Os, or broken up into more I/Os as they go
through the I/O stack. I/Os adjacent in a file, logical volume, and disk can be coalesced.
I/Os greater than the maximum I/O size supported will be split up.


LVM Terminology
(Diagram: the application layer consists of JFS/JFS2 file systems or raw logical volumes;
the logical layer is the Logical Volume Manager, with volume groups, logical volumes, the
Logical Volume Device Driver (LVDD), and physical volumes; the physical layer consists of
device drivers, physical disks, and disk arrays.)

Figure 5-7. LVM Terminology QV5721.0

Notes:
The logical volume layer is between the application and physical layers. The application
layers are the file system or raw logical volumes. The physical layer are the physical disks.
LVM maps the data between application layer and physical storage. Even physical volumes
are part of the logical layer, as the physical layer only contains the actual disks, device
drivers, and disk arrays that may already be configured.
The physical disk drives, storage arrays, or virtual disks are known as physical volumes in
LVM. All of the physical volumes in a volume group are divided into physical partitions. All
the physical partitions within a volume group are the same size, although different volume
groups can have different physical partition sizes. A volume group is made up of one or
more physical volumes. Within each volume group, one or more logical volumes are
defined. Logical volumes are groups of information located on physical volumes. Each
logical volume consists of one or more logical partitions. Logical partitions are the same
size as the physical partitions within a volume group. Each logical partition is mapped to
one, two or three physical partitions.


Queuing I/Os
I/Os are queued to improve throughput

I/Os are queued at several layers in the I/O stack


File system
LVM
Multipath I/O
Disk driver
FC adapter
Disk subsystem
Disk


Figure 5-8. Queueing I/Os QV5721.0

Notes:
I/Os are queued to improve throughput. If a virtual disk (or LUN) backed by multiple
physical disks allowed only one I/O to be submitted at a time, the I/O service time would
be good but the throughput would be poor.
By submitting multiple I/Os to a physical disk, the disk can minimize actuator movement
and get better throughput than is possible by submitting one I/O at a time.
If the number of I/O requests exceeds the allowed limit, they will reside in a wait queue until
the resource becomes available. The I/Os being serviced will be in a process queue.
File system layer: buffers limit the maximum number of in flight I/Os for each file system
LVM layer: disk buffers limit the number of in flight I/Os
Multipath I/O layer: I/Os are queued if the device driver allows it
Disk device driver: maximum number of in flight I/Os defined by queue_depth attribute
FC adapters: maximum number of in flight I/Os are specified by num_cmd_elems
attribute
Disk subsystem: queue I/Os themselves
Disk: can accept multiple I/O requests


Disk Drive Queue Depth


The disk drive queue depth is the maximum number of requests the
disk device can hold in its queue
Display the valid values for a disk's queue depth
# lsattr -Rl diskname -a queue_depth

View the current queue_depth value


On an AIX client
# lsattr -El hdiskX -a queue_depth
On the VIO server
$ lsdev -dev hdiskX -attr queue_depth

Change the queue_depth value with the chdev command


To change, disks must not already be in use
If in use, use chdev -P to change at next reboot
On an AIX client
# chdev -l hdiskX -a queue_depth=#
On the VIO server
$ chdev -dev hdiskX -attr queue_depth=#


Figure 5-9. Disk Drive Queue Depth QV5721.0

Notes:
The operating system has the ability to enforce limits on the number of I/O requests that
can be outstanding from the SCSI adapter to a given SCSI bus or disk drive. These limits
are intended to exploit the hardware's ability to handle multiple requests while ensuring that
the seek-optimization algorithms in the device drivers are able to operate effectively.
For non-IBM devices, it is sometimes appropriate to modify the default queue-limit values
that have been chosen to handle the worst possible case.
The default queue_depth and the valid values differ by the manufacturer and type of storage.
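For example, querying the allowable range on one of the disks might look like the following
sketch (the range shown is illustrative and varies by device):

# lsattr -Rl hdisk1 -a queue_depth
1...256 (+1)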


Changing the Disk Queue Depth - Example


# lsattr -El hdisk1 -a queue_depth
queue_depth 8 Queue DEPTH True
# chdev -l hdisk1 -a queue_depth=16
Method error (/usr/lib/methods/chgdisk):
        0514-062 Cannot perform the requested function because the
                 specified device is busy.

# mount
node   mounted         mounted over    vfs    date          options
------ --------------- --------------- ------ ------------  ---------------
...
       /dev/fslv00     /myfs           jfs2   May 18 17:49  rw,cio,log=/dev/loglv00
# umount /myfs
# chdev -l hdisk6 -a queue_depth=16
Method error (/usr/lib/methods/chgdisk):
        0514-062 Cannot perform the requested function because the
                 specified device is busy.

# varyoffvg testvg
# chdev -l hdisk1 -a queue_depth=16
hdisk1 changed
# varyonvg testvg
# mount -o cio /myfs


Figure 5-10. Changing the Disk Queue Depth - Example QV5721.0

Notes:
When you want to change the queue_depth of a device, the disk must not be in use. You
can use the chdev -P flag and have the new value take effect on the next boot, as
sketched below. If you want to change it without rebooting, you must close any open
logical volumes and vary off the volume group.
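A minimal sketch of the deferred approach (the -P flag writes the change to the ODM
only; the device keeps its old value until the system is rebooted):

# chdev -l hdisk1 -a queue_depth=16 -P
hdisk1 changed
# shutdown -Fr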


Monitoring Disk Queues with iostat -D


The iostat -D flag displays an extended disk utilization report
Example:

# iostat -D hdisk12 3

System configuration: lcpu=4 drives=13 paths=27 vdisks=7

hdisk12    xfer:  %tm_act    bps      tps     bread    bwrtn
                   100.0     2.0M    497.3    1.0M     1.0M
           read:   rps     avgserv  minserv  maxserv  timeouts  fails
                   247.7     4.7      0.4     38.6        0       0
           write:  wps     avgserv  minserv  maxserv  timeouts  fails
                   249.7     7.4      1.0     48.8        0       0
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz  sqfull
                    14.1      0.0     42.2      6.0      3.0    497.3
497.3


Figure 5-11. Monitoring Disk Queues with iostat -D. QV5721.0

Notes:
The queue statistics are:
avgtime: Average time (ms) spent by a transfer request in the wait queue
mintime: Minimum time (ms) spent by a transfer request in the wait queue
maxtime: Maximum time (ms) spent by a transfer request in the wait queue
avgwqsz: Average wait queue size
avgsqsz: Average service queue size
sqfull: Number of times the service queue becomes full (that is, the disk is not
accepting any more service requests) per second
If the queue statistics are all zero, the number of I/O operations is not being limited by
queue_depth, so there is no need to adjust queue_depth. If the queue statistics are not
zero, then increase queue_depth unless the average read and write service times (read
avgserv and write avgserv) are higher in intervals where the avgsqsz is higher. In that
case, disk subsystem performance is likely degrading because too many I/O operations are
being driven to the LUN simultaneously, and reducing queue_depth will probably improve
performance.
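Because iostat statistics with no interval reflect activity since boot, it is usually more
useful to sample over intervals while the workload runs. A quick sketch:

# iostat -D hdisk12 30 4      <- four 30-second samples for hdisk12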


Virtual SCSI Queue Depth Considerations


A disk queue depth is the number of I/O requests that can be run in parallel on a
device
The virtual SCSI queue depth is how many requests the disk controller will queue to
the virtual SCSI client driver at any one time
Recommendations for performance
Physical volumes used as backing storage: make virtual SCSI queue depth
equal to the physical volume queue depth
Logical volumes used as backing storage: the physical volume's queue depth
should be greater than or equal to the sum of queue depths for all the virtual
disks accessing that physical volume
Adjusting the queue depth may improve disk performance
Performance impact will depend on workload
Example: Increase virtual disk queue depth to match physical volume
Adjusting the queue depth may reduce wasted resources
Example: If virtual disk queue depth is too high and it is reduced to match
physical volume


Figure 5-12. Virtual SCSI Queue Depth Considerations QV5721.0

Notes:
Increasing the queue depth on a client virtual device reduces the number of supported
open devices on that virtual adapter, and the number of I/O requests that devices can have
active on the VIO server.
The VSCSI queue depth generally should not be any larger than the queue depth on the
physical LUN. A larger value wastes resources without additional performance.
If the virtual target device is a logical volume, the queue depth on all disks included in that
logical volume must be considered. If the logical volume is being mirrored, the virtual SCSI
client queue depth should not be larger than the smallest queue depth of any physical
device being used in a mirror. When mirroring, throughput is effectively throttled to the
device with the smallest queue depth.
If a volume group on the client spans virtual disks, keep the same queue depth on all the
virtual disks in that volume group, especially when using mirroring.
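One way to check that the two sides line up is to query the attribute on both the client and
the VIO server. A sketch, assuming the client's hdisk0 is backed by hdisk4 on the VIO
server (device names are illustrative):

On the AIX client:
# lsattr -El hdisk0 -a queue_depth
queue_depth 3 Queue DEPTH True

On the VIO server (padmin shell):
$ lsdev -dev hdisk4 -attr queue_depth
value

3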


FC Disk Adapter Tuning Attributes


FC adapter may have a tunable command queue size
Queue size is used to limit how many I/Os can be queued at the adapter
Parameter name is usually num_cmd_elems

Adapter may have a tunable Direct Memory Access (DMA) memory pool
Parameter name is usually lg_term_dma or lg_dma_mem

Maximum transfer size


Parameter name is max_xfer_size

# lsattr -El fcs0
bus_intr_lvl  273        Bus interrupt level                                 False
bus_io_addr   0xff800    Bus I/O address                                     False
bus_mem_addr  0xfff7e000 Bus memory address                                  False
init_link     al         INIT Link flags                                     True
intr_priority 3          Interrupt priority                                  False
lg_term_dma   0x800000   Long term DMA                                       True
max_xfer_size 0x100000   Maximum Transfer Size                               True
num_cmd_elems 200        Maximum number of COMMANDS to queue to the adapter  True
pref_alpa     0x1        Preferred AL_PA                                     True
sw_fc_class   2          FC Class for Fabric                                 True
tme           no         Target Mode Enabled                                 True


Figure 5-13. FC Disk Adapter Tuning Attributes QV5721.0

Notes:
The command queue size is the number of I/O commands that can be queued at the
adapter before the upper layer stops sending them. The attribute name is usually
num_cmd_elems. Generally, the more disk devices attached to the adapter, the larger the
queue size should be.
The DMA memory pool is where the adapter will allocate space from kernel memory when
the adapter is configured. The parameter name is usually lg_dma_mem or lg_term_dma.
The adapter may use a DMA buffer from this pool to send the I/O to the disk. If the pool is
exhausted, the I/O is delayed until a previously issued I/O has completed.
The maximum transfer size is specified by the max_xfer_size attribute. It also controls a
DMA memory area that is used to hold data for transfer. Changing it to other allowable
values can increase the adapter's bandwidth.
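Both attributes can be changed with chdev. Because the adapter is usually busy, a
deferred change with -P followed by a reboot is typical. A sketch with illustrative values;
the allowable values can be checked first with lsattr -Rl:

# chdev -l fcs0 -a num_cmd_elems=400 -a max_xfer_size=0x200000 -P
fcs0 changed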


Viewing FC Disk Adapter Queues


Guidance on the number of command elements depends on the storage
system and configuration
For IBM storage refer to the associated Redbook
For other vendors refer to their documentation

# fcstat fcs0
...
FC SCSI Adapter Driver Information
  No DMA Resource Count: 4490          <- Increase max_xfer_size
  No Adapter Elements Count: 105688    <- Increase num_cmd_elems
  No Command Resource Count: 133       <- Increase num_cmd_elems
...
...


Figure 5-14. Viewing FC Disk Adapter Queues QV5721.0

Notes:
The FC SCSI adapter statistics include:
No DMA Resource Count
Displays the number of times DMA resources were not available.
No Adapter Elements Count
Displays the number of times there were no adapter elements available.
No Command Resource Count
Displays the number of times there were no command resources available.
With SDDPCM, use the pcmpath command. An I/O Maximum field of 200 with
num_cmd_elems=200 means the queue filled.
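Since full fcstat output is long, you can filter for just these counters. A minimal sketch:

# fcstat fcs0 | grep -E "No DMA|No Adapter|No Command"
No DMA Resource Count: 4490
No Adapter Elements Count: 105688
No Command Resource Count: 133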


The filemon Utility


Reports the I/O activity of
Logical files (lf)
Logical volumes (lv)
Physical volumes (pv)
Virtual memory segments (vm)

Basic syntax
filemon -O report-types -o output-file
Runs in the background; stops with the trcstop command
Uses the trace facility

Reports have two types of information


Most Active statistics
Detailed statistics

Figure 5-15. The filemon Utility QV5721.0

Notes:
If an application is believed to be disk-bound, the filemon utility is useful to find out where
and why.
The filemon command uses the trace facility to obtain a detailed picture of I/O activity
during a time interval on the various layers of file system utilization, including the logical file
system, virtual memory segments, LVM, and physical disk layers. Data can be collected on
all the layers, or some of the layers. The default is to collect data on the virtual memory
segments, LVM, and physical disk layers.
By default, filemon runs in the background while other applications are running and being
monitored. When the trcstop command is issued, filemon stops and generates its report.
The report begins with a summary of the I/O activity for each of the levels (the Most Active
sections) and ends with detailed I/O activity for each level (Detailed sections). Each
section is ordered from most active to least active.
When running PerfPMR, the filemon data is in the filemon.sum file.
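A typical capture session brackets the workload between filemon and trcstop. A sketch;
here the monitored period is 60 seconds:

# filemon -O lv,pv -o fmon.out     <- start tracing in the background
# sleep 60                         <- let the workload run
# trcstop                          <- stop tracing; the report is written to fmon.out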


filemon - Most Active Files Report


# filemon -O lv,lf,pv -o fmon.out

# dd if=/lv1fs/bigfile1 bs=1M of=/dev/null

# trcstop

# cat fmon.out
Wed Feb 11 23:08:09 2009
System: AIX 6.1 Node: leguin221 Machine: 00066BA2D900

Cpu utilization: 88.9%
Cpu allocation: 0.8%

Most Active Files
-----------------------------------------------------------------------
 #MBs  #opns  #rds  #wrs  file      volume:inode
-----------------------------------------------------------------------
101.0      1   101     0  bigfile1  /dev/lv1:21
100.0      1     0   100  null
...


Figure 5-16. filemon - Most Active Files Report QV5721.0

Notes:
The visual on this page shows the logical file output (lf) from the filemon report. The
logical file I/O includes reads, writes, opens, and seeks, which may or may not result in
actual physical I/O depending on whether or not the files are already buffered in memory.
Statistics are kept by file.
Output is ordered by #MBs read and/or written to a file.
By default, the logical file reports are limited to the top 20. If the verbose flag (-v) is added,
activity for all files would be reported. The -u flag can be used to generate reports on files
opened prior to the start of the trace daemon.
Look for the most active files to see usage patterns. If they are dynamic files, they may
need to be backed up and restored. The Most Active Files section shows the bigfile1
file (read by the dd command) as the most active file, with one open and 101 reads.
The number of writes (#wrs) is 1 less than the number of reads (#rds), because end-of-file
has been reached.


filemon - Most Active LV and PV Reports


Most Active Logical Volumes
------------------------------------------------------------------------
util   #rblk   #wblk  KB/s    volume       description
------------------------------------------------------------------------
0.03   205016      0  3306.1  /dev/lv1     /lv1fs
0.00        0     24     0.4  /dev/hd4     /
0.00        0     16     0.3  /dev/hd8     jfs2log

Most Active Physical Volumes
------------------------------------------------------------------------
util   #rblk   #wblk  KB/s    volume       description
------------------------------------------------------------------------
0.03   205016      0  3306.1  /dev/hdisk2  N/A
0.00        0     40     0.6  /dev/hdisk0  N/A
N/A


Figure 5-17. filemon - Most Active LV and PV Reports QV5721.0

Notes:
The filemon command monitors I/O operations on logical volumes. I/O statistics are kept
on a per-logical-volume basis. The logical volume with the highest utilization is at the top,
and the others are listed in descending order.
The filemon command monitors I/O operations on physical volumes. At this level, physical
resource utilizations are obtained. I/O statistics are kept on a per-physical-volume basis.
The disks are presented in descending order of utilization. The disk with the highest
utilization is shown first.


filemon - Detailed File Stats Report


------------------------------------------------------------------------
Detailed File Stats
------------------------------------------------------------------------

FILE: /lv1fs/bigfile1  volume: /dev/lv1 (/lv1fs)  inode: 21
opens:                  1
total bytes xfrd:       105906176
reads:                  101 (0 errs)
  read sizes (bytes):   avg 1048576.0 min 1048576 max 1048576 sdev 0.0
  read times (msec):    avg 10.154    min 0.002   max 17.055  sdev 2.217

FILE: /dev/null
opens:                  1
total bytes xfrd:       104857600
writes:                 100 (0 errs)
  write sizes (bytes):  avg 1048576.0 min 1048576 max 1048576 sdev 0.0
  write times (msec):   avg 0.003     min 0.003   max 0.005   sdev 0.000


Figure 5-18. filemon - Detailed File Stats Report QV5721.0

Notes:
The Detailed File Stats report is based on the activity on the interface between the
application and the file system. As such, the number of calls and the size of the reads or
writes reflect the application calls. The read sizes and write sizes will give you an idea of
how efficiently your application is reading and/or writing information.
In this example, the report shows the average read size is approximately 1 MB, which
matches the block size specified on the dd command on the previous visual.
The size used by an application has performance implications. For sequential reads of a
large file, a larger read size will result in fewer read requests and thus lower CPU
overhead to read the entire file. When specifying an application's read or write block size,
using values that are a multiple of the 4 KB page size is recommended.


filemon - Detailed PV Stats Report


------------------------------------------------------------------------
Detailed Physical Volume Stats (512 byte blocks)
------------------------------------------------------------------------

VOLUME: /dev/hdisk2  description: N/A
reads:                  3242 (0 errs)
  read sizes (blks):    avg 63.2     min 8       max 64        sdev 6.0
  read times (msec):    avg 0.548    min 0.133   max 5.444     sdev 0.507
  read sequences:       65
  read seq. lengths:    avg 3154.1   min 8       max 8192      sdev 2972.6
seeks:                  65 (2.0%)
  seek dist (blks):     init 28841008,
                        avg 166391.1 min 512     max 581832    sdev 119184.3
  seek dist (%tot blks):init 20.11593,
                        avg 0.11605  min 0.00036 max 0.40581   sdev 0.08313
time to next req(msec): avg 7.208    min 0.058   max 22246.806 sdev 391.436
throughput:             3306.1 KB/sec
utilization:            0.03
0.03


Figure 5-19. filemon - Detailed PV Stats Report QV5721.0

Notes:
As contrasted with the Detailed File States report, the Detailed Physical Volume Stats
report shows the activity at disk device driver. This report shows the actual number and
size of the reads and writes to the disk device driver. The file system uses VMM caching.
The default unit of work in VMM is the 4 KB page. But, rather than writing or reading one
page at a time, the file system tends to group work together to read or write multiple pages
at a time. This grouping of work can be seen in the physical volume read and write sizes
provided in this report.
Note that the sizes are expressed in blocks, where a block is the traditional Unix block size
of 512 bytes. To translate the sizes to KBs, divide the number by 2.
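For example, the average read size above of 63.2 blocks works out to 63.2 / 2, or roughly
31.6 KB per read, which is consistent with the file system coalescing many 4 KB pages
into each physical I/O.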


Monitoring Adapter I/O Throughput


# iostat -a

System configuration: lcpu=4 drives=13 ent=0.30 paths=27 vdisks=7 tapes=0

tty:  tin   tout  avg-cpu: % user  % sys  % idle  % iowait  physc  % entc
      0.0    4.5            2.8     2.7    90.9     3.6      0.0    5.9

Adapter:      Kbps    tps   Kb_read    Kb_wrtn
fcs0          24.5    2.9   30177888   12054768

Disks:     % tm_act   Kbps   tps   Kb_read   Kb_wrtn
hdisk5        0.0      0.0   0.0         0         0
hdisk1        0.0      5.7   1.4   9562224    195492
...

Adapter:      Kbps    tps   Kb_read    Kb_wrtn
fcs1          10.3    3.3   15298408   2454535

Disks:     % tm_act   Kbps   tps   Kb_read   Kb_wrtn
hdisk5        0.0      1.4   2.8         0   2408071
hdisk1        0.0      0.0   0.0         0         0
...

Vadapter:     Kbps    tps   bkread  bkwrtn  partition-id
vscsi0       318.4   53.7    35.8    17.9        1

Disks:     % tm_act   Kbps    tps    Kb_read     Kb_wrtn
hdisk0        3.9    274.4   51.9    256939768   215286684
hdisk12       0.0      0.5    0.1       834861        4008
...


Figure 5-20. Monitoring Adapter I/O Throughput QV5721.0

Notes:
The -a option to iostat will combine the disks statistics to the adapter to which they are
connected. The adapter throughput will simply be the sum of the throughput of each of its
connected devices. With the -a option, the adapter will be listed first, followed by its
devices and then followed by the next adapter, followed by its devices, and so on. The
adapter throughput values can be used to determine if any particular adapter is
approaching its maximum bandwidth or to see if the I/O is balanced across adapters.


Monitoring System Throughput


# iostat -s

System configuration: lcpu=4 drives=13 ent=0.30 paths=27 vdisks=7

tty:  tin   tout  avg-cpu: % user  % sys  % idle  % iowait  physc  % entc
      0.0    4.5            2.8     2.7    90.9     3.6      0.0    5.9

System: chandler231.beaverton.ibm.com
              Kbps    tps    Kb_read     Kb_wrtn
Physical     309.9   58.3    303382746   229980398

Disks:     % tm_act   Kbps    tps    Kb_read     Kb_wrtn
hdisk0        3.9    274.4   51.9    257015587   215314441
hdisk5        0.0      1.4    2.8            0     2408071
hdisk1        0.0      5.7    1.4      9562224      195512
hdisk7        0.0      8.3    0.9     10940729     3377556
hdisk2        0.0      0.0    0.0            0           0
hdisk11       0.0      0.0    0.0            0           0
hdisk10       0.0      0.0    0.0            0           0
hdisk8        0.0      0.1    0.0        56002      152538
hdisk9        0.0      0.0    0.0            0           0
hdisk3        0.0      0.0    0.0            0           0
hdisk4        0.0      8.9    0.5     15298408       46484
hdisk6        0.0     10.5    0.7      9674935     8481768
hdisk12       0.0      0.5    0.1       834861        4028
Figure 5-21. Monitoring System Throughput QV5721.0

Notes:
The -s option of iostat shows the system throughput, which is the sum of the throughput
of all adapters.
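As with -a, interval sampling is usually more informative than the cumulative totals shown
above; watch the Physical row for the system-wide rate. For example (the interval and
count are illustrative):

# iostat -s 10 6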
Testing Sequential Throughput with time and dd

The time and dd commands can test sequential throughput to a device or file:

# time dd if=/dev/rhdisk6 of=/dev/null bs=1m count=1024
1024+0 records in.
1024+0 records out.

real    0m6.92s
user    0m0.01s
sys     0m0.36s

Read throughput: 1024 MB / 6.92 sec = 147.98 MB/sec

# time dd if=/dev/zero of=/dev/rhdisk6 bs=1m count=1024
1024+0 records in.
1024+0 records out.

real    0m16.52s
user    0m0.01s
sys     0m0.61s

Write throughput: 1024 MB / 16.52 sec = 61.99 MB/sec
Figure 5-22. Testing Sequential Throughput with time and dd. QV5721.0

Notes:
The time command prints the elapsed (real) time during the execution of a command, the
time spent in the system (sys), and the time spent executing the command itself (user), all
in seconds.
The dd command reads from the file named by the if parameter, performs any specified
conversions, and copies the converted data to the file named by the of parameter. The
input and output block sizes can be specified to take advantage of raw physical I/O.
You can calculate the throughput by dividing the amount of data transferred by the real time.
USE CAUTION when writing to an entire disk: it destroys anything on the disk and may
make it unusable.
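To see how the block size affects sequential read throughput, the same 1 GB can be timed
at several block sizes. A minimal ksh sketch, assuming /dev/rhdisk6 is a test disk that is
safe to read (the device name and sizes are illustrative; the counts are matched so that
each pass reads exactly 1 GB):

set -A sizes 64k 256k 1m          # block sizes to try
set -A counts 16384 4096 1024     # counts chosen so each pass reads 1 GB
i=0
while [ $i -lt 3 ]; do
    echo "Block size ${sizes[$i]}:"
    time dd if=/dev/rhdisk6 of=/dev/null bs=${sizes[$i]} count=${counts[$i]}
    i=$((i + 1))
done

Larger blocks generally raise sequential throughput until the adapter's maximum transfer
size is reached.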
Testing Throughput with ndisk


Do I/O to a file or raw logical volume or disk
Do I/O to multiple devices or files
Specify the number of threads doing I/O
You need a lot of threads doing I/O to stress a disk subsystem
Synchronous or asynchronous writes to file system files
Specify the I/O size or a set of I/O sizes
Specify the read/write ratio
Figure 5-23. Testing Throughput with ndisk. QV5721.0

Notes:
The ndisk tool (part of the nstress package, available on the Internet at
http://www.ibm.com/developerworks/wikis/display/WikiPtype/nstress) can be used to test
throughput and to stress the disk subsystem to see what it can handle.
Example Using ndisk

# ndisk64 -f /dev/rhdisk6 -R -r100 -b 1m -t10 -s1024M -M5
Command: ndisk64 -f /dev/rhdisk6 -R -r100 -b 1m -t10 -s1024M -M5
        Synchronous Disk test (regular read/write)
        No. of processes = 5
        I/O type         = Random
        Block size       = 1048576
        Read-Write       = Read Only
        Sync type: none  = just close the file
        Number of files  = 1
        File size        = 1073741824 bytes = 1048576 KB = 1024 MB
        Run time         = 10 seconds
        Snooze %         = 0 percent
----> Running test with block Size=1048576 (1024KB) .....
Proc - <-----Disk IO----> | <-----Throughput------>  RunTime
 Num -   TOTAL     IO/sec |  MB/sec          KB/sec  Seconds
   1 -     269       27.3 |   27.27        27922.93     9.86
   2 -     262       26.6 |   26.55        27191.38     9.87
   3 -     270       27.4 |   27.37        28029.12     9.86
   4 -     264       26.7 |   26.67        27308.75     9.90
   5 -     269       27.2 |   27.23        27884.30     9.88
TOTALS      1334     135.1 |  135.09 Rand procs= 5 read=100% bs=1024KB
Figure 5-24. Example Using ndisk. QV5721.0

Notes:
This example tests the read throughput for random I/O. The flags used are:
-f <file> use <file> for disk I/O (can be a file or raw device)
-R Random disk I/O test (file or raw device)
-r <read%> Read percent min=0,max=100
-b <size> Block size, use with K, M or G (default 4KB)
-t <secs> Timed duration of the test in seconds (default 5)
-s <size> File size, use with K, M or G (mandatory for raw device)
-M <num> Multiple processes used to generate I/O
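The same flags can drive a write test by setting the read percentage to zero with -r0. This
is a sketch only, reusing the test disk from the example above, and it is DESTRUCTIVE:
as with dd, writing to a raw disk destroys its contents.

# ndisk64 -f /dev/rhdisk6 -R -r0 -b 1m -t10 -s1024M -M5

Here -r0 means 0% reads, so every I/O issued by the five processes is a 1 MB random write.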
Tuning FC Disk and Adapter Queues

Use caution when changing queue parameters
Use the maximum number of in-flight I/Os to each device when tuning num_cmd_elems
When increasing queue_depth, I/O service times and throughput typically increase
Watch that I/O service times do not reach the disk timeout value
A general rule is to increase queue_depth until I/O service times start exceeding 15 ms
for small random reads or writes, or until the queues stop filling
Tests for queue_depth:
Use your application
Use a test tool like ndisk

Caches affect the I/O service time and testing results
Figure 5-25. Tuning FC Disk and Adapter Queues QV5721.0

Notes:
Be careful when increasing the queue_depth values: you can overload the disk subsystem
or cause device configuration problems at boot. An increase in queue depth allows more
I/Os to be sent to the disk subsystem. This will probably cause I/O service times to
increase, but throughput will also increase. If I/O service times start approaching the disk
timeout value, you are submitting more I/Os than the disk subsystem can handle, and you
will see I/O timeouts and errors in the error log indicating problems completing I/Os.
When testing for the appropriate queue depth values, it's best to have the actual
application(s) running. When that is not possible, use a tool like ndisk in the nstress
package.
Read and write caches affect I/O service times and testing results. If the read cache
already holds the data from an earlier test run, the I/O service times will be faster, which
affects the repeatability of the results. Write cache helps performance until, and if, it fills
up, at which point performance drops; long-running tests with high write rates can
therefore show a drop in performance over time.
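Before raising queue_depth, it is worth confirming that the queue is actually filling. A
minimal sketch, assuming hdisk4 is the disk under test; the new depth of 32 is illustrative,
and the -P flag defers the change to the next reboot because the attribute cannot be
changed while the disk is in use:

# iostat -D hdisk4 5 3
(a non-zero sqfull rate and a growing service-queue size in this report suggest that
queue_depth is too small)
# chdev -l hdisk4 -a queue_depth=32 -P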
Checkpoint
1. True/False: I/Os are queued at several layers in the I/O stack.
2. True/False: The queue_depth can be changed while the volume group is varied on.
3. True/False: If the queue statistics in iostat -D are all zero, the number of I/O
   operations is not being limited by queue_depth, so there is no need to adjust
   queue_depth.
4. True/False: When a physical disk is used as backing storage on a VIOS, make the
   virtual SCSI queue_depth equal to the physical volume queue_depth.
5. True/False: The parameter lg_term_dma specifies the queue size for the adapter.
Figure 5-26. Checkpoint QV5721.0

Notes:
Exercise 5: SAN Performance


In this exercise, you will:
Determine the queue_depth of various disks
Change the queue_depth on disks and use ndisk to compare the
results
Compare block size and access types
Figure 5-27. Exercise 5: SAN Performance QV5721.0

Notes:
Unit Summary
I/Os are queued to improve throughput.
I/Os are queued at several layers in the I/O stack.
The disk drive queue depth is the maximum number of requests the disk can hold in its
queue.
The iostat -D command shows an extended disk utilization report, including disk queue
statistics.
Adjusting queue depth may improve disk performance, but the effect depends on the
workload.
The fcstat command displays statistics gathered by a specified Fibre Channel device,
and can show when num_cmd_elems and/or max_xfer_size need to be increased.
Be careful increasing the queue_depth values: you can overload the disk subsystem or
cause device configuration problems at boot.
Figure 5-28. Unit Summary QV5721.0

Notes:
Unit 6. Problem Determination

What this unit is about


This unit describes the FC problem determination process. It also
discusses the AIX parameters involved in FC path selection, health
checking, disk reservation, fast I/O failure, and dynamic tracking.
This unit is not "fibre only". Instead, it gives the opportunity to perform
general AIX steps against FC devices side by side with vSCSI devices
(and, in the case of VIO, direct-attached devices).

What you should be able to do


After completing this unit, you should be able to:
Describe the FC problem determination process
Identify the FC path selection algorithms
Use the iostat -m command to display path priorities
List the attributes that are used for FC health checking
Define the FC reservation policies that can be used
Explain the purpose of fast I/O failure and dynamic tracking

How you will check your progress


Accountability:
Review Questions
Machine exercises

References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm
(Welcome to the AIX 7.1 Information Center)
SG24-6050, Practical Guide for SAN with pSeries (Redbook)
SC23-6730, AIX Operating System and Device Management
Unit Objectives
After completing this unit, you should be able to:
Describe the FC problem determination process
Identify the FC path selection algorithms
Use the iostat -m command to display path priorities
List the attributes that are used for FC health checking
Define the FC reservation policies that can be used
Explain the purpose of fast I/O failure and dynamic tracking
Figure 6-1. Unit Objectives QV5721.0

Notes:
Fibre Channel Problem Determination Process


Troubleshooting starts with an accurate description of the problem
Some questions to ask and answer are:
What exactly is the problem?
Is this FC SAN environment a new installation?
What has been changed recently?
What is the scope of the problem?
Are any specific protocols, functions, or applications related to the
problem?
Is the problem constant or intermittent?
Can the problem be duplicated?
Do any actions reduce the problem symptoms or stop it entirely?
Is the problem more prevalent at a given time of day or load
condition?
Figure 6-2. Fibre Channel Problem Determination Process QV5721.0

Notes:
Define what aspect or portion of the FC SAN is not working correctly. What action, function,
or application does not work as intended?
If it is a new FC SAN installation, then configuration issues are typically the cause of the
problem. Otherwise, some form of connectivity issue is likely.
Problems in an established FC SAN are typically the result of a change to one or more
devices, or the addition or removal of a device.
Determine the scope of the problem. Is the problem observed on other servers? Can the
server connect with other storage devices? Are there any common devices within the
scope of the problem?
Constantly occurring problems are much easier to isolate. Intermittent issues typically
involve more sophisticated troubleshooting techniques.
Actions that improve a faulty situation are additional clues about the source of the problem.
A complete description of these actions can be a valuable resource of information for the
problem determination process.
Before a Problem Occurs


Ensure you have:
FC SAN network diagrams
Current code levels with release notes
Current configurations of FC SAN components
User manuals for the devices used in the FC SAN
Recent baseline performance data
Consistent clock settings on FC SAN components
Figure 6-3. Before a Problem Occurs QV5721.0

Notes:
Effective problem determination starts with a good understanding of the SAN and the
components involved.
SAN diagrams should document both the logical and the physical connections across the
entire SAN.
Many SAN devices, particularly SAN fabric switches, have the means to download, and
upload, configurations to a server over a LAN connection.
Many problems are introduced and/or resolved with different levels of code and PTFs on
specific devices.
Performance-related issues need to have a reference to establish the degree of impact of
the problem.
With data being collected from a multitude of sources, it is important to be able to correlate
the times of the different pieces of information.
User manuals (installation, configuration, and service guides) for each type of device within
the SAN provide points of reference for commands.
When a Problem Occurs


Problem determination starts before a problem
(warning/error messages preceding the problem)
Tools to gather data are varied depending on the problem,
but generally will include the following:
User input
Visual indicators (LEDs, power indicators, etc)
Console commands and management interfaces from affected
devices
Error logs
Internal traces from affected devices
Protocol analyzers and external trace tools
FC SAN management applications
Figure 6-4. When a Problem Occurs QV5721.0

Notes:
Usually, problems in the SAN environment are observed by end users.
A number of SAN products provide indications that help quickly pinpoint a hardware
problem.
Typically, console commands, along with a product's GUI management tool, are the most
useful tools for troubleshooting.
Many SAN devices maintain an internal error log that can be viewed as a powerful
resource during problem determination.
Some SAN devices are capable of generating internal traces of events and certain types of
Fibre Channel traffic.
Fibre Channel protocol analyzers and trace tools are very expensive. Fortunately, many
problems can be resolved without these tools. For performance issues, intermittent
problems, and connectivity faults, these are the preferred tools.
Many FC SAN management applications communicate over a LAN connection and, thus,
have no impact on Fibre Channel traffic.
Problem Analysis
Define the problem to be resolved
Specify the issue
Determine potential causes (distinctions and/or changes)
Test each cause against the specifications
Find the most probable cause
Verify, observe, experiment, fix and monitor
Figure 6-5. Problem Analysis QV5721.0

Notes:
Disk and Path Attributes

# lsattr -El hdisk4
...
algorithm        fail_over          Algorithm              True
...
hcheck_cmd       test_unit_rdy      Health Check Command   True
hcheck_interval  60                 Health Check Interval  True
hcheck_mode      nonactive          Health Check Mode      True
...
reserve_policy   single_path        Reserve Policy         True

# lspath -l hdisk4 -F'status path_id parent connection'
Enabled 0 fscsi0 500507630e01fc30,4011405200000000
Enabled 1 fscsi0 500507630e03fc30,4011405200000000
Enabled 2 fscsi1 500507630e81fc30,4011405200000000
Enabled 3 fscsi1 500507630e83fc30,4011405200000000

# lspath -AHE -l hdisk4 -p fscsi0 -i 0
attribute  value               description   user_settable

scsi_id    0x40a00             SCSI ID       False
node_name  0x500507630efffc30  FC Node Name  False
priority   1                   Priority      True
Figure 6-6. Disk and Path Attributes QV5721.0

Notes:
Path Selection Algorithms

The disk attribute algorithm determines which path is chosen
Different policies are available depending on the multipath driver:
fail_over - all I/Os are sent over the most preferred path until a failure is detected;
then all I/Os are sent over a less preferred path
round_robin - the path is chosen randomly from the paths that weren't used for the last
operation; with only two paths, they simply alternate
load_balance - I/Os are spread over all the paths; if a failure is detected, the failed
path is not used any more

Changing the algorithm:
Use chdev -l <disk> -a algorithm=<algorithm-to-use>
The disk cannot be in use
Some algorithms are not compatible with some reservation policies
Figure 6-7. Path Selection Algorithms QV5721.0

Notes:
The path selection policies are:
fail_over - all I/O operations for the device are sent to the same (preferred) path until
the path fails because of I/O errors; then an alternate path is chosen for subsequent
I/O operations.
round_robin - the path to use for each I/O operation is chosen at random from those
paths that were not used for the last I/O operation.
load_balance - the path to use for an I/O operation is chosen by estimating the load on
the adapter to which each path is attached. The load is a function of the number of I/O
operations currently in process. If multiple paths have the same load, a path is chosen
at random from those paths. Load-balancing mode also incorporates failover protection.
Load is balanced dynamically across multiple paths when there is more than one path
from the host server to the storage subsystem. This can eliminate I/O bottlenecks that
occur when many I/O operations are directed to common devices over the same I/O
path, thus improving I/O performance.
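A sketch of changing the algorithm on an AIX PCM-managed disk, assuming hdisk4 is not
in use; because round_robin is not compatible with the single_path (SCSI-2 reserve)
policy, the reserve policy is relaxed in the same command:

# lsattr -El hdisk4 -a algorithm -a reserve_policy
# chdev -l hdisk4 -a reserve_policy=no_reserve -a algorithm=round_robin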
Path Priority
The default path priority is 1 (for each path)

# lspath -AHE -l hdisk4 -p fscsi0 -i 0
attribute  value               description   user_settable

scsi_id    0x40a00             SCSI ID       False
node_name  0x500507630efffc30  FC Node Name  False
priority   1                   Priority      True

Changing the path priority:

# chpath -l hdisk4 -i 3 -a priority=2
path Changed

Changing the path priority has an effect when using the fail_over algorithm
Figure 6-8. Path Priority QV5721.0

Notes:
Path priority modifies the behavior of the algorithm methodology on the list of paths. When
the algorithm attribute value is fail_over, the paths are kept in a list. The sequence in this
list determines which path is selected first and, if a path fails, which path is selected next.
The sequence is determined by the value of the path priority attribute. A priority of 1 is
the highest priority. Multiple paths can have the same priority value, but if all paths have the
same value, selection is based on when each path was configured.
When the algorithm attribute value is round_robin, the sequence is determined by percent
of I/O. The path priority value determines the percentage of the I/O that should be
processed down each path. I/O is distributed across the enabled paths. A path is selected
until it meets its required percentage. The algorithm then marks that path failed or disabled
to keep the distribution of I/O requests based on the path priority value.
The iostat Path Utilization Report

The iostat -m flag displays path information for disks:

# iostat -dm hdisk4 3 1

System configuration: lcpu=4 drives=13 paths=27 vdisks=7

Disks:   % tm_act    Kbps      tps     Kb_read     Kb_wrtn
hdisk4     80.0     3385.4   6771.1      10080           0

Paths:   % tm_act    Kbps      tps     Kb_read     Kb_wrtn
Path3       0.0       0.0      0.0          0            0
Path2      77.0     3385.4   6771.1      10080           0
Path1       0.0       0.0      0.0          0            0
Path0       0.0       0.0      0.0          0            0
Figure 6-9. The iostat Path Utilization Report QV5721.0

Notes:
Health Checking
AIX PCM has a health check capability that can be used to:
Check the paths and determine which ones can currently be used to send I/O
Enable a path that was previously marked failed because of a temporary path fault
Check currently unused paths that would be used if a failover occurred

hcheck_mode attribute
Determines which paths are checked when the health check capability is used
The default value is nonactive, which sends the health check command down paths that
have no active I/O, including paths with a state of failed
A path failure is then detected automatically

hcheck_interval attribute
Defines how often the health check is performed on the paths for a device
# chdev -l hdisk0 -a hcheck_interval=60 -P
Figure 6-10. Health Checking QV5721.0

Notes:
Health checking (hcheck_mode) supports the following modes of operations:
nonactive: When this value is selected, the healthcheck command will be sent to
paths that have no active I/O, including paths that are opened or in failed state, which is
the default setting for MPIO devices
enabled: When this value is selected, the healthcheck command will be sent to paths
that are opened with a normal path mode
failed: When this value is selected, the healthcheck command is sent to paths that
are in failed state
The hcheck_interval attribute defines how often the health check is performed on the
paths for a device. The attribute supports a range from 0 to 3600 seconds. When a value of
0 is selected, health checking is disabled.
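To confirm what a given disk is actually using, both attributes can be read back together; a
minimal sketch (hdisk0 is illustrative):

# lsattr -El hdisk0 -a hcheck_mode -a hcheck_interval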
FC Path Management: Status and Recovery


Listing and checking the paths status
# lspath
...
Enabled hdisk1 fscsi0
Failed hdisk1 fscsi1
...

Once the path is fixed


If hcheck_mode is set to nonactive and hcheck_interval is not 0, then the
path should automatically change to enabled
Otherwise, the path will remain in a failed state until you execute the commands:
# chpath -s disable -l hdisk1 -p fscsi1
# chpath -s enable -l hdisk1 -p fscsi1

Checking the status again


# lspath
...
Enabled hdisk1 fscsi0
Enabled hdisk1 fscsi1
...
Figure 6-11. FC Path Management: Status and Recovery QV5721.0

Notes:
After a path failure, the path continues to show a Failed status even after the path is
physically up again. Unless the hcheck_mode and hcheck_interval attributes are set, the
state will continue to show Failed even after the disk has recovered. To have the state
updated automatically, type chdev -l hdiskX -a hcheck_interval=60.
Disk Reservation
Purpose: Prevent data corruption in a multihost environment
Occurs at the SCSI command level with all types of drives
varyonvg (without any options) places a SCSI reserve on the disks/LUNs in the volume
group
varyoffvg releases the reserve

Problem: Reserves disable any kind of access to the disk/LUN in a shared connectivity
environment
On the sharing hosts, cfgmgr cannot read the PVID
On the sharing hosts, importvg cannot read the VGDA or LVCB

The reserve_policy defines whether a reservation methodology is employed when the
device is opened. Possible values (depending on the type of disk):
no_reserve
single_path
PR_exclusive
PR_shared
Figure 6-12. Disk Reservation QV5721.0

Notes:
Reservation policies:
no_reserve does not apply a reservation methodology for the device. The device might
be accessed by other initiators, and these initiators might be on other host systems.
single_path applies a SCSI-2 reserve methodology for the device, which means the
device can be accessed only by the initiator that issued the reserve.
PR_exclusive applies a SCSI-3 persistent-reserve, exclusive-host methodology when
the device is opened.
PR_shared applies a SCSI-3 persistent-reserve, shared-host methodology when the
device is opened.
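For a disk that must be shared by several hosts (for example, behind dual VIO servers),
the reserve is typically switched off. A sketch, assuming hdisk4 is not in use; clustered
configurations may need PR_shared instead, depending on the disk and multipath driver:

# lsattr -El hdisk4 -a reserve_policy
# chdev -l hdisk4 -a reserve_policy=no_reserve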
Detecting a SCSI Reserve Problem


How do you clear or break a SCSI reserve?
Each storage vendor provides a method to break either a SCSI-2
Reserve or SCSI-3 Persistent Reserve
CAUTION: Clearing SCSI reservation could lead to data corruption.
Thus, it should be done with the assistance of the storage provider.
AIX or PCM logs an error entry if it detects a SCSI reservation
conflict
Usually logged as a SC_DISK_ERR* or FSCSI_ERR* entry
The error detail is hidden in the SCSI sense data, so send it to IBM Support for
assistance
SCSI conflict errors are usually followed by LVM and J2 errors
Look for the labels LVM_IO_FAIL, J2_LOG_EIO and J2_METADATA_EIO
Figure 6-13. Detecting a SCSI Reserve Problem QV5721.0

Notes:
Refer to the storage vendor's documentation on how to clear or break a reservation.
Another reference is the IBM System Storage Multipath Subsystem Device Driver User's
Guide (SC30-4131).
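A quick way to spot a reservation conflict on AIX is to scan the error log for the labels
mentioned above; a minimal sketch (the identifier in the second command is a placeholder
for the value shown by the first):

# errpt | egrep 'SC_DISK_ERR|FSCSI_ERR|LVM_IO_FAIL|J2_LOG_EIO|J2_METADATA_EIO'
# errpt -a -j <identifier>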
Switch Attach Fibre Channel Adapter Attributes

fscsi attributes:
fc_err_recov
dyntrk

# lsattr -El fscsi0
attach        switch     How this adapter is CONNECTED          False
dyntrk        yes        Dynamic Tracking of FC Devices         True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True
scsi_id       0x30100    Adapter SCSI ID                        False
sw_fc_class   3          FC Class for Fabric                    True
Figure 6-14. Switch Attach Fibre Channel Adapter Attributes QV5721.0

Notes:
Fast I/O Failure


Fast failure of I/O is controlled by the fscsi device attribute fc_err_recov:
delayed_fail - wait a period of time before failing I/Os
fast_fail - fail I/Os as soon as link loss is detected

To change:
# chdev -l fscsi0 -a fc_err_recov=fast_fail
Figure 6-15. Fast I/O Failure QV5721.0

Notes:
AIX supports fast I/O failure for Fibre Channel devices after link events in a switched
environment. Fast failure of I/O is controlled by the fscsi device attribute, fc_err_recov.
fast_fail - If the Fibre Channel adapter driver detects that a link was lost between the
storage device and the switch, it waits about 15 seconds to allow the fabric to stabilize. At
that point, if the FC adapter driver detects that the device is not on the fabric, it begins
failing all I/Os at the adapter driver. Any new I/O or future retries of the failed I/Os are failed
immediately by the adapter until the adapter driver detects that the device has rejoined the
fabric. Fast I/O failure can be useful in multipath configurations: it can decrease the I/O fail
times due to the loss of a link between the storage device and the switch, and can allow
faster failover to alternate paths.
delayed_fail - I/O failure proceeds as normal; retries are not immediately failed, and
failover takes longer than it does if fast_fail is specified.
Dynamic Tracking of Fibre Channel Devices


Dynamic tracking of FC devices is controlled by the fscsi device attribute dyntrk:
yes - enables dynamic tracking
no - disables dynamic tracking
To change:
# chdev -l fscsi0 -a dyntrk=yes
The device must be in a Defined state before you change the dyntrk setting
Figure 6-16. Dynamic Tracking of Fibre Channel Devices QV5721.0

Notes:
You can dynamically track fibre channel devices, which allows the dynamic movement of a
fibre channel path between the fabric switch and the storage subsystem by suspending I/O
for 15 seconds while the move occurs.
If dynamic tracking of FC devices is enabled, the FC adapter driver detects when the Fibre
Channel node port ID of a device changes. The FC adapter driver then reroutes traffic
destined for that device to the new Worldwide Port Name (WWPN) while the devices are
still online.
If dynamic tracking is not enabled, you must take the devices offline before you move a
cable from one port to another. Otherwise, failover occurs.
You can use dynamic tracking only in a SAN environment. You cannot use it in a
direct-attach environment.
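Because the fscsi device must be in the Defined state for the change, one approach is to
unconfigure it and its children first. A sketch, assuming the disks under fscsi0 are not in
use and that fcs0 is its parent adapter (otherwise, make the change with chdev -P and
reboot):

# rmdev -R -l fscsi0
# chdev -l fscsi0 -a dyntrk=yes
# cfgmgr -l fcs0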
FCP_ERR10 Error
The FCP_ERR10 error indicates that dynamic tracking or fast fail is enabled, but
the adapter firmware or the SAN configuration does not support these features

LABEL:           FCP_ERR10
IDENTIFIER:      5A7598C3

Date/Time:       Fri May 6 22:14:22 CDT 2011
Sequence Number: 1644
Machine Id:      0006D0A2D900
Node Id:         usap03
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   fscsi7

Description
Additional FC SCSI Protocol Driver Information

Recommended Actions
WAIT FOR ADDITIONAL MESSAGE BEFORE TAKING ACTION

Detail Data
SENSE DATA
0000 0010 0000 00D9 0000 0000 0302 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0001 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
...
Figure 6-17. FCP_ERR10 Error QV5721.0

Notes:
FCP_ERR10 indicates that dynamic tracking or fast fail is enabled, but the adapter
firmware or the SAN configuration does not support these features.
The current settings on the system:
# lsattr -El fscsi7
attach        al         How this adapter is CONNECTED          False
dyntrk        yes        Dynamic Tracking of FC Devices         True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True
The solution is to disable dynamic tracking and fast fail:
# chdev -l fscsi7 -a fc_err_recov=delayed_fail -a dyntrk=no -P
The changes take effect after a system reboot.
Checkpoint
1. What disk attribute determines which path is chosen?
_______________________________________
2. What is the default MPIO path priority value? ______
3. Which parameter needs to be changed for the MPIO
paths to be automatically updated when a path shuts
down and comes back? __________________________
4. What is the purpose of disk reservation?
____________________________________________
5. True/False: Fast fail and dynamic recovery only works in
a switch attached Fibre Channel.
Figure 6-18. Checkpoint QV5721.0

Notes:
Exercise 6: Problem Determination


In this exercise, you will work with:
Algorithms and path priorities
Reserve policies
Figure 6-19. Exercise 6: Problem Determination QV5721.0

Notes:
Unit Summary
Troubleshooting starts with an accurate description of the problem.
The algorithms that define which path is chosen may include fail_over, round_robin
and load_balance; which are available depends on the type of disk.
Path priority modifies the behavior of the algorithm methodology on the list of paths.
The iostat -m flag displays path information for disks.
The health check capability checks the paths to determine which ones can be used to
send I/O.
The reserve_policy can be used to allow one, or any number of, initiators to access
the disk.
The fast failure attribute fc_err_recov determines whether or not to wait a period of
time to fail I/Os once a link is lost.
FC devices can be dynamically tracked, allowing the dynamic movement of an FC path
between the fabric switch and the storage subsystem.
Figure 6-20. Unit Summary QV5721.0

Notes:
Appendix A. Checkpoint solutions


Unit 1

Checkpoint Solutions
1. IBM offers a network storage solution via the N Series product family.
2. The storage servers in our lab support the following top speeds:
   DS4300 - 2 Gb
   DS6800 - 2 Gb
   n3400 - 4 Gb
3. To find a LUN value, use the lscfg command.
4. A quick way to show device information is via the lsdev command.
5. The lsattr command can be used to identify device attributes.
Unit 2

Checkpoint Solutions
1. What two ways can AIX be installed on a SAN disk?
   Installing directly to the SAN disk
   Copying an existing rootvg to a SAN disk

2. To select a SAN disk to boot from, utilize the SMS menu system to select a boot
   from either:
   SAN
   List all devices

3. True/False: There can be more than one rootvg source from SAN.
   True - Firmware (SMS) will find any disk that has a valid boot record.
Unit 3

Checkpoint Solutions
1. The VIO server command cfgdev will configure attached devices and make them
   available.
2. To list all configured disks under the VIO server, use the command
   lsdev -type disk.
3. True/False: To use NPIV, you must use an 8 Gb Fibre Channel HBA assigned to
   the VIO server.
4. When you use NPIV, the virtual Fibre Channel HBA is provided a unique WWPN
   from the Hypervisor.
5. To view available NPIV ports, use the lsnports command.

UNIX Software Service Enablement Copyright IBM Corporation 2011

Copyright IBM Corp. 2011 Appendix A. Checkpoint solutions A-3


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook

Unit 4

Checkpoint Solutions
1. True/False: You need to install an expansion card for Fibre Channel connectivity
   to a Blade Server.
2. What two I/O module options are available to provide Fibre Channel access to a
   Blade Server?
   Switch
   Pass-thru
3. Under IVM, which LPAR owns the physical adapter when creating virtual Fibre
   Channel devices? 1
4. True/False: You can see virtual disks (LUNs) that are assigned to an LPAR via
   the IVM GUI.
   False - You can only see the virtual FCS device.
Unit 5

Checkpoint Solutions
1. True/False: I/Os are queued at several layers in the I/O stack.
2. True/False: The queue_depth can be changed while the volume group is varied on.
3. True/False: If the queue statistics in iostat -D are all zero, the number of I/O
   operations is not being limited by queue_depth, so there is no need to adjust
   queue_depth.
4. True/False: When a physical disk is used as backing storage on a VIOS, make the
   virtual SCSI queue_depth equal to the physical volume queue_depth.
5. True/False: The parameter lg_term_dma specifies the queue size for the adapter.
   It's typically called num_cmd_elems.
Unit 6

Checkpoint Solutions
1. What disk attribute determines which path is chosen?
algorithm
2. What is the default MPIO path priority value? 1
3. Which parameter needs to be changed for the MPIO
paths to be automatically updated when a path shuts
down and comes back?
hcheck_interval needs to be non-zero
4. What is the purpose of disk reservation?
To prevent access from another host to the same disk.
5. True/False: Fast fail and dynamic recovery only work in switch-attached Fibre
   Channel.
   True