Front cover
Student Notebook
ERC 1.0
Trademarks
IBM and the IBM logo are registered trademarks of International Business Machines
Corporation.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide:
AIX BladeCenter DB2
DS4000 DS6000 DS8000
FlashCopy Power Systems POWER
PowerHA PowerVM pSeries
Redbooks System p System Storage
xSeries
Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the
United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
VMware and the VMware "boxes" logo and design, Virtual SMP and VMotion are registered
trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other
jurisdictions.
Other product and service names might be trademarks of IBM or other companies.
Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Course description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Purpose
This course is designed to provide enhanced knowledge of various
SAN-related configurations and SAN management practices with AIX-based
Power Systems. Hands-on exercises reinforce the lecture and give
students practice configuring, managing, and performing common
operations relating to Fibre Channel-based storage subsystems, Virtual
I/O Server configuration, SAN boot considerations, and IBM Power
Blades. The lab environment consists of Power Systems servers (such as
the model p6 520), IBM Fibre Channel switches, and IBM storage servers
(such as the DS4000, DS6000, and N series n3400).
Audience
The audience for this training includes AIX technical support
individuals, AIX developers, AIX system administrators, system
architects and engineers, product engineers, and post-sales support
teams.
Prerequisites
Students attending this course are expected to have knowledge of AIX
SAN operations. These skills can be obtained by attending the
following course:
AHQV334 - PowerVM Virtual I/O Server Configuration
AHQV571 - Fibre Channel Storage for AIX on Power Systems
Objectives
After completing this course, you should be able to:
Describe common interaction of AIX within a Fibre Channel (FC)
environment
Interpret AIX boot strategies using SAN
Select key FC related performance characteristics to monitor
Use AIX commands for performance monitoring
Agenda
Block 1
Welcome
Unit 1 - SAN Overview
Exercise 1 - Introduction to Lab Environment
Unit 2 - SAN Boot
Exercise 2 - SAN Boot Operations
Block 2
Unit 3 - BladeCenter SAN
Exercise 3 - BladeCenter Demonstration
Unit 4 - Virtual I/O Server
Exercise 4 - Virtual I/O Server Operations
Block 3
Unit 5 - SAN Performance Monitoring
Exercise 5 - SAN Performance Monitoring
Block 4
Unit 6 - Problem Determination
Exercise 6 - Problem Determination
Text highlighting
The following text highlighting conventions are used throughout this book:
Bold Identifies file names, file paths, directories, user names,
principals, menu paths and menu selections. Also identifies
graphical objects such as buttons, labels and icons that the
user selects.
Italics Identifies links to web sites, publication titles, is used where the
word or phrase is meant to stand out from the surrounding text,
and identifies parameters whose actual names or values are to
be supplied by the user.
Monospace Identifies attributes, variables, file listings, SMIT menus, code
examples and command output that you would see displayed
on a terminal, and messages from the system.
Monospace bold Identifies commands, subroutines, daemons, and text the user
would type.
References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com
.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
http://www.snia.org
Storage Network Industry Association (SNIA)
http://www.fiberchannel.org
Fibre Channel Industry Association (FCIA)
REDP-4517 Harnessing the SAN to Create a Smarter
Infrastructure Redbook
SG24-6050 Practical Guide for SAN with pSeries Redbook
SG24-5470 Introduction to Storage Area Networks Redbook
Copyright IBM Corp. 2011 Unit 1. SAN with Power Systems Overview 1-1
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Unit Objectives
After completing this unit, you should be able to:
Describe IBM SAN product offerings
Discuss basic aspects of SAN interaction with Power systems
Describe course lab environment
Use AIX commands to identify system resources
Notes:
Our discussion throughout this course deals with Storage Area Network (SAN) devices,
specifically disk devices and how they attach to AIX. We will look at IBM product offerings
and see how AIX interacts with these storage servers. SAN is not a new concept; it has
been available for many years. What is changing over time is that the cost of running a
SAN continues to fall, making it available to many more customers. When you add the
virtualization capabilities of Power Systems, SAN becomes a valuable asset to just about
any AIX installation.
(Visual: storage attachment options for AIX, including local bus devices (SCSI, SATA,
SAS, USB), 10 Gb Ethernet with FCoE and iSCSI, NAS servers and appliances, and
Ethernet gateways.)
Notes:
Where once we only worried about locally attached storage devices, today's data center
can be a complex, interconnected web of devices. While AIX recognizes each storage
device as an hdisk, the actual storage device can be connected in a number of ways. The
visual above summarizes how Power Systems access storage resources.
Our discussion will focus on the top portion of this visual, as we look at AIX and VIO server
configurations with a SAN.
Notes:
If you have multiple AIX instances installed, booting from a SAN device can provide
flexibility and reliability. You gain flexibility by enabling migration or duplication of the AIX
image throughout the SAN. Reliability is attained via the RAID settings of the storage server.
We will discuss SAN boot in unit 2.
(Visual: NPIV configuration, with client partitions Pa, Pb, and Pc connected through the
hypervisor to an NPIV-capable Fibre Channel HBA in the hardware.)
Notes:
As we mentioned previously, SAN becomes an important factor when we consider
virtualization of Power Systems. This may include the VIO server owning the Fibre
Channel-attached devices, or the virtualization of the Fibre Channel HBA. In either case,
we need to better understand how the VIO server interacts with Fibre Channel devices.
We will discuss the VIO server in unit 3.
(Visual: IBM BladeCenter with multiple Power Blades and virtual Fibre Channel
connections.)
Notes:
IBM BladeCenter provides an excellent platform for running AIX, as well as many other
operating systems. Management of IBM BladeCenter resources, while not complex, does
present a few differences compared to a Power Systems server.
We will discuss IBM BladeCenter in unit 4.
Notes:
We will discuss AIX SAN performance issues in unit 5.
Notes:
We will discuss AIX problem determination steps in unit 6.
Entry level systems: DS3000 family
Network storage appliances: N Series
Notes:
IBM offers a wide variety of storage products, from small scale entry level systems to
enterprise-class large scale products. We will have an opportunity to work with some of
these during our exercises. For a current listing of products, please refer to
http://www.ibm.com/systems/storage/disk/.
VIOS AIX
hdisk0 hdisk1
Software
hdisk0 hdisk1
Hardware
LUN LUN LUN
hdisk2 hdisk3 hdisk4 hdisk5
Notes:
This visual represents our lab environment. As you can see, we have configured a number
of LUNs for each of your lab systems, so you will have an opportunity to interact with a
number of technologies and configuration options.
Notes:
Between our Power system and the storage server is the network fabric. When a storage
fabric is configured, there are two basic structures that can be followed: physical or logical
separation. A logical separation provides the most flexibility, as all switches interconnect
and you can adjust paths in a number of different ways. A physical separation, as we use
in our lab, does limit flexibility, but gives full separation between paths. In either case, you
need solid documentation, and make sure to label your cables!
Heritage:
Manufactured by LSI
Architecture:
2 Gb bus topology
Common management interface with DS3000 and DS5000
Notes:
The first of our storage servers is also the oldest. These servers were very popular 10
years ago, providing customers a modular approach to building a SAN. The base unit
contains 14 disks and can be attached to multiple expansion units as demands change.
Heritage:
Designed and manufactured by IBM
Architecture:
2 Gb switch topology
Notes:
Our next storage server is our only device designed and manufactured by IBM. The
DS6000 was intended as an enterprise-class storage server that could also attach to open
systems, including AIX. A customer running AIX alongside a mainframe could look to the
DS6000 as a platform that attaches to both systems. It is also designed in a modular
fashion, enabling a customer to purchase a base unit and attach additional expansion
modules as needed.
Heritage:
Manufactured by NetApp
Architecture:
4 Gb switch topology
Notes:
Our last storage server is the most current in our lab. Introduced in 2010, the n3400 fits
into the entry-level tier of the N series storage servers. The N series product family
provides Fibre Channel attachment as well as Ethernet attachment, and allows a customer
to grow an installation to very large capacity without the need to learn a new management
platform.
Notes:
While there are some special commands available for specific storage server products,
most of your work will involve standard AIX commands. Using common commands such as
lsdev or lscfg, you can gather critical information about devices backed by SAN storage
servers.
# lsdev -C
. . .                                              All known devices appear
fcnet0 Defined   00-08-01 Fibre Channel Network Protocol Device
fcnet1 Defined   00-09-01 Fibre Channel Network Protocol Device
fcs0   Available 00-08    FC Adapter
fcs1   Available 00-09    FC Adapter
fcs2   Available C5-T1    Virtual Fibre Channel Client Adapter
fscsi0 Available 00-08-02 FC SCSI I/O Controller Protocol Device
fscsi1 Available 00-09-02 FC SCSI I/O Controller Protocol Device
fscsi2 Available C5-T1-01 FC SCSI I/O Controller Protocol Device
. . .
Notes:
The lsdev command can be used to display the available ports. In the visual, there are
three ports: fcs0, fcs1, and fcs2. The first two HBAs are physical adapters, while fcs2 is a
virtual adapter. You can also see that while the physical adapters have an fcnet device
configured on them, the virtual adapter does not.
# lscfg -vl fcs0 | grep Net               Identify WWPN of port
        Network Address.............10000000C96704E4

# lscfg -vl hdisk5                        List detailed information about a device
  hdisk5  U789C.001.DQDC383-P1-C4-T1-W500507630E01FC30-L4011406200000000
          MPIO Other FC SCSI Disk Drive
        Manufacturer................IBM
        Machine Type and Model......1750500          DS6000-backed device
. . .

# getconf DISK_SIZE /dev/hdisk5
20480
# lscfg -l hdisk3
hdisk3 U789C.001.DQDC383-P1-C4-T1-W500507630E01FC30-
L4010409100000000 MPIO Other FC SCSI Disk Drive
Notes:
You should be able to identify the LUN assignment from the storage subsystem to AIX
using the lscfg command. The visual example shows three hdisk devices provided by
multiple storage servers (hdisk1 is a virtual SCSI disk provided by a VIO server). One of
the hdisk devices is provided by a DS6000 server, and this device shows a LUN ID of 4.
This is because the DS6000 groups individual logical drives (volumes) into volume groups
and assigns them together. So, the address LUN 4 represents this volume group.
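The location code itself can be decomposed mechanically: the storage port WWPN follows the "W" segment, and the LUN field follows the "L" segment. A minimal sketch in Python (illustrative only; the location code is the hdisk5 example above, and the helper name is our own):

```python
# Split an MPIO FC location code (as reported by lscfg) into its parts.
loc = "U789C.001.DQDC383-P1-C4-T1-W500507630E01FC30-L4011406200000000"

def parse_fc_location(code):
    """Return (target_wwpn, lun_field) from an AIX FC location code."""
    wwpn = lun = None
    for part in code.split("-"):
        if part.startswith("W") and len(part) == 17:
            wwpn = part[1:]          # 16 hex digits after the W
        elif part.startswith("L") and len(part) > 1:
            lun = part[1:]           # raw LUN field after the L
    return wwpn, lun

wwpn, lun = parse_fc_location(loc)
print(wwpn)  # 500507630E01FC30
print(lun)   # 4011406200000000
```

Note that interpreting the LUN field (for example, mapping it to the DS6000 volume-group LUN 4 mentioned above) is storage-server specific; this sketch only separates the raw fields.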
Initiator
# lsattr -El fscsi0
attach       switch    How this adapter is CONNECTED         False
dyntrk       yes       Dynamic Tracking of FC Devices        True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
. . .

hdisk
# lsattr -El hdisk5
PCM           PCM/friend/fcpother Path Control Module              False
algorithm     fail_over           Algorithm                        True
clr_q         no                  Device CLEARS its Queue on error True
dist_err_pcnt 0                   Distributed Error Percentage     True
dist_tw_width 50                  Distributed Error Sample Time    True
hcheck_cmd    test_unit_rdy       Health Check Command             True
. . .
Notes:
Using the lsattr command, you can identify attributes of various resources under AIX.
This visual shows three key elements within the storage fabric of the Power server:
fcsX, fscsiX, and hdiskX. The last column reports whether the attribute can be modified
by the user (True).
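When auditing many devices, it can help to parse this output into a structure. A minimal sketch in Python (illustrative; the sample lines reproduce the fscsi0 output above, and the function name is our own):

```python
# Parse `lsattr -El` style output into {attribute: (value, user_settable)}.
# The final field on each line reports whether the attribute is settable.
sample = """\
attach       switch    How this adapter is CONNECTED         False
dyntrk       yes       Dynamic Tracking of FC Devices        True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
"""

def parse_lsattr(text):
    attrs = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue  # skip blank or truncated lines
        name, value, settable = fields[0], fields[1], fields[-1]
        attrs[name] = (value, settable == "True")
    return attrs

print(parse_lsattr(sample)["fc_err_recov"])  # ('fast_fail', True)
```

The same approach works for the hdisk attributes shown above, since the line layout is identical.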
# lspath -l hdisk5 -F'status name path_id parent connection'
Enabled hdisk5 0 fscsi0 500507630e01fc30,4011406200000000
Enabled hdisk5 1 fscsi0 500507630e03fc30,4011406200000000
Enabled hdisk5 2 fscsi1 500507630e81fc30,4011406200000000
Enabled hdisk5 3 fscsi1 500507630e83fc30,4011406200000000

# lspath -s failed                        List failed path(s)
Failed hdisk1 fscsi0
Failed hdisk2 fscsi0
Notes:
The lspath command shows the known paths between AIX and the storage server. If a
link has encountered a problem, it is placed in the Failed state. The WWPN for disk drives
provided by the storage subsystem can also be identified with the lspath command.
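A quick way to verify multipath coverage is to count Enabled paths per parent adapter. A minimal sketch in Python (illustrative; the sample lines reproduce the hdisk5 output above, and the function name is our own):

```python
# Summarize lspath output: count Enabled paths per parent fscsi device.
from collections import Counter

sample = """\
Enabled hdisk5 0 fscsi0 500507630e01fc30,4011406200000000
Enabled hdisk5 1 fscsi0 500507630e03fc30,4011406200000000
Enabled hdisk5 2 fscsi1 500507630e81fc30,4011406200000000
Enabled hdisk5 3 fscsi1 500507630e83fc30,4011406200000000
"""

def enabled_paths_by_parent(text):
    counts = Counter()
    for line in text.splitlines():
        status, _disk, _path_id, parent, _conn = line.split()
        if status == "Enabled":
            counts[parent] += 1
    return dict(counts)

print(enabled_paths_by_parent(sample))  # {'fscsi0': 2, 'fscsi1': 2}
```

An adapter showing zero Enabled paths here would be the same condition that `lspath -s failed` reports directly.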
Checkpoint
1. IBM offers a network storage solution via the ________
product family.
2. The storage servers in our lab support the following top
speeds:
DS4300 _______
DS6800 _______
n3400 _______
Unit Summary
IBM offers a wide range of disk storage systems that work
with AIX on Power Systems
A storage network can be set up via physical boundaries or
logical boundaries
Standard AIX commands such as lscfg, lsdev, or lsattr
play a vital role in SAN device management
References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com
.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
http://www.ibm.com/developerworks/wikis/display/
WikiPtype/AIXV53SANBoot
AIXV53SANBoot
http://www.wmduszyk.com/?p=3730&langswitch_lang=en
AIX and SAN Boot? Sure!
ftp://index.storsys.ibm.com/subsystem/aix/2.1.0.3/rd_sddpcm_aix.txt
SDDPCM Readme
Unit Objectives
After completing this unit, you should be able to:
Interpret AIX boot strategies using SAN
Identify resources required to complete a SAN boot
Configure a SAN attached hdisk as a boot device
Notes:
In 2002, IBM introduced support for booting pSeries and xSeries systems directly from
SAN-based storage. In most cases, the computer (xSeries or pSeries), the SAN HBA, the
SAN switches, and the storage arrays must conform to the latest firmware levels. Once the
firmware is current, the process of booting from the SAN is quite simple.
Boot from SAN - otherwise known as remote boot or root boot - refers to a server
configuration where the server OS is installed on a LUN that does not reside inside the
server chassis. Boot from SAN utilizes drives located in a disk-storage subsystem that are
connected via an HBA located in the server chassis.
Of course, different SAN storage servers call virtual disks by various names. For
consistency, we will refer to this process as booting from a SAN disk for the remainder of
this discussion.
Notes:
Better I/O performance due to caching and striping across multiple spindles
- Depending on the storage server, a LUN can be spread over many physical
volumes, something you would probably not do with a locally attached disk for
rootvg. Today's storage servers provide high levels of cache to improve
performance even further.
Availability with built in RAID protection
- While it is possible to configure a storage server without some level of RAID
protection, it is highly unlikely a customer would choose to do this.
Ability to easily redeploy disk
- Because the disk is not attached to a specific server, you can redeploy the AIX
instance by re-assigning the LUN to another platform. Of course this will bring into
consideration various configuration issues, but if you have a basic AIX rootvg
configured, it can be easily moved.
Ability to FlashCopy the rootvg for backup
- If your storage server provides advanced backup functions, you can maintain a
backup strategy outside of the AIX tools.
Notes:
There is almost always an opposite effect to a process. While booting from a SAN disk may
make perfect sense, there could be issues to consider. Of course, measuring the validity of
these will bring to mind other questions. For example, if you are unable to access the SAN,
it is true that you will not be able to boot your system. However, if your data is also located
on the SAN, you face even larger issues. Also, configuring a dump space is a consideration,
though when was the last time you needed to analyze a dump report?
Storage server
Notes:
To successfully boot from a SAN disk, each of these should be considered. Are there
firmware considerations? Is the fabric zoned correctly to allow for your device?
Notes:
There is no requirement to install AIX directly to a SAN disk for this boot process to work. If
you already have AIX installed to a local disk, you can move rootvg to a SAN disk using the
alt_disk_copy command.
alt_disk_copy Command
Create a copy of current rootvg
# alt_disk_copy -d hdisk4
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
. . .
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk4 blv=hd5

# lspv
hdisk0 000db98104bd16c5 rootvg         active
hdisk1 000db98150f86937 altinst_rootvg
. . .
Notes:
The visual above shows an example of using the alt_disk_copy command to create a
copy of the rootvg found on hdisk0 onto another disk. What is not shown in this example is
the process of confirming that the target is in fact the SAN disk you ultimately want to boot
from. This process requires you to reboot the system, which makes the target disk the new
rootvg, and hdisk0 old_rootvg.
-------------------------------------------------------------------------------
Navigation Keys:
X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Notes:
The next series of visuals deals with the process of selecting a SAN disk to boot from.
Our first menu is the primary SMS menu, and we select option #5, Select Boot
Options.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen       X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Notes:
Our next menu will select the boot device.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen       X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Notes:
From this menu, we select option #5. A SAN disk appears to a Power system the same as
a locally attached disk.
Notes:
The quickest way to find a SAN disk to boot from is actually option #9, since it looks for
all bootable devices. For discussion purposes, we will use option #3, so we can see
additional SMS menu screens.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen       X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Notes:
This menu shows available FC HBAs installed on your system. This example shows a
single dual-ported adapter in slot C1.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen       X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Notes:
Any disk that the Power system firmware detects to have a boot record is identified in
this menu. In our example, we see a virtual SCSI disk (device #1), and we also see two
other disks. These additional disks are attached via SAN, though there is no direct
indication of this on the screen. We know this because the location codes provided are
WWNs of two different SAN storage servers. If we have documented our SAN correctly, we
can find the correct device.
Boot Message
Identify where boot device is
(IBM logo splash screen)
|
Elapsed time since release of system processors: 63293 mins 1 secs
-------------------------------------------------------------------------------
Welcome to AIX.
boot image timestamp: 20:00 02/03
The current time and date: 20:11:18 02/03/2011
processor count: 2; memory size: 1024MB; kernel size: 35064581
boot device: /pci@800000020000204/fibre-channel@0/disk@200c00a0b80fa822
-------------------------------------------------------------------------------
Notes:
The AIX splash screen will show you the source of AIX. In our example above we are
booting from a device at address
/pci@800000020000204/fibre-channel@0/disk@200c00a0b80fa822.
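The boot device path can be split into its components if you need the target identifier for your SAN documentation. A minimal sketch in Python (illustrative; the path is the one from the splash screen above, and the helper name is our own):

```python
# Extract the final node and its target identifier from an Open Firmware
# boot device path: the component after "disk@" names the storage target.
boot_dev = "/pci@800000020000204/fibre-channel@0/disk@200c00a0b80fa822"

def boot_target(path):
    """Return (node_name, target_id) from the last path component."""
    last = path.rsplit("/", 1)[-1]       # e.g. "disk@200c00a0b80fa822"
    node, _, target = last.partition("@")
    return node, target

print(boot_target(boot_dev))  # ('disk', '200c00a0b80fa822')
```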
Checkpoint
1. In what two ways can AIX be installed on a SAN disk?
____________________________________________________
____________________________________________________
Unit Summary
Booting AIX from a SAN disk provides much flexibility to a
customer
Ability to easily copy and deploy AIX instances
Multiple AIX instances can exist and be chosen from
You can either load directly to a SAN disk, or copy an existing
AIX image to a SAN disk
References
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp
Virtual I/O Server product documentation from the IBM
Systems Hardware Information Center
http://techsupport.services.ibm.com/server/vios
Virtual I/O Server Support
http://www.ibm.com/developerworks/systems/articles/DLPARchecklist.
html
Dynamic LPAR tips and checklists
SG24-7940 PowerVM Virtualization on IBM System p Introduction
and Configuration
Unit Objectives
After completing this unit, you should be able to:
- Discuss VIO server architecture as it relates to Fibre Channel
- Describe how to configure virtual Fibre Channel adapters
- Utilize VIO server commands to identify Fibre Channel resources
- Differentiate between physical and virtual Fibre Channel resources
[Visual: virtual processors (VP) in a shared processor pool, mapped by the hypervisor onto the physical cores of the hardware]
Notes:
IBM Power servers utilize a hypervisor to create virtual resources to various operating
systems. As the visual example above shows, the hypervisor enables core functions, like
processing, to be virtualized.
Notes:
To virtualize storage resources, a Virtual I/O (VIO) server is utilized. This specialized operating system's sole purpose is to provide a virtualization path for storage and networking within a Power server.
The visual above shows an example of configuring virtual SCSI paths from the VIO server to three different logical partitions (LPARs). In this example, we have assigned either a SCSI or Fibre Channel HBA to the VIO server, which then owns the physical disks. These physical devices can then be assigned, either whole or in part, as virtual resources to an LPAR.
Notes:
Another method of providing storage resources via the VIO server is through virtual Fibre
Channel (vFC) devices. This method moves the storage device ownership to the LPAR by
assigning a virtual HBA to the LPAR, instead of sharing a disk owned by the VIO server.
We will discuss virtual Fibre Channel in greater detail in the next topic.
login as: padmin
padmin@9.47.87.85's password:
Last unsuccessful login: Mon Mar 14 13:49:23 PDT 2011 on /dev/vty0
Last login: Mon Mar 14 13:55:24 PDT 2011 on /dev/vty0
$ lsdev | grep fcs
fcs0  Available  8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1  Available  8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
Notes:
You can manage your VIO server from either a GUI or a CLI. Each method provides its own advantages and disadvantages. In the case of the CLI, you have the VIO server interface, or you can access an AIX shell via the oem_setup_env command.
Discovering Devices
From the VIO server prompt, use the cfgdev command
Similar to the cfgmgr command under AIX
$ cfgdev
Some error messages may contain invalid information
for the Virtual I/O Server environment.
Method error (/etc/methods/cfgpcmui -l dac0 ):
        0514-082 The requested function could only be performed for some
        of the specified paths.
. . .
Notes:
Before you can use a Fibre Channel resource, you need to verify it is configured. If there is
no fcs device, you need to discover the resource. This can be done with the cfgdev
command. You can also access the AIX shell to run the cfgmgr command.
While this process happens when a system is booted, with many generations of Fibre
Channel HBAs the laser is deactivated if no devices are found downstream. Running the
cfgdev command will re-activate the laser, and attempt to re-acquire any storage servers
on the network.
$ lsdev -type adapter
name  status     description
. . .
fcs0  Available  8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1  Available  8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs2  Defined    4Gb FC PCI Express Adapter (df1000fe)
fcs3  Defined    4Gb FC PCI Express Adapter (df1000fe)
. . .
Notes:
The lsdev command, similar to its counterpart under AIX, provides a list of known devices. If a device shows as Available, you can utilize it. If a device shows as Defined, it was known of once, but is not currently responding to requests.
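The Available/Defined distinction can be filtered mechanically. The sketch below is illustrative only: the sample text stands in for real lsdev output, and on a live VIO server you would pipe `lsdev -type adapter` into the same awk filter instead.

```shell
# Hypothetical sketch: list only the devices in the Defined state
# (known once, but not currently responding). The sample lines below
# stand in for real `lsdev -type adapter` output.
lsdev_sample='fcs0 Available 8Gb PCI Express Dual Port FC Adapter
fcs1 Available 8Gb PCI Express Dual Port FC Adapter
fcs2 Defined 4Gb FC PCI Express Adapter
fcs3 Defined 4Gb FC PCI Express Adapter'

# Keep only the names of devices whose status column is Defined.
defined=$(printf '%s\n' "$lsdev_sample" | awk '$2 == "Defined" {print $1}')
printf '%s\n' "$defined"
```

These are the devices worth investigating with cfgdev (or cfgmgr from the AIX shell) before attempting to use them.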
Manufacturer................IBM
Machine Type and Model......1722-600
ROS Level and ID............30393134
Serial Number...............
Device Specific.(Z0)........0000053245004032
Notes:
The lsdev command can also be used to show known storage devices. The output will show both physical and virtual devices.
$ lsdev -virtual -type adapter
name      status     description
vasi0     Available  Virtual Asynchronous Services Interface (VASI)
vbsd0     Available  Virtual Block Storage Device (VBSD)
vfchost0  Defined    Virtual FC Server Adapter
vfchost1  Available  Virtual FC Server Adapter
vhost0    Available  Virtual SCSI Server Adapter
vhost1    Available  Virtual SCSI Server Adapter
vsa0      Available  LPAR Virtual Serial Adapter
Notes:
This industry standard method allows multiple initiators to share a single physical port,
easing hardware requirements for Storage Area Networks. In the PowerVM case, the VIOS
partition will have the actual physical connection.
Host bus adapter (HBA) is the name used for the Fibre Channel adapter.
A worldwide port name (WWPN) is the unique identifier for a port on a Fibre Channel fabric. Think of it as being similar to a Media Access Control (MAC) address for Ethernet ports.
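One practical wrinkle with WWPNs: AIX tools such as lscfg print them as a compact hex string, while most SAN switch interfaces display them colon-separated. A minimal sketch of the conversion, using the WWPN from the examples later in this unit:

```shell
# Illustrative only: reformat a WWPN from the compact form AIX tools
# print (e.g. the Network Address field of `lscfg -vl fcsX`) into the
# colon-separated form most SAN switch interfaces display.
wwpn='10000000C9923334'

# Insert a colon after every two hex digits, then trim the trailing colon.
wwpn_colon=$(printf '%s\n' "$wwpn" | sed 's/../&:/g; s/:$//')
printf '%s\n' "$wwpn_colon"   # 10:00:00:00:C9:92:33:34
```

This conversion is handy when copying a WWPN from AIX output into a switch zoning configuration.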
Notes:
With NPIV, the VIOS's role is fundamentally different. The VIOS facilitates adapter sharing only; there is no device-level abstraction or emulation. Rather than a storage virtualizer, a VIOS serving NPIV is a pass-through, providing a Fibre Channel Protocol (FCP) connection from the client to the SAN.
If you use VSCSI disks, you need to use the VIOS to create each VSCSI VTD and map each to the vhost adapter for each client. With NPIV, the SAN can zone storage to the client LPAR's WWPN. You do not need to create the individual VTDs on the VIOS for each virtual disk. This reduces the amount of VIOS management needed.
Just like virtual SCSI, when configuring the virtual Fibre Channel adapters you specify the
VIOS partition name and virtual adapter ID in the client configuration, and you can specify
the client information when configuring the VIOS server adapter.
Notes:
Not all Fibre Channel adapters and SAN switches support NPIV. NPIV-capable switches present the virtual WWPNs to other SAN switches and devices as if they represent physical FC adapter endpoints. Additional SAN switches and disk/tape devices don't need to be NPIV aware.
The dynamic remapping feature is for ease of maintenance. For example, if you need to
replace an adapter, you can dynamically remap the virtual server adapter(s) to another
physical adapter without interrupting the client I/O operations.
Notes:
The visual above shows the HMC GUI used to create the virtual Fibre Channel adapter. Just like virtual SCSI adapters, it can be created in the Create LPAR wizard, in a partition's profile, or dynamically.
Notice that just like for VSCSI adapters, the client must specify the VIOS partition name
and adapter ID. The VIOS virtual adapter will also point to the client virtual adapter. There
is always a one-to-one relationship between client adapters and server adapters.
Multiple virtual server adapters can be created for a single physical Fibre Channel port.
Notes:
View added virtual Fibre Channel adapter
The virtual Fibre Channel adapter uses a virtual adapter slot like other virtual adapters and
its properties can be seen in the partition properties or the partition profile properties.
Notice the fcs device naming convention in the lsdev output.
Notes:
WWPNs are generated from the range of names available for use, based on a prefix in the vital product data on the managed system. This 6-digit prefix comes with the purchase of the managed system and includes the ability to use 32,000 pairs of WWPNs. (If you run out of WWPNs, you must obtain an activation code for an additional set of 32,000 pairs.)
When adding a virtual Fibre Channel adapter, a pair of WWPNs is generated and assigned.
If you add the adapter with a DLPAR operation and later shut down the partition, the WWPN pair that was assigned cannot be reused. A new pair will be assigned if you add the virtual Fibre Channel adapter back. To avoid wasting WWPNs, use the Save Current
Configuration task to save the current configuration after a DLPAR operation in which a
virtual Fibre Channel adapter is added. This creates a new profile and you will have to give
it a unique name. Later, you can rename the profiles if desired or delete profiles that are no
longer needed.
Notes:
The lsnports command lists the NPIV-capable ports on your system. Its output includes these fields:
- fabric: whether the port has fabric support (1)
- tports: total number of NPIV ports
- aports: number of available NPIV ports
- swwpns: total number of target worldwide port names supported
- awwpns: number of target worldwide port names available
If no parameter is specified after the -fcp flag in the vfcmap command, the command
un-maps the virtual Fibre Channel adapter from the physical Fibre Channel port. For
example: $ vfcmap -vadapter vfchost0 -fcp
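When planning NPIV mappings, the awwpns field tells you how much headroom a port has left. The sketch below parses lsnports-style output for that field; the sample line and its column values are invented for illustration (run lsnports on a real VIO server for live data), and the column order follows the field list above.

```shell
# Hypothetical sketch: given lsnports-style output (columns: name,
# physloc, fabric, tports, aports, swwpns, awwpns), report how many
# target WWPNs remain available on a given port. Sample data is
# invented for illustration.
lsnports_sample='fcs0 U789C.001.DQD1760-P1-C2-T1 1 64 63 2048 2046
fcs1 U789C.001.DQD1760-P1-C2-T2 1 64 64 2048 2048'

# Pull the awwpns column (field 7) for fcs0.
awwpns_fcs0=$(printf '%s\n' "$lsnports_sample" | awk '$1 == "fcs0" {print $7}')
printf 'fcs0 has %s WWPNs available\n' "$awwpns_fcs0"
```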
Status:LOGGED_IN
FC name:fcs0            FC loc code:U789C.001.DQD1760-P1-C2-T1
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs1    VFC client DRC:U8203.E4A.10D4461-V15-C6-T1
Notes:
To view NPIV virtual mapping, use the lsmap command.
The -npiv flag to lsmap is used to display the server binding information between the
virtual Fibre Channel and the physical Fibre Channel adapter. It is also used to display
client adapter attributes that are being sent to the server adapter.
The VFC client DRC field is the virtual Fibre Channel client's Dynamic Reconfiguration Connection (DRC) identification.
Checkpoint
1. The VIO server command ____________ will configure attached devices and make them available.
2. To list all configured disks under the VIO server, use the command ____________.
3. True/False: To use NPIV, you must use an 8 Gb Fibre Channel HBA assigned to the VIO server.
4. When you use NPIV, the virtual Fibre Channel HBA is provided a unique ____________ from the Hypervisor.
5. To view available NPIV ports, use the ____________ command.
Unit Summary
- The VIO server provides virtual resources, including Fibre Channel devices, to logical partitions
- The configuration of Fibre Channel devices within a VIO server is no different than under AIX
- You can use VIO server commands, or AIX commands, to configure and manipulate Fibre Channel devices
- Node Port ID Virtualization (NPIV) enables virtual Fibre Channel adapters for logical partitions
References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
http://www-03.ibm.com/systems/bladecenter/hardware/servers/
BladeCenter Servers
http://www-03.ibm.com/systems/bladecenter/hardware/openfabric/fibrechannel.html
BladeCenter Open Fabric
Unit Objectives
After completing this unit, you should be able to:
- Describe IBM BladeCenter architecture
- Define IBM BladeCenter SAN I/O structure
- Navigate Integrated Virtualization Manager
- Identify physical and virtual Fibre Channel devices
BladeCenter Environment
BladeCenter S
BladeCenter H
BladeCenter E
BladeCenter T
BladeCenter HT
Notes:
Beginning with the original BladeCenter chassis (the BC-E), the IBM BladeCenter product family has offered customers a small-footprint solution that supports multiple processor and I/O architectures, as well as multiple operating systems.
- BladeCenter S (BC-S) - Small office solution. Supports up to 12 locally attached disk
drives, enabling a small form factor server farm.
- BladeCenter E (BC-E) - First generation Blade Server platform.
- BladeCenter T (BC-T) - BC-E chassis modified for Telecomm industry.
- BladeCenter H (BC-H) - Second generation chassis. Supports all Blade Servers.
- BladeCenter HT (BC-HT) - BC-H chassis modified for Telecomm industry.
[Visual: BladeCenter chassis with an I/O Module connecting to switches, hosts, routers, gateways, etc.]
Notes:
To access a Fibre Channel network from a Power Blade, you will need two key
components. The first is an expansion card that is installed into the Power Blade. The
second component is an I/O Module that is installed in the BladeCenter chassis. Once you
have these in place, you can configure an I/O path.
[Visual: Power Blade models, including the Power 7 based P700, P701, and P702]
Notes:
The Power Blades available at the time of this writing are shown in this visual. They are offered in both Power 6 and Power 7 processor models, in configurations from 2 to 32 cores. The visual above shows both single and dual slot models (32-core models require 2 slots).
I/O Modules
Switch
Brocade 8 Gb 10 and 20 port
Cisco 4 Gb 10 and 20 port
QLogic 8 Gb 20 port
Pass-thru
QLogic
Notes:
You can order a Power Blade with either a 4 Gb or 8 Gb expansion card. Expansion cards
are available in two different types. Each type activates a different type of I/O Module.
Fibre Channel I/O Modules are either a switch module (which can be connected directly to a host or to other switches) or a pass-thru module (which can connect directly to a host in a point-to-point topology, or to a switch to provide a fabric topology).
To get MPIO functionality to a Blade server you need two I/O Modules.
Note: CIOv form factor connects to primary I/O Modules, and CFFh form factor
connects to high speed I/O Modules
For more information, refer to:
http://www-03.ibm.com/systems/bladecenter/hardware/openfabric/fibrechannel.html
Notes:
This visual depicts the GUI logon screen to an Advanced Management Module (AMM) of a
BladeCenter chassis.
Notes:
You can gather Vital Product Data (VPD) about your Power blade via the Monitors ->
Hardware VPD menu option.
Notes:
BladeCenter Open Fabric Manager is part of a comprehensive management solution for IBM BladeCenter. It simplifies blade administration and provides SAN/LAN management, including virtualized I/O (the simplification of I/O addressing and failover). BladeCenter Open Fabric Manager is the management tool that makes it simple to get the most from your I/O. The suite runs on the Advanced Management Module, so you get a single interface for both server administration and SAN/LAN administration. Open Fabric Manager is suitable from SMB to enterprise. It works with all BladeCenter Ethernet and Fibre Channel switches and fabrics (Cisco, Nortel, Brocade, and QLogic) and can help reduce the time it takes you to deploy servers, data, and storage to minutes or hours instead of days or weeks.
Notes:
You can deactivate external ports on an I/O Module via the Advanced Setup menu option.
This is an easy trap to fall into, so when troubleshooting we always make sure we have not
mistakenly turned the external ports off!
Note: The menu option deals with external ports. Remember that the I/O Module also
has internal ports, since it connects to any expansion card installed in up to 14 Blade
Servers. There is no option to turn those ports off.
Notes:
You can access an I/O Module for maintenance or configuration either via the AMM
interface, or in some cases directly. In the case of switch modules, you can configure an IP
address for the module, and then access it directly.
Notes:
If firmware can be updated on an I/O Module, you may need to perform this process occasionally. Some Fibre Channel modules will require firmware updates.
Notes:
The same operating systems you are familiar with on stand-alone Power Servers also work
on Power Blades.
Notes:
Blade servers do not attach to an HMC platform. Since we may need a way to leverage the virtualization capability of the server, we load the VIO Server operating system directly onto the Power Blade (VIO Server can be loaded directly onto a stand-alone Power Server as well, though this is only common on the smallest servers). Once it is loaded, you can configure LPARs, and load AIX, Linux, or IBM i.
Notes:
In this visual we see the properties of a physical Fibre Channel adapter. This is actually the
expansion card we looked at in the previous topic. The I/O Module is down-stream from the
Blade Server, so it does not appear in this configuration.
Notes:
If you configure NPIV, and assign a virtual fcs device to a logical partition, you can view the properties from the Storage tab. In this visual, you can see there are two WWPN groups, though only one is assigned to a physical adapter.
Notes:
Our previous visual showed the properties of a specific LPAR, and the virtual WWPNs
assigned. You can gather this information via another main menu option. In this visual we
see the View Virtual Fibre Channel option, and the sub-menu it provides.
Note: There is no menu option to see LUN devices that are configured to these
WWPNs. To gather that information you will need to access a terminal window, and use
a respective CLI tool.
Part Number.................46M6142
Serial Number...............11S46M6142YK50200338BS
EC Level....................A
Customer Card ID Number.....2B3A
Manufacturer................001B
FRU Number..................46M6138
Device Specific.(ZM)........3
Network Address.............10000000C9923334
. . .
Hardware Location Code......U78A5.001.WIHA986-P1-C11-T1
Notes:
If you have loaded AIX to your Power Blade, you will not need to learn new commands to
gather information. In this visual we see the standard lscfg command, and output you
should be familiar with.
Network Address.............C05076037D160000
ROS Level and ID............
. . .
Device Specific.(Z4)........
Device Specific.(Z5)........
Device Specific.(Z6)........
Device Specific.(Z7)........
Device Specific.(Z8)........C05076037D160000
Device Specific.(Z9)........
Hardware Location Code......U8406.70Y.06CAEBA-V2-C5-T1
Notes:
When looking at a virtual fcs device, the big difference we see is that far less information is provided. What we care most about, though, is the WWPN (the Network Address).
Show.Port> node 17

Port  BB Crdt  RxFldSz  COS  Port Name                Node Name
----  -------  -------  ---  -----------------------  -----------------------
17    20       2048     2-3  10:00:00:00:C9:92:33:34  20:00:00:00:C9:92:33:34
17    20       2048     2-3  C0:50:76:03:7D:16:00:00  C0:50:76:03:7D:16:00:00
Notes:
This visual shows an example of a fabric switch, and how the NPIV setting is enabled. In
this example, port 17 is NPIV enabled. When we look at port 17 specifically, we see not
only the physical adapter WWPNs, but also the virtual WWPNs.
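In the examples in this unit, the hypervisor-generated virtual WWPNs share the managed system's C0:50:76 prefix, while the physical adapter's WWPN starts with the manufacturer's prefix. A hedged sketch of separating the two in switch output (your system's prefix will differ, since it comes from the machine's vital product data):

```shell
# Sketch: split the WWPNs logged in to a switch port into physical and
# hypervisor-generated ones, using the prefix from this unit's examples.
# The prefix is specific to the managed system's vital product data.
port_wwpns='10:00:00:00:C9:92:33:34
C0:50:76:03:7D:16:00:00'

# Virtual WWPNs in these examples begin with the system's C0:50:76 prefix.
virtual=$(printf '%s\n' "$port_wwpns" | grep '^C0:50:76')
printf '%s\n' "$virtual"
```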
Checkpoint
1. True/False: You need to install an expansion card for Fibre
Channel connectivity to a Blade Server.
2. What two I/O modules options are available to provide
Fibre Channel access to a Blade Server?
___________________________________________
___________________________________________
Unit Summary
- IBM BladeCenter provides an excellent platform for many technologies, including Fibre Channel
- From a Power Blade, you can take advantage of today's Fibre Channel options, like 8 Gb speed and NPIV functionality
References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
TD105745 IBM Technical Document: AIX Disk Queue Depth
Tuning for Performance
Unit Objectives
After completing this unit, you should be able to:
- Identify the I/O layers where queuing is handled
- View and change an FC disk's and an FC adapter's tuning attributes
- Monitor and tune the queue depth of disks and disk adapters
- Identify the filemon reports that display I/O activity
- Test I/O throughput using:
  - The time and dd commands
  - The ndisk program (part of the nstress package)
Introduction
Performance tuning has changed over time
From simple locally attached disk performance issues
Toward evaluation of more complex interactions and configurations,
including Fibre Channel and storage arrays
Virtual storage on the AIX client can be manipulated using LVM just
like a normal physical disk
Caution: The virtual disk on the client may already be a logical
volume on the server
LVM features such as mirroring and striping may be implemented on
the client
Performance considerations using dedicated storage still apply to
using virtual storage, such as spreading out hot logical volumes
The client needs to know what types of backing storage it is using to
make informed decisions
Notes:
Storage from a storage array or VIO server can be manipulated using the Logical Volume
Manager (LVM) just like a physical volume. The client can use these devices like any other physically connected hdisk device for boot, swap, mirroring, or any other supported AIX feature.
Performance considerations from dedicated storage are still applicable when using virtual
storage, such as spreading hot logical volumes across multiple disks on multiple adapters
so that parallel access is possible.
Notes:
There are many trade-offs related to performance tuning that should be considered. The
key is to ensure there is a balance between them.
The trade-offs include:
Cost versus performance: In some situations, the only way to improve performance is
by using more or faster hardware.
Conflicting performance requirements: There may be conflicting performance
requirements between applications running in multiple LPARs.
Speed versus functionality: Resources added for redundancy or availability may
adversely affect performance. In addition, resources added to improve one
performance bottleneck (example: more Ethernet adapters) may adversely affect
another area (example: consumes more CPU).
Baseline values provide data for comparison later when performance tuning is needed.
Collections over time may show trends to determine when future tuning may be needed.
Notes:
CPU utilization can be split into %user, %system, %idle, and %IOwait. Other CPU metrics
can include the length of the run queues, process/thread dispatches, interrupts, and lock
contention statistics.
Memory metrics include virtual memory paging statistics, file paging statistics, and cache
and TLB miss rates.
Disk metrics include disk throughput (kilobytes read/written), disk transactions
(transactions per second), disk adapter statistics, disk queues (if the device driver and tools
support them), and elapsed time caused by various disk latencies. The type of disk access,
random versus sequential, can also have a big impact on response times.
Network metrics include network adapter throughput, protocol statistics, transmission
statistics, network memory utilization, and much more.
You should create a baseline measurement when your system is running well and under a
normal load. This will give you a guideline to compare against when your system seems to
have performance problems.
Notes:
A functional problem is when the application, hardware or network is not behaving properly.
A performance problem is when the functions are being achieved but the performance is
slow. Sometimes functional problems lead to performance problems. In these cases, rather
than tune the system, it is more important to determine the root cause of the problem and
fix it.
It is quite common for support personnel to receive a problem report which says only that someone has a performance problem on the system, and here is some data for you to analyze. That little information is not enough to accurately determine the nature of a performance problem.
It is important to collect a variety of data that show statistics regarding the various system
components. In order to make this easy, a set of tools supplied in a package called
PerfPMR is available on a public ftp site. The following URL can be used to download your
version using a web browser:
ftp://ftp.software.ibm.com/aix/tools/perftools/perfpmr
[Visual: the AIX I/O stack - Application; Raw Disks, Raw LVs, JFS, JFS2, NFS, Other; VMM; LVM; Disk]
Notes:
Application memory area caches data to avoid I/O.
NFS caches file attributes and has a cached file system for NFS clients.
JFS and JFS2 cache use extra system RAM. JFS uses persistent pages for cache and
JFS2 uses client pages for cache.
Queues exist for both adapters and disks.
Adapter device drivers use Direct Memory Access (DMA) for I/O.
Disk subsystems have read and write caches.
Disks have memory to store commands and data.
I/O operations can be coalesced into fewer I/Os, or broken up into more I/Os as they go
through the I/O stack. I/Os adjacent in a file, logical volume, and disk can be coalesced.
I/Os greater than the maximum I/O size supported will be split up.
LVM Terminology
[Visual: the application layer (raw logical volumes and JFS/JFS2 file systems) maps onto logical volumes within a volume group, which in turn map onto the physical layer of physical disks and disk arrays]
Notes:
The logical volume layer is between the application and physical layers. The application layer consists of the file systems or raw logical volumes. The physical layer consists of the physical disks. LVM maps the data between the application layer and physical storage. Even physical volumes are part of the logical layer, as the physical layer only contains the actual disks, device drivers, and disk arrays that may already be configured.
The physical disk drives, storage arrays, or virtual disks are known as physical volumes in
LVM. All of the physical volumes in a volume group are divided into physical partitions. All
the physical partitions within a volume group are the same size, although different volume
groups can have different physical partition sizes. A volume group is made up of one or
more physical volumes. Within each volume group, one or more logical volumes are
defined. Logical volumes are groups of information located on physical volumes. Each
logical volume consists of one or more logical partitions. Logical partitions are the same
size as the physical partitions within a volume group. Each logical partition is mapped to
one, two or three physical partitions.
Queuing I/Os
I/Os are queued to improve throughput
Notes:
I/Os are queued to improve throughput. If a virtual disk (or LUN) backed by multiple physical disks could accept only one I/O at a time, the I/O service time would be good but the throughput would be poor.
By submitting multiple I/Os to a physical disk, the disk can minimize actuator movement
and get better throughput than is possible by submitting one I/O at a time.
If the number of I/O requests exceed the allowed limit, they will reside in a wait queue until
the resource becomes available. The I/Os being serviced will be in a process queue.
- File system layer: buffers limit the maximum number of in-flight I/Os for each file system
- LVM layer: disk buffers limit the number of in-flight I/Os
- Multipath I/O layer: I/Os are queued if the device driver allows it
- Disk device driver: maximum number of in-flight I/Os defined by the queue_depth attribute
- FC adapters: maximum number of in-flight I/Os specified by the num_cmd_elems attribute
- Disk subsystem: queues I/Os itself
- Disk: can accept multiple I/O requests
Notes:
The operating system has the ability to enforce limits on the number of I/O requests that
can be outstanding from the SCSI adapter to a given SCSI bus or disk drive. These limits
are intended to exploit the hardware's ability to handle multiple requests while ensuring that
the seek-optimization algorithms in the device drivers are able to operate effectively.
For non-IBM devices, it is sometimes appropriate to modify the default queue-limit values
that have been chosen to handle the worst possible case.
The default queue_depth and valid values differs by the manufacturer and type of storage.
# mount
  node     mounted          mounted over     vfs    date          options
-------- ---------------  ---------------  ------ ------------  ---------------
...
         /dev/fslv00      /myfs            jfs2   May 18 17:49  rw,cio,log=/dev/loglv00
# umount /myfs
# chdev -l hdisk6 -a queue_depth=16
Method error (/usr/lib/methods/chgdisk):
        0514-062 Cannot perform the requested function because the
        specified device is busy.
# varyoffvg testvg
# chdev -l hdisk1 -a queue_depth=16
hdisk1 changed
# varyonvg testvg
# mount -o cio /myfs
Notes:
When you want to change the queue_depth of a device, the disk must not be in use. You
can use the chdev -P flag and have the new value be in effect on the next boot. If you want
to change it without rebooting, you must close any open logical volumes and varyoff the
volume group.
iostat -D
# iostat -D hdisk12 3

System configuration: lcpu=4 drives=13 paths=27 vdisks=7

hdisk12    xfer:  %tm_act    bps      tps     bread    bwrtn
                   100.0     2.0M    497.3    1.0M     1.0M
           read:   rps    avgserv  minserv  maxserv  timeouts  fails
                  247.7     4.7      0.4     38.6       0        0
           write:  wps    avgserv  minserv  maxserv  timeouts  fails
                  249.7     7.4      1.0     48.8       0        0
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz  sqfull
                    14.1      0.0     42.2      6.0      3.0    497.3
Notes:
The queue statistics are:
avgtime: Average time (ms) spent by a transfer request in the wait queue
mintime: Minimum time (ms) spent by a transfer request in the wait queue
maxtime: Maximum time (ms) spent by a transfer request in the wait queue
avgwqsz: Average wait queue size
avgsqsz: Average service queue size
sqfull: Number of times the service queue becomes full (that is, the disk is not
accepting any more service requests) per second
If the queue statistics are all zero, the number of I/O operations is not being limited by
queue_depth, so there is no need to adjust queue_depth. If the queue statistics are not
zero, then increase queue_depth, unless the average read and write service times (read
avgserv and write avgserv) are higher in intervals where the avgsqsz is higher. In that
case, disk subsystem performance is likely degrading because too many I/O operations are
being driven to the LUN simultaneously, and reducing queue_depth will probably improve
performance.
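The decision rule above can be sketched as a tiny script. The sample values are taken from the iostat -D output on the previous visual, and the logic is simplified (a real decision would also compare avgserv across intervals):

```shell
# Simplified decision aid for the rule in the notes above.
# avgtime and sqfull are sample values from the iostat -D report.
avgtime=14.1   # avg ms a request spends in the wait queue
sqfull=497.3   # service-queue-full events per second

awk -v a="$avgtime" -v s="$sqfull" 'BEGIN {
    if (a == 0 && s == 0)
        print "queue_depth is not limiting I/O"
    else
        print "requests are queueing: consider raising queue_depth"
}'
```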
Notes:
Increasing the queue depth on a client virtual device reduces the number of supported
open devices on that virtual adapter, and the number of I/O requests that devices can have
active on the VIO server.
The VSCSI queue depth generally should not be any larger than the queue depth on the
physical LUN. A larger value wastes resources without additional performance.
If the virtual target device is a logical volume, the queue depth on all disks included in that
logical volume must be considered. If the logical volume is being mirrored, the virtual SCSI
client queue depth should not be larger than the smallest queue depth of any physical
device being used in a mirror. When mirroring, throughput is effectively throttled to the
device with the smallest queue depth.
If a volume group on the client spans virtual disks, keep the same queue depth on all the
virtual disks in that volume group, especially when using mirroring.
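For the mirrored case described above, the cap on the client queue depth is just the minimum across the backing disks. A throwaway sketch with made-up queue_depth values:

```shell
# Hypothetical queue_depth values for three physical disks backing a mirror.
# The virtual SCSI client queue depth should not exceed the smallest of them.
printf '%s\n' 16 8 32 |
awk 'NR == 1 || $1 < min { min = $1 } END { print "client queue_depth <= " min }'
```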
Adapter may have a tunable Direct Memory Access (DMA) memory pool
Parameter name is usually lg_term_dma or lg_dma_mem
# lsattr -El fcs0
bus_intr_lvl   273        Bus interrupt level                                False
bus_io_addr    0xff800    Bus I/O address                                    False
bus_mem_addr   0xfff7e000 Bus memory address                                 False
init_link      al         INIT Link flags                                    True
intr_priority  3          Interrupt priority                                 False
lg_term_dma    0x800000   Long term DMA                                      True
max_xfer_size  0x100000   Maximum Transfer Size                              True
num_cmd_elems  200        Maximum number of COMMANDS to queue to the adapter True
pref_alpa      0x1        Preferred AL_PA                                    True
sw_fc_class    2          FC Class for Fabric                                True
tme            no         Target Mode Enabled                                True
Notes:
The command queue size is the number of I/O commands that can be queued at the
adapter before the upper layer stops sending them. The attribute name is usually
num_cmd_elems. Generally, the more disk devices attached to the adapter, the larger the
queue size should be.
The DMA memory pool is where the adapter will allocate space from kernel memory when
the adapter is configured. The parameter name is usually lg_dma_mem or lg_term_dma.
The adapter may use a DMA buffer from this pool to send the I/O to the disk. If the pool is
exhausted, the I/O is delayed until a previously issued I/O has completed.
The maximum transfer size is specified by the max_xfer_size attribute. It also controls a
DMA memory area that is used to hold data for transfer. Changing it to other allowable
values can increase the adapter's bandwidth.
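Both attributes are changeable with chdev. A hedged sketch (the values are illustrative, not tuning recommendations; a busy adapter needs the -P deferred form plus a reboot):

```
# Illustrative only -- values are examples, not tuning advice.
chdev -l fcs0 -a num_cmd_elems=400 -P       # deeper adapter command queue
chdev -l fcs0 -a max_xfer_size=0x200000 -P  # larger transfer size / DMA area
# -P defers both changes until the next boot.
```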
# fcstat fcs0
...
FC SCSI Adapter Driver Information
  No DMA Resource Count: 4490        <- Increase max_xfer_size
  No Adapter Elements Count: 105688  <- Increase num_cmd_elems
  No Command Resource Count: 133     <- Increase num_cmd_elems
...
Notes:
The FC SCSI adapter statistics include:
No DMA Resource Count
Displays the number of times DMA resources were not available.
No Adapter Elements Count
Displays the number of times there were no adapter elements available.
No Command Resource Count
Displays the number of times there were no command resources available.
With SDDPCM, use the pcmpath command. An I/O Maximum value of 200 with
num_cmd_elems=200 means the queue has filled.
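Because these counters only ever grow, a single snapshot cannot distinguish an old problem from a current one; comparing two samples can. A portable sketch with invented counter values (on AIX you would grep them out of two fcstat fcs0 runs):

```shell
# Two hypothetical samples of "No Command Resource Count", taken a few
# minutes apart.
before=133
after=176

awk -v b="$before" -v a="$after" 'BEGIN {
    if (a > b)
        print "counter still rising: increase num_cmd_elems"
    else
        print "counter static: shortage is historical, not current"
}'
```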
Basic syntax
filemon -O report-types -o output-file
Runs in the background; stops with the trcstop command
Uses the trace facility
Notes:
If an application is believed to be disk-bound, the filemon utility is useful to find out where
and why.
The filemon command uses the trace facility to obtain a detailed picture of I/O activity
during a time interval on the various layers of file system utilization, including the logical file
system, virtual memory segments, LVM, and physical disk layers. Data can be collected on
all the layers, or some of the layers. The default is to collect data on the virtual memory
segments, LVM, and physical disk layers.
By default, filemon runs in the background while other applications are running and being
monitored. When the trcstop command is issued, filemon stops and generates its report.
The report begins with a summary of the I/O activity for each of the levels (the Most Active
sections) and ends with detailed I/O activity for each level (Detailed sections). Each
section is ordered from most active to least active.
When running PerfPMR, the filemon data is in the filemon.sum file.
# dd if=/lv1fs/bigfile1 bs=1M of=/dev/null
# trcstop
# cat fmon.out

Wed Feb 11 23:08:09 2009
System: AIX 6.1 Node: leguin221 Machine: 00066BA2D900

Cpu utilization:  88.9%
Cpu allocation:   0.8%

Most Active Files
-----------------------------------------------------------------------
  #MBs  #opns  #rds  #wrs  file      volume:inode
-----------------------------------------------------------------------
 101.0      1   101     0  bigfile1  /dev/lv1:21
 100.0      1     0   100  null
...
Notes:
The visual on this page shows the logical file output (lf) from the filemon report. The
logical file I/O includes reads, writes, opens, and seeks, which may or may not result in
actual physical I/O, depending on whether the files are already buffered in memory.
Statistics are kept by file.
Output is ordered by #MBs read and/or written to a file.
By default, the logical file reports are limited to the top 20. If the verbose flag (-v) is added,
activity for all files is reported. The -u flag can be used to generate reports on files
opened prior to the start of the trace daemon.
Look for the most active files to see usage patterns. If they are dynamic files, they may
need to be backed up and restored. The Most Active Files sections shows the bigfile1
file (read by dd command) as most active file with one open and 101 reads.
The number of writes (#wrs) is 1 less than the number of reads (#rds), because end-of-file
has been reached.
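As a sanity check of the report above, the #MBs column should equal the read count times the read size (101 reads of 1 MB each):

```shell
# 101 reads x 1048576 bytes, expressed in MB, should match the
# 101.0 shown in the #MBs column for bigfile1.
awk 'BEGIN { printf "%.1f MB\n", 101 * 1048576 / (1024 * 1024) }'
```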
Most Active Physical Volumes
------------------------------------------------------------------------
  util  #rblk   #wblk    KB/s  volume        description
------------------------------------------------------------------------
  0.03  205016      0  3306.1  /dev/hdisk2   N/A
  0.00       0     40     0.6  /dev/hdisk0   N/A
Notes:
The filemon command monitors I/O operations on logical volumes. I/O statistics are kept
on a per-logical-volume basis. The logical volume with the highest utilization is at the top,
and the others are listed in descending order.
The filemon command monitors I/O operations on physical volumes. At this level, physical
resource utilizations are obtained. I/O statistics are kept on a per-physical-volume basis.
The disks are presented in descending order of utilization. The disk with the highest
utilization is shown first.
FILE: /lv1fs/bigfile1  volume: /dev/lv1 (/lv1fs)  inode: 21
opens:                  1
total bytes xfrd:       105906176
reads:                  101     (0 errs)
  read sizes (bytes):   avg 1048576.0 min 1048576 max 1048576 sdev 0.0
  read times (msec):    avg 10.154 min 0.002 max 17.055 sdev 2.217

FILE: /dev/null
opens:                  1
total bytes xfrd:       104857600
writes:                 100     (0 errs)
  write sizes (bytes):  avg 1048576.0 min 1048576 max 1048576 sdev 0.0
  write times (msec):   avg 0.003 min 0.003 max 0.005 sdev 0.000
Notes:
The Detailed File Stats report is based on the activity on the interface between the
application and the file system. As such, the number of calls and the size of the reads or
writes reflect the application's calls. The read sizes and write sizes give you an idea of
how efficiently your application is reading and/or writing information.
In this example, the report shows the average read size is approximately 1 MB, which
matches the block size specified on the dd command on the previous visual.
The size used by an application has performance implications. For sequential reading of
a large file, a larger read size results in fewer read requests and thus lower CPU
overhead to read the entire file. When specifying an application's read or write block size,
using values that are a multiple of the 4 KB page size is recommended.
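The effect of read size on call count is easy to quantify. For a hypothetical 100 MB sequential read:

```shell
# Number of read calls needed to sequentially read a 100 MB file at two
# different block sizes (the file size is hypothetical).
filesize=$((100 * 1024 * 1024))
for bs in 4096 1048576; do
    awk -v f="$filesize" -v b="$bs" \
        'BEGIN { printf "bs=%d -> %d read calls\n", b, f / b }'
done
```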
VOLUME: /dev/hdisk2  description: N/A
reads:                  3242    (0 errs)
  read sizes (blks):    avg 63.2 min 8 max 64 sdev 6.0
  read times (msec):    avg 0.548 min 0.133 max 5.444 sdev 0.507
  read sequences:       65
  read seq. lengths:    avg 3154.1 min 8 max 8192 sdev 2972.6
seeks:                  65      (2.0%)
  seek dist (blks):     init 28841008,
                        avg 166391.1 min 512 max 581832 sdev 119184.3
  seek dist (%tot blks):init 20.11593,
                        avg 0.11605 min 0.00036 max 0.40581 sdev 0.08313
time to next req(msec): avg 7.208 min 0.058 max 22246.806 sdev 391.436
throughput:             3306.1 KB/sec
utilization:            0.03
Notes:
As contrasted with the Detailed File Stats report, the Detailed Physical Volume Stats
report shows the activity at the disk device driver. This report shows the actual number and
size of the reads and writes to the disk device driver. The file system uses VMM caching.
The default unit of work in VMM is the 4 KB page. But, rather than writing or reading one
page at a time, the file system tends to group work together to read or write multiple pages
at a time. This grouping of work can be seen in the physical volume read and write sizes
provided in this report.
Note that the sizes are expressed in blocks, where a block is the traditional UNIX block size
of 512 bytes. To translate the sizes to KBs, divide the number by 2.
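Applied to the report above, the average read size of 63.2 blocks converts as:

```shell
# 512-byte blocks to KB: divide by 2.
awk 'BEGIN { printf "%.1f KB\n", 63.2 / 2 }'
```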
System configuration: lcpu=4 drives=13 ent=0.30 paths=27 vdisks=7 tapes=0

tty:   tin    tout   avg-cpu:  % user  % sys  % idle  % iowait  physc  % entc
       0.0     4.5               2.8    2.7    90.9       3.6     0.0     5.9

Adapter:        Kbps     tps    Kb_read    Kb_wrtn
fcs0            24.5     2.9   30177888   12054768

Disks:      % tm_act    Kbps    tps    Kb_read    Kb_wrtn
hdisk5          0.0      0.0    0.0          0          0
hdisk1          0.0      5.7    1.4    9562224     195492
...

Adapter:        Kbps     tps    Kb_read    Kb_wrtn
fcs1            10.3     3.3   15298408    2454535

Disks:      % tm_act    Kbps    tps    Kb_read    Kb_wrtn
hdisk5          0.0      1.4    2.8          0    2408071
hdisk1          0.0      0.0    0.0          0          0
...

Vadapter:       Kbps     tps   bkread   bkwrtn   partition-id
vscsi0         318.4    53.7     35.8     17.9              1

Disks:      % tm_act    Kbps    tps     Kb_read     Kb_wrtn
hdisk0          3.9    274.4   51.9   256939768   215286684
hdisk12         0.0      0.5    0.1      834861        4008
...
...
Notes:
The -a option to iostat combines the disk statistics with the adapter to which the disks are
connected. The adapter throughput is simply the sum of the throughput of each of its
connected devices. With the -a option, the adapter is listed first, followed by its
devices, then the next adapter, followed by its devices, and so on. The
adapter throughput values can be used to determine whether any particular adapter is
approaching its maximum bandwidth, or to see whether the I/O is balanced across adapters.
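The relationship is simple addition. With made-up per-disk throughput figures (the visual elides most of fcs0's disks, so these numbers are illustrative, chosen to sum to the fcs0 adapter line):

```shell
# Per-disk Kbps figures (invented) summed into the adapter-level figure.
printf '%s\n' 5.7 0.0 18.8 |
awk '{ total += $1 } END { printf "adapter Kbps = %.1f\n", total }'
```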
System configuration: lcpu=4 drives=13 ent=0.30 paths=27 vdisks=7

tty:   tin    tout   avg-cpu:  % user  % sys  % idle  % iowait  physc  % entc
       0.0     4.5               2.8    2.7    90.9       3.6     0.0     5.9

System: chandler231.beaverton.ibm.com
                 Kbps     tps    Kb_read    Kb_wrtn
Physical        309.9    58.3  303382746  229980398

Disks:      % tm_act    Kbps    tps    Kb_read    Kb_wrtn
hdisk0          3.9    274.4   51.9  257015587  215314441
hdisk5          0.0      1.4    2.8          0    2408071
hdisk1          0.0      5.7    1.4    9562224     195512
hdisk7          0.0      8.3    0.9   10940729    3377556
hdisk2          0.0      0.0    0.0          0          0
hdisk11         0.0      0.0    0.0          0          0
hdisk10         0.0      0.0    0.0          0          0
hdisk8          0.0      0.1    0.0      56002     152538
hdisk9          0.0      0.0    0.0          0          0
hdisk3          0.0      0.0    0.0          0          0
hdisk4          0.0      8.9    0.5   15298408      46484
hdisk6          0.0     10.5    0.7    9674935    8481768
hdisk12         0.0      0.5    0.1     834861       4028
Notes:
The -s option to iostat shows the system throughput, which is the sum of all the
adapters' throughputs.
real    0m6.92s
user    0m0.01s
sys     0m0.36s

real    0m16.52s
user    0m0.01s
sys     0m0.61s
Figure 5-22. Testing Sequential Throughput with time and dd. QV5721.0
Notes:
The time command prints the elapsed (real) time during the execution of a command, the
time spent in the system (sys), and the user execution time (user), all in seconds.
The dd command reads the if parameter, does the specified conversions, then copies the
converted data to the of parameter. The input and output block size can be specified to
take advantage of raw physical I/O.
You can calculate the throughput by dividing the amount of data by the real time.
USE CAUTION when writing to an entire disk. It will destroy anything on the disk and may
make it unusable.
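For example, if the first run above had read a hypothetical 1 GiB file in its 6.92 s of real time, the throughput would work out as:

```shell
# Throughput = bytes transferred / real time (the file size is hypothetical).
bytes=$((1024 * 1024 * 1024))
awk -v b="$bytes" -v t=6.92 'BEGIN { printf "%.1f MB/s\n", b / t / 1048576 }'
```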
Notes:
The ndisk tool (part of the nstress package available on the internet at
http://www.ibm.com/developerworks/wikis/display/WikiPtype/nstress) can be used to test
the throughput and stress the disk subsystem to see what it can handle.
Notes:
This example tests the read throughput for random I/O. The flags used are:
-f <file> use <file> for disk I/O (can be a file or raw device)
-R Random disk I/O test (file or raw device)
-r <read%> Read percent min=0,max=100
-b <size> Block size, use with K, M or G (default 4KB)
-t <secs> Timed duration of the test in seconds (default 5)
-s <size> File size, use with K, M or G (mandatory for raw device)
-M <num> Multiple processes used to generate I/O
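Putting the flags together, a random-read run might look like this (the file name, size, duration, and process count are hypothetical; check the usage output of your ndisk version before relying on exact flag spellings):

```
# Hypothetical: 100% random reads, 8 KB blocks, 4 processes, 60 seconds.
ndisk -f /testfs/ndisk.tmp -R -r 100 -b 8K -s 1G -M 4 -t 60
```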
Notes:
Be careful increasing the queue_depth values. You can overload the disk subsystem or
cause device configuration problems at boot. An increase in queue depths allows more
I/Os to be sent to the disk subsystem. This will probably cause I/O service times to
increase, but throughput will also increase. If I/O service times start approaching the disk
timeout value, then you're submitting more I/Os than the disk subsystem can handle. You
will see I/O timeouts and errors in the error log indicating problems completing I/Os.
When testing for the appropriate value for the queue depths, it's best to have the actual
application(s) running. When that is not possible, use a tool like ndisk in the nstress
package.
Read and write caches affect your I/O service times and testing results. If the read cache
already has the data from an earlier test run, the I/O service times will be faster and will
affect repeatability of the results. Write cache helps performance until, and if, the write
caches fill up at which time performance goes down, so longer running tests with high write
rates can show a drop in performance over time.
Checkpoint
1. True/False: I/Os are queued at several layers in the I/O
stack.
2. True/False: The queue_depth can be changed while the
volume group is varied on.
3. True/False: If the queue statistics in iostat -D are all zero,
the number of I/O operations is not being limited by
queue_depth, so there is no need to adjust queue_depth.
4. True/False: When a physical disk is used as backing storage
on a VIOS, make the virtual SCSI queue_depth equal to
the physical volume queue_depth.
5. True/False: The parameter lg_term_dma specifies the
queue size for the adapter.
Notes:
Unit Summary
I/Os are queued to improve throughput.
I/Os are queued at several layers in the I/O stack.
The disk drive queue depth is the maximum number of requests the
disk can hold in its queue.
The iostat -D command shows an extended disk utilization report,
including disk queue statistics.
Adjusting queue depth may improve disk performance, but it is
dependent on the workload.
The fcstat command displays statistics gathered by a specified
Fibre Channel device, and can show when num_cmd_elems and/or
max_xfer_size need to be increased.
Be careful increasing the queue_depth values. You can overload
the disk subsystem or cause device configuration problems at boot.
Notes:
References
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com
.ibm.aix.doc/doc/base/aixinformation.htm
Welcome to the AIX 7.1 Information Center
SG24-6050 Practical Guide for SAN with pSeries Redbook
SC23-6730 AIX Operating System and Device Management
Unit Objectives
After completing this unit, you should be able to:
Describe the FC problem determination process
Identify the FC path selection algorithms
Use the iostat -m command to display path priorities
List the attributes that are used for FC health checking
Define the FC reservation policies that can be used
Explain the purpose of fast I/O failure and dynamic tracking
Notes:
Notes:
Define what aspect or portion of the FC SAN is not working correctly. What action, function,
or application does not work as intended?
If it is a new FC SAN installation, then configuration issues are typically the cause of the
problem. Otherwise, some form of connectivity issue is likely.
Problems in an established FC SAN are typically the result of a change to one or more
devices, or the addition or removal of a device.
Determine the scope of the problem. Is the problem observed on other servers? Can the
server connect with other storage devices? Are there any common devices within the
scope of the problem?
Constantly occurring problems are much easier to isolate. Intermittent issues typically
involve more sophisticated troubleshooting techniques.
Actions that improve a faulty situation are additional clues about the source of the problem.
A complete description of these actions can be a valuable resource of information for the
problem determination process.
Notes:
Effective problem determination starts with a good understanding of the SAN and the
components involved.
SAN diagrams should document both the logical and the physical connections across the
entire SAN.
Many SAN devices, particularly SAN fabric switches, have the means to download, and
upload, configurations to a server over a LAN connection.
Many problems are introduced and/or resolved with different levels of code and PTFs on
specific devices.
Performance-related issues need to have a reference to establish the degree of impact of
the problem.
With data being collected from a multitude of sources, it is important to be able to correlate
the times of the different pieces of information.
User manuals (installation, configuration, and service guides) for each type of device within
the SAN provide points of reference for commands.
Notes:
Usually, problems in the SAN environment are observed by end users.
A number of SAN products provide indications that help quickly pinpoint a hardware
problem.
Typically, console commands, along with a product's GUI management tool, are the
most useful tools for troubleshooting.
Many SAN devices maintain an internal error log that can be viewed as a powerful
resource during problem determination.
Some SAN devices are capable of generating internal traces of events and certain types of
Fibre Channel traffic.
Fibre Channel protocol analyzers and trace tools are very expensive. Fortunately, many
problems can be resolved without these tools. For performance issues, intermittent
problems, and connectivity faults, these are the preferred tools.
Many FC SAN management applications communicate over a LAN connection and, thus,
have no impact on Fibre Channel traffic.
Problem Analysis
Define the problem to be resolved
Specify the issue
Determine potential causes (distinctions and/or changes)
Test each cause against the specifications
Find the most probable cause
Verify, observe, experiment, fix and monitor
Notes:
# lspath -l hdisk4 -F'status path_id parent connection'
Enabled 0 fscsi0 500507630e01fc30,4011405200000000
Enabled 1 fscsi0 500507630e03fc30,4011405200000000
Enabled 2 fscsi1 500507630e81fc30,4011405200000000
Enabled 3 fscsi1 500507630e83fc30,4011405200000000

# lspath -AHE -l hdisk4 -p fscsi0 -i 0
attribute  value               description   user_settable
scsi_id    0x40a00             SCSI ID       False
node_name  0x500507630efffc30  FC Node Name  False
priority   1                   Priority      True
Notes:
The path selection policies are:
fail_over - all I/O operations for the device are sent to the same (preferred) path until
the path fails because of I/O errors. Then an alternate path is chosen for subsequent
I/O operations.
round_robin - the path to use for each I/O operation is chosen at random from those
paths that were not used for the last I/O operation.
load_balance - the path to use for an I/O operation is chosen by estimating the load on
the adapter to which each path is attached. The load is a function of the number of I/O
operations currently in process. If multiple paths have the same load, a path is chosen
at random from those paths. Load-balancing mode also incorporates failover protection.
Dynamic load balancing between multiple paths is possible when there is more than one
path from a host server to the DS. This may eliminate I/O bottlenecks that occur when
many I/O operations are directed to common devices via the same I/O path, thus improving
the I/O performance.
Path Priority
The default path priority is 1 (for each path)

# lspath -AHE -l hdisk4 -p fscsi0 -i 0
attribute  value               description   user_settable
scsi_id    0x40a00             SCSI ID       False
node_name  0x500507630efffc30  FC Node Name  False
priority   1                   Priority      True

# chpath -l hdisk4 -i 3 -a priority=2
path Changed
Notes:
Path priority modifies how the path selection algorithm treats the list of paths. When
the algorithm attribute value is fail_over, the paths are kept in a list. The sequence in this
list determines which path is selected first and, if a path fails, which path is selected next.
The sequence is determined by the value of the path priority attribute. A priority of 1 is
the highest priority. Multiple paths can have the same priority value, but if all paths have the
same value, selection is based on when each path was configured.
When the algorithm attribute value is round_robin, the sequence is determined by percent
of I/O. The path priority value determines the percentage of the I/O that should be
processed down each path. I/O is distributed across the enabled paths. A path is selected
until it meets its required percentage. The algorithm then marks that path failed or disabled
to keep the distribution of I/O requests based on the path priority value.
# iostat -dm hdisk4 3 1

System configuration: lcpu=4 drives=13 paths=27 vdisks=7

Disks:      % tm_act    Kbps      tps   Kb_read   Kb_wrtn
hdisk4         80.0   3385.4   6771.1     10080         0

Paths:      % tm_act    Kbps      tps   Kb_read   Kb_wrtn
Path3           0.0      0.0      0.0         0         0
Path2          77.0   3385.4   6771.1     10080         0
Path1           0.0      0.0      0.0         0         0
Path0           0.0      0.0      0.0         0         0
Notes:
Health Checking
AIX PCM has a health check capability that can be used to:
Check the paths and determine which ones can currently be used to send I/O
Enable a path that was previously marked failed because of a temporary path
fault
Check currently unused paths that would be used if a failover occurred
hcheck_mode attribute
hcheck_interval attribute
Defines how often the health check is performed on the paths for a device.
# chdev -l hdisk0 -a hcheck_interval=60 -P
Notes:
Health checking (hcheck_mode) supports the following modes of operations:
nonactive: When this value is selected, the healthcheck command will be sent to
paths that have no active I/O, including paths that are opened or in failed state, which is
the default setting for MPIO devices
enabled: When this value is selected, the healthcheck command will be sent to paths
that are opened with a normal path mode
failed: When this value is selected, the healthcheck command is sent to paths that
are in failed state
The hcheck_interval attribute defines how often the health check is performed on the
paths for a device. The attribute supports a range from 0 to 3600 seconds. When a value of
0 is selected, health checking is disabled.
Notes:
After a path failure, the path shows up in the failed mode even after the path is up again.
Unless the hcheck_mode and hcheck_interval attributes are set, the state will continue to
show Failed even after the disk has recovered. To have the state updated automatically,
type chdev -l hdiskx -a hcheck_interval=60.
Disk Reservation
Purpose: Prevent data corruption in a multihost environment
Occurs on SCSI command level with all types of drives
varyonvg (without any options) places SCSI reserve on disks/LUNs in volume
group
varyoffvg releases the reserve
Notes:
Reservation policies:
no_reserve does not apply a reservation methodology for the device. The device might
be accessed by other initiators, and these initiators might be on other host systems.
single_path applies a SCSI-2 reserve methodology for the device, which means
the device can be accessed only by the initiator that issued the reserve.
PR_exclusive applies a SCSI-3 persistent-reserve, exclusive-host methodology
when the device is opened.
PR_shared applies a SCSI-3 persistent-reserve, shared-host methodology when
the device is opened.
Notes:
Refer to the storage vendor's documentation on how to clear or break a reservation.
Another reference is the IBM System Storage Multipath Subsystem Device Driver User's
Guide (SC30-4131).
# lsattr -El fscsi0
attach        switch     How this adapter is CONNECTED          False
dyntrk        yes        Dynamic Tracking of FC Devices         True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True
scsi_id       0x30100    Adapter SCSI ID                        False
sw_fc_class   3          FC Class for Fabric                    True
Notes:
To change:
# chdev -l fscsi0 -a fc_err_recov=fast_fail
Notes:
AIX supports fast I/O failure for fibre channel devices after link events in a switched
environment. Fast failure of I/O is controlled by the fscsi device attribute, fc_err_recov.
fast_fail - If the fibre channel adapter driver detects that a link was lost between the
storage device and the switch, it waits about 15 seconds to allow the fabric to stabilize. At
that point, if the FC adapter driver detects that the device is not on the fabric, it begins
failing all I/Os at the adapter driver. Any new I/O or future retries of the failed I/Os are failed
immediately by the adapter until the adapter driver detects that the device has rejoined the
fabric. Fast I/O failure can be useful in multipath configurations. It can decrease the I/O fail
times due to the loss of a link between the storage device and the switch, and can allow
faster failover to alternate paths.
delayed_fail - I/O failure proceeds as normal; retries are not immediately failed, and
failover takes longer than it does if fast_fail is specified.
Notes:
You can dynamically track fibre channel devices, which allows the dynamic movement of a
fibre channel path between the fabric switch and the storage subsystem by suspending I/O
for 15 seconds while the move occurs.
If dynamic tracking of FC devices is enabled, the FC adapter driver detects when the Fibre
Channel node port ID of a device changes. The FC adapter driver then reroutes traffic
destined for that device to the new Worldwide Port Name (WWPN) while the devices are
still online.
If dynamic tracking is not enabled, you must take the devices offline before you move a
cable from one port to another. Otherwise, failover occurs.
You can use dynamic tracking only in a SAN environment. You cannot use it in a
direct-attach environment.
FCP_ERR10 Error
The FCP_ERR10 error indicates that dynamic tracking or fast fail is enabled, but
the adapter firmware or the SAN configuration does not support these features
LABEL:          FCP_ERR10
IDENTIFIER:     5A7598C3

Date/Time:       Fri May  6 22:14:22 CDT 2011
Sequence Number: 1644
Machine Id:      0006D0A2D900
Node Id:         usap03
Class:           O
Type:            INFO
WPAR:            Global

Resource Name:   fscsi7

Description
Additional FC SCSI Protocol Driver Information

Recommended Actions
WAIT FOR ADDITIONAL MESSAGE BEFORE TAKING ACTION

Detail Data
SENSE DATA
0000 0010 0000 00D9 0000 0000 0302 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0001 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
...
Notes:
FCP_ERR10 indicates that dynamic tracking or fast fail is enabled, but the adapter
firmware or the SAN configuration does not support these features.
The current settings on the system:
# lsattr -El fscsi7
attach al How this adapter is CONNECTED False
dyntrk yes Dynamic Tracking of FC Devices True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
The solution is to disable dynamic tracking and fast fail:
# chdev -l fscsi7 -a fc_err_recov=delayed_fail -a dyntrk=no -P
The -P flag defers the change to the device's customized database; the new values take effect after a system reboot.
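The risky attribute combination can also be detected from command output. The sketch below parses a captured copy of the `lsattr -El fscsi7` listing shown above (on a live AIX system you would pipe lsattr output directly; the here-text is only so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch: flag the dyntrk/fast_fail combination that produces FCP_ERR10
# when the adapter firmware or SAN configuration does not support it.
lsattr_out='attach       al        How this adapter is CONNECTED         False
dyntrk       yes       Dynamic Tracking of FC Devices        True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True'

# Extract the attribute values (column 2) by attribute name (column 1).
dyntrk=$(printf '%s\n' "$lsattr_out" | awk '$1 == "dyntrk" {print $2}')
recov=$(printf '%s\n' "$lsattr_out" | awk '$1 == "fc_err_recov" {print $2}')

if [ "$dyntrk" = "yes" ] || [ "$recov" = "fast_fail" ]; then
    echo "FCP_ERR10 risk: dyntrk=$dyntrk fc_err_recov=$recov"
    # Remediation on a live system (then reboot):
    # chdev -l fscsi7 -a fc_err_recov=delayed_fail -a dyntrk=no -P
fi
```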
Checkpoint
1. What disk attribute determines which path is chosen?
_______________________________________
2. What is the default MPIO path priority value? ______
3. Which parameter needs to be changed for the MPIO
paths to be automatically updated when a path shuts
down and comes back? __________________________
4. What is the purpose of disk reservation?
____________________________________________
5. True/False: Fast fail and dynamic recovery work only in a
switch-attached Fibre Channel environment.
Notes:
Unit Summary
Troubleshooting starts with an accurate description of the problem.
The algorithms that define which path is chosen may include
fail_over, round_robin, and load_balance. Which algorithms are
available depends on the type of disk.
Path priority modifies the behavior of the algorithm methodology on
the list of paths.
The iostat -m flag displays path information for disks.
The health check capability checks the paths to determine which
ones can be used to send I/O.
The reserve_policy can be used to allow one or any number of
initiators to access the disk.
The fast failure attribute fc_err_recov determines whether or not
to wait a period of time before failing I/Os once a link is lost.
FC devices can be dynamically tracked, allowing the dynamic
movement of an FC path between the fabric switch and the storage
subsystem.
UNIX Software Service Enablement Copyright IBM Corporation 2011
Notes:
Checkpoint Solutions
1. IBM offers a network storage solution via the N Series
product family.
2. The storage servers in our lab support the following top
speeds:
DS4300 2 Gb
DS6800 2 Gb
n3400 4 Gb
3. To find a LUN value, use the lscfg command.
4. A quick way to show device information is via the
lsdev command.
5. The lsattr command can be used to identify device
attributes.
Unit 2
Checkpoint Solutions
1. What two ways can AIX be installed on a SAN disk?
Installing directly to the SAN disk
Copying an existing rootvg to a SAN disk
Unit 3
Checkpoint Solutions
1. The VIO server command cfgdev will configure attached
devices and make them available.
2. To list all configured disks under the VIO server, use the
command lsdev -type disk.
3. True/False: To use NPIV, you must use an 8 Gb Fibre
Channel HBA assigned to the VIO server.
4. When you use NPIV, the virtual Fibre Channel HBA is
provided a unique WWPN from the Hypervisor.
5. To view available NPIV ports, use the lsnports
command.
Unit 4
Checkpoint Solutions
1. True/False: You need to install an expansion card for Fibre
Channel connectivity to a Blade Server.
2. What two I/O modules options are available to provide
Fibre Channel access to a Blade Server?
Switch
Pass-thru
3. Under IVM, what LPAR owns the physical adapter when
creating virtual Fibre Channel devices? 1
4. True/False: You can see virtual disks (LUNs) that are
assigned to a LPAR via the IVM GUI.
False. You can only see the virtual fcs device.
Unit 5
Checkpoint Solutions
1. True/False: I/Os are queued at several layers in the I/O
stack.
2. True/False: The queue_depth can be changed while the
volume group is varied on.
3. True/False: If the queue statistics in iostat -D are all zero,
the number of I/O operations is not being limited by
queue_depth, so there is no need to adjust queue_depth.
4. True/False: When a physical disk is used as backing storage
on a VIOS, make the virtual SCSI queue_depth equal to
the physical volume queue_depth.
5. True/False: The parameter lg_term_dma specifies the
queue size for the adapter.
The adapter queue size parameter is typically called num_cmd_elems.
Unit 6
Checkpoint Solutions
1. What disk attribute determines which path is chosen?
algorithm
2. What is the default MPIO path priority value? 1
3. Which parameter needs to be changed for the MPIO
paths to be automatically updated when a path shuts
down and comes back?
hcheck_interval needs to be non-zero
4. What is the purpose of disk reservation?
To prevent access from another host to the same disk.
5. True/False: Fast fail and dynamic recovery work only in a
switch-attached Fibre Channel environment.
True. These features cannot be used in a direct-attach environment.