
symantec

VERITAS Storage Foundation
5.0 for UNIX: Fundamentals

100002353A

COURSE DEVELOPERS
Gail Adey
Bilge Gerrits

TECHNICAL CONTRIBUTORS AND REVIEWERS
Jade Arrington
Margy Cassidy
Roy Freeman
Joe Gallagher
Bruce Garner
Tomer Gurantz
Bill Havey
Gene Henriksen
Gerald Jackson
Raymond Karns
Bill Lehman
Bob Lucas
Durivanh Manikhung
Christian Rahanus
Dan Rogers
Kleber Saldanha
Albrecht Scriba
Michel Simoni
Ananda Sirisena
Pete Tuemmes

Copyright 2006 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

THIS PUBLICATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION. THE INFORMATION CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.

No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

Symantec Corporation
20330 Stevens Creek Blvd.
Cupertino, CA 95014

Table of Contents

Course Introduction
  What Is Storage Virtualization? ............................... Intro-2
  Introducing VERITAS Storage Foundation ........................ Intro-6
  VERITAS Storage Foundation Curriculum ......................... Intro-11

Lesson 1: Virtual Objects
  Physical Data Storage ......................................... 1-3
  Virtual Data Storage .......................................... 1-10
  Volume Manager Storage Objects ................................ 1-13
  Volume Manager RAID Levels .................................... 1-15

Lesson 2: Installation and Interfaces
  Installation Prerequisites .................................... 2-3
  Adding License Keys ........................................... 2-5
  VERITAS Software Packages ..................................... 2-7
  Installing Storage Foundation ................................. 2-10
  Storage Foundation User Interfaces ............................ 2-16
  Managing the VEA Software ..................................... 2-21

Lesson 3: Creating a Volume and File System
  Preparing Disks and Disk Groups for Volume Creation ........... 3-3
  Creating a Volume ............................................. 3-12
  Adding a File System to a Volume .............................. 3-18
  Displaying Volume Configuration Information ................... 3-21
  Displaying Disk and Disk Group Information .................... 3-24
  Removing Volumes, Disks, and Disk Groups ...................... 3-30

Lesson 4: Selecting Volume Layouts
  Comparing Volume Layouts ...................................... 4-3
  Creating Volumes with Various Layouts ......................... 4-9
  Creating a Layered Volume ..................................... 4-18
  Allocating Storage for Volumes ................................ 4-25

Lesson 5: Making Basic Configuration Changes
  Administering Mirrored Volumes ................................ 5-3
  Resizing a Volume ............................................. 5-10
  Moving Data Between Systems ................................... 5-16
  Renaming Disks and Disk Groups ................................ 5-21
  Managing Old Disk Group Versions .............................. 5-23

Lesson 6: Administering File Systems
  Comparing the Allocation Policies of VxFS and Traditional
  File Systems .................................................. 6-3
  Using VERITAS File System Commands ............................ 6-5
  Logging in VxFS ............................................... 6-9
  Controlling File System Fragmentation ......................... 6-15

Lesson 7: Resolving Hardware Problems
  How Does VxVM Interpret Failures in Hardware? ................. 7-3
  Recovering Disabled Disk Groups ............................... 7-8
  Resolving Disk Failures ....................................... 7-12
  Managing Hot Relocation at the Host Level ..................... 7-22

Appendix A: Lab Exercises
  Lab 1: Introducing the Lab Environment ........................ A-3
  Lab 2: Installation and Interfaces ............................ A-7
  Lab 3: Creating a Volume and File System ...................... A-15
  Lab 4: Selecting Volume Layouts ............................... A-21
  Lab 5: Making Basic Configuration Changes ..................... A-29
  Lab 6: Administering File Systems ............................. A-37
  Lab 7: Resolving Hardware Problems ............................ A-47

Appendix B: Lab Solutions
  Lab 1 Solutions: Introducing the Lab Environment .............. B-3
  Lab 2 Solutions: Installation and Interfaces .................. B-7
  Lab 3 Solutions: Creating a Volume and File System ............ B-21
  Lab 4 Solutions: Selecting Volume Layouts ..................... B-33
  Lab 5 Solutions: Making Basic Configuration Changes ........... B-47
  Lab 6 Solutions: Administering File Systems ................... B-67
  Lab 7 Solutions: Resolving Hardware Problems .................. B-85

Glossary

Index

Course Introduction


Storage Management Issues

[Slide: Three servers with isolated storage: a Human Resource Database at 10% full, an E-mail Server at 50% full, and a Customer Order Database at 90% full. Problem: the customer order database cannot access the unutilized storage. Common solution: add more storage.]

- Multiple-vendor hardware
- Explosive data growth
- Different application needs
- Management pressure to increase efficiency
- Multiple operating systems
- Rapid change
- Budgetary constraints

What Is Storage Virtualization?

Storage Management Issues

Storage management is becoming increasingly complex due to:
- Storage hardware from multiple vendors
- Unprecedented data growth
- Dissimilar applications with different storage resource needs
- Management pressure to increase efficiency
- Multiple operating systems
- Rapidly changing business climates
- Budgetary and cost-control constraints

To create a truly efficient environment, administrators must have the tools to skillfully manage large, complex, and heterogeneous environments. Storage virtualization helps businesses to simplify the complex IT storage environment and gain control of capital and operating costs by providing consistent and automated management of storage.

Intro-2

What Is Storage Virtualization?

[Slide: Virtualization: the logical representation of physical storage across the entire enterprise. Consumers see virtual storage; their application requirements are mapped onto physical storage resources.]

Application requirements from storage:
- Performance: throughput, responsiveness
- Availability: failure resistance, recovery time
- Capacity: growth potential

Physical aspects of storage: disk size, disk seek time, MTBF, number of disks per path, cache hit rate, path redundancy.

Defining Storage Virtualization

Storage virtualization is the process of taking multiple physical storage devices and combining them into logical (virtual) storage devices that are presented to the operating system, applications, and users. Storage virtualization builds a layer of abstraction above the physical storage so that data is not restricted to specific hardware devices, creating a flexible storage environment. Storage virtualization simplifies management of storage and potentially reduces cost through improved hardware utilization and consolidation.

With storage virtualization, the physical aspects of storage are masked to users. Administrators can concentrate less on physical aspects of storage and more on delivering access to necessary data.

Benefits of storage virtualization include:
- Greater IT productivity through the automation of manual tasks and simplified administration of heterogeneous environments
- Increased application return on investment through improved throughput and increased uptime
- Lower hardware costs through the optimized use of hardware resources

Intro-3

Storage Virtualization: Types

[Slide: Three architectures: storage-based (the array virtualizes its disks for multiple servers), host-based (a single server virtualizes storage from multiple arrays), and network-based (storage is virtualized in the network between servers and arrays). Most companies use a combination of these three types of storage virtualization to support their chosen architectures and application requirements.]

How Is Storage Virtualization Used in Your Environment?

The way in which you use storage virtualization, and the benefits derived from storage virtualization, depend on the nature of your IT infrastructure and your specific application requirements. Three main types of storage virtualization used today are:
- Storage-based
- Host-based
- Network-based

Most companies use a combination of these three types of storage virtualization solutions to support their chosen architecture and application needs.

The type of storage virtualization that you use depends on factors such as the:
- Heterogeneity of deployed enterprise storage arrays
- Need for applications to access data contained in multiple storage devices
- Importance of uptime when replacing or upgrading storage
- Need for multiple hosts to access data within a single storage device
- Value of the maturity of technology
- Investments in a SAN architecture
- Level of security required
- Level of scalability needed

Intro-4

Storage-Based Storage Virtualization

Storage-based storage virtualization refers to disks within an individual array that are presented virtually to multiple servers. Storage is virtualized by the array itself. For example, RAID arrays virtualize the individual disks (that are contained within the array) into logical LUNs, which are accessed by host operating systems using the same method of addressing as a directly-attached physical disk.

This type of storage virtualization is useful under these conditions:
- You need to have data in an array accessible to servers of different operating systems.
- All of a server's data needs are met by storage contained in the physical box.
- You are not concerned about disruption to data access when replacing or upgrading the storage.

The main limitation to this type of storage virtualization is that data cannot be shared between arrays, creating islands of storage that must be managed.

Host-Based Storage Virtualization

Host-based storage virtualization refers to disks within multiple arrays and from multiple vendors that are presented virtually to a single host server. For example, software-based solutions, such as VERITAS Storage Foundation, provide host-based storage virtualization. Using VERITAS Storage Foundation to administer host-based storage virtualization is the focus of this training.

Host-based storage virtualization is useful under these conditions:
- A server needs to access data stored in multiple storage devices.
- You need the flexibility to access data stored in arrays from different vendors.
- Additional servers do not need to access the data assigned to a particular host.
- Maturity of technology is a highly important factor to you in making IT decisions.

Note: By combining VERITAS Storage Foundation with clustering technologies, such as VERITAS Cluster Volume Manager, storage can be virtualized to multiple hosts of the same operating system.

Network-Based Storage Virtualization

Network-based storage virtualization refers to disks from multiple arrays and multiple vendors that are presented virtually to multiple servers. Network-based storage virtualization is useful under these conditions:
- You need to have data accessible across heterogeneous servers and storage devices.
- You require central administration of storage across all Network Attached Storage (NAS) systems or Storage Area Network (SAN) devices.
- You want to ensure that replacing or upgrading storage does not disrupt data access.
- You want to virtualize storage to provide block services to applications.

Intro-5

VERITAS Storage Foundation

[Slide: VERITAS Storage Foundation provides host-based storage virtualization for performance, availability, and manageability benefits for enterprise computing environments. Stack, top to bottom: Company Business Process; High Availability (VERITAS Cluster Server/Replication); Application Solutions (Storage Foundation for Databases); Data Protection (VERITAS NetBackup/Backup Exec); Volume Manager and File System (VERITAS Storage Foundation); Hardware and Operating System.]

Introducing VERITAS Storage Foundation

VERITAS storage management solutions address the increasing costs of managing mission-critical data and disk resources in Direct Attached Storage (DAS) and Storage Area Network (SAN) environments.

At the heart of these solutions is VERITAS Storage Foundation, which includes VERITAS Volume Manager (VxVM), VERITAS File System (VxFS), and other value-added products. Independently, these components provide key benefits. When used together as an integrated solution, VxVM and VxFS deliver the highest possible levels of performance, availability, and manageability for heterogeneous storage environments.

Intro-6

[Slide: Users, applications, and databases access virtual storage resources (volumes) that VERITAS Volume Manager (VxVM) maps onto physical disks.]

What Is VERITAS Volume Manager?

VERITAS Volume Manager, the industry leader in storage virtualization, is an easy-to-use, online storage management solution for organizations that require uninterrupted, consistent access to mission-critical data. VxVM enables you to apply business policies to configure, share, and manage storage without worrying about the physical limitations of disk storage. VxVM reduces the total cost of ownership by enabling administrators to easily build storage configurations that improve performance and increase data availability.

Working in conjunction with VERITAS File System, VERITAS Volume Manager creates a foundation for other value-added technologies, such as SAN environments, clustering and failover, automated management, backup and HSM, and remote browser-based management.

What Is VERITAS File System?

A file system is a collection of directories organized into a structure that enables you to locate and store files. All processed information is eventually stored in a file system. The main purposes of a file system are to:
- Provide shared access to data storage.
- Provide structured access to data.
- Control access to data.
- Provide a common, portable application interface.
- Enable the manageability of data storage.

The value of a file system depends on its integrity and performance.

Intro-7

VERITAS Storage Foundation: Benefits

Manageability
- Manage storage and file systems from one interface.
- Configure storage online across Solaris, HP-UX, AIX, and Linux.
- Provide additional benefits for array environments, such as inter-array mirroring.

Availability
- Features are implemented to protect against data loss.
- Online operations lessen planned downtime.

Performance
- I/O throughput can be maximized using volume layouts.
- Performance bottlenecks can be located and eliminated using analysis tools.

Scalability
- VxVM and VxFS run on 32-bit and 64-bit operating systems.
- Storage can be deported to larger enterprise platforms.

Benefits of VERITAS Storage Foundation

Commercial system availability now requires continuous uptime in many implementations. Systems must be available 24 hours a day, 7 days a week, and 365 days a year. VERITAS Storage Foundation reduces the cost of ownership by providing scalable manageability, availability, and performance enhancements for these enterprise computing environments.

Manageability
- Management of storage and the file system is performed online in real time, eliminating the need for planned downtime.
- Online volume and file system management can be performed through an intuitive, easy-to-use graphical user interface that is integrated with the VERITAS Volume Manager (VxVM) product.
- VxVM provides consistent management across Solaris, HP-UX, AIX, Linux, and Windows platforms.
- VxFS command operations are consistent across Solaris, HP-UX, AIX, and Linux platforms.
- Storage Foundation provides additional benefits for array environments, such as inter-array mirroring.

Availability
- Through software RAID techniques, storage remains available in the event of hardware failure.
- Hot relocation guarantees the rebuilding of redundancy in the case of a disk failure.
- Recovery time is minimized with logging and background mirror resynchronization.
- Logging of file system changes enables fast file system recovery.
- A snapshot of a file system provides an internally consistent, read-only image for backup, and file system checkpoints provide read-writable snapshots.

Intro-8

Performance
- I/O throughput can be maximized by measuring and modifying volume layouts while storage remains online.
- Performance bottlenecks can be located and eliminated using VxVM analysis tools.
- Extent-based allocation of space for files minimizes file level access time.
- Read-ahead buffering dynamically tunes itself to the volume layout.
- Aggressive caching of writes greatly reduces the number of disk accesses.
- Direct I/O performs file I/O directly into and out of user buffers.

Scalability
- VxVM runs on 32-bit and 64-bit operating systems.
- Hosts can be replaced without modifying storage.
- Hosts with different operating systems can access the same storage.
- Storage devices can be spanned.
- VxVM is fully integrated with VxFS so that modifying the volume layout automatically modifies the file system internals.
- With VxFS, several add-on products are available for maximizing performance in a database environment.

Intro-9

Storage Foundation and RAID Arrays: Benefits

With Storage Foundation, you can:
- Reconfigure and resize storage across the logical devices presented by a RAID array.
- Mirror between arrays to improve disaster recovery protection of an array.
- Use arrays and JBODs.
- Use snapshots with mirrors in different locations for disaster recovery and off-host processing.
- Use VERITAS Volume Replicator (VVR) to provide hardware-independent replication services.

Benefits of VxVM and RAID Arrays

RAID arrays virtualize individual disks into logical LUNs which are accessed by host operating systems as "physical devices," that is, using the same method of addressing as a directly-attached physical disk.

VxVM virtualizes both the physical disks and the logical LUNs presented by a RAID array. Modifying the configuration of a RAID array may result in changes in SCSI addresses of LUNs, requiring modification of application configurations. VxVM provides an effective method of reconfiguring and resizing storage across the logical devices presented by a RAID array.

When using VxVM with RAID arrays, you can leverage the strengths of both technologies:
- You can use VxVM to mirror between arrays to improve disaster recovery protection against the failure of an array, particularly if one array is remote. Arrays can be of different manufacture; that is, one array can be a RAID array and the other a JBOD.
- VxVM facilitates data reorganization and maximizes available resources.
- VxVM improves overall performance by making I/O activity parallel for a volume through more than one I/O path to and within the array.
- You can use snapshots with mirrors in different locations, which is beneficial for disaster recovery and off-host processing.
- If you include VERITAS Volume Replicator (VVR) in your environment, VVR can be used to provide hardware-independent replication services.

Intro-10

Storage Foundation Curriculum Path

[Slide: VERITAS Storage Foundation for UNIX: Fundamentals, followed by VERITAS Storage Foundation for UNIX: Maintenance, together form the VERITAS Storage Foundation for UNIX curriculum.]

VERITAS Storage Foundation Curriculum

VERITAS Storage Foundation for UNIX: Fundamentals training is designed to provide you with comprehensive instruction on making the most of VERITAS Storage Foundation.

Intro-11

Storage Foundation Fundamentals: Overview

- Lesson 1: Virtual Objects
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems

VERITAS Storage Foundation for UNIX: Fundamentals

Overview

This training provides comprehensive instruction on operating the file and disk management foundation products: VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). In this training, you learn how to combine file system and disk management technology to ensure easy management of all storage and maximum availability of essential data.

Objectives

After completing this training, you will be able to:
- Identify VxVM virtual storage objects and volume layouts.
- Install and configure Storage Foundation.
- Configure and manage disks and disk groups.
- Create concatenated, striped, mirrored, RAID-5, and layered volumes.
- Configure volumes by adding mirrors and logs and resizing volumes and file systems.
- Perform file system administration.
- Resolve basic hardware problems.

Intro-12

Course Resources

- Lab Exercises (Appendix A)
- Lab Solutions (Appendix B)
- Glossary

Additional Course Resources

Appendix A: Lab Exercises
This section contains hands-on exercises that enable you to practice the concepts and procedures presented in the lessons.

Appendix B: Lab Solutions
This section contains detailed solutions to the lab exercises for each lesson.

Glossary
For your reference, this course includes a glossary of terms related to VERITAS Storage Foundation.

Intro-13

Typographic Conventions Used in This Course

The following tables describe the typographic conventions used in this course.

Typographic Conventions in Text and Commands

Convention: Courier New, bold
Element: Command input, both syntax and examples
Examples:
  To display the robot and drive configuration: tpconfig -d
  To display disk information: vxdisk -o alldgs list

Convention: Courier New, plain
Element: Command output; command names, directory names, file names, path names, user names, and passwords; URLs when used within regular text paragraphs
Examples:
  In the output:
    protocol minimum: 40
    protocol maximum: 60
    protocol current: 0
  Locate the altnames directory.
  Go to http://www.symantec.com.
  Enter the value 300.
  Log on as user1.

Convention: Courier New, Italic (bold or plain)
Element: Variables in command syntax and examples. Variables in command input are Italic, plain. Variables in command output are Italic, bold.
Examples:
  To install the media server: /cdrom_directory/install
  To access a manual page: man command_name
  To display detailed information for a disk: vxdisk -g disk_group list disk_name

Typographic Conventions in Graphical User Interface Descriptions

Convention: Arrow
Element: Menu navigation paths
Example: Select File --> Save.

Convention: Initial capitalization
Element: Buttons, menus, windows, options, and other interface elements
Examples: Open the Task Status window. Select the Next button. Remove the checkmark from the Print File check box.

Convention: Quotation marks
Element: Interface elements with long names
Example: Select the "Include subvolumes in object view window" check box.

Intro-14

Lesson 1
Virtual Objects

Lesson Introduction

- Lesson 1: Virtual Objects
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems

Lesson Topics and Objectives

After completing this lesson, you will be able to:

Topic 1: Physical Data Storage
  Identify the structural characteristics of a disk that are affected by placing a disk under VxVM control.

Topic 2: Virtual Data Storage
  Describe the structural characteristics of a disk after it is placed under VxVM control.

Topic 3: Volume Manager Storage Objects
  Identify the virtual objects that are created by VxVM to manage data storage, including disk groups, VxVM disks, subdisks, plexes, and volumes.

Topic 4: Volume Manager RAID Levels
  Define VxVM RAID levels and identify virtual storage layout types used by VxVM to remap address space.

1-2

Physical Disk Structure

Physical storage objects:
- The basic physical storage device that ultimately stores your data is the hard disk.
- When you install your operating system, hard disks are formatted as part of the installation program.
- Partitioning is the basic method of organizing a disk to prepare for files to be written to and retrieved from the disk.
- A partitioned disk has a prearranged storage pattern that is designed for the storage and retrieval of data.

[Slide tabs: Solaris, HP-UX, AIX, Linux]

Physical Data Storage

Physical Disk Structure

Solaris

A physical disk under Solaris contains the partition table of the disk and the Volume Table of Contents (VTOC) in the first sector (512 bytes) of the disk. The VTOC has at least an entry for the backup partition on the whole disk (partition tag 5, normally partition number 2), so the OS may work correctly with the disk. The VTOC is always a part of the backup partition and may be part of a standard data partition. You can destroy the VTOC using the raw device driver on that partition, making the disk immediately unusable.

[Slide: Solaris disk layout. Sector 0 of the disk holds the VTOC; sectors 1-15 of the root partition hold the bootblock; the disk is divided into partitions (slices), and partition 2 (the backup slice) refers to the entire disk.]

If the disk contains the partition for the root file system mounted on / (partition tag 2), for example on an OS disk, this root partition contains the bootblock for the first boot stage after the Open Boot PROM within sectors 1-15. Sector 0 is skipped, so there is no overlapping between VTOC and bootblock if the root partition starts at the beginning of the disk.

The first sector of a file system on Solaris cannot start before sector 16 of the partition. Sector 16 contains the main super block of the file system. Using the block device driver of the file system prevents the VTOC and boot block from being overwritten by application data.

Note: On Solaris, VxVM 4.1 and later support EFI disks. EFI disks are an Intel-based technology that allows disks to retain BIOS code.

1-3
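The sector arithmetic above can be made concrete. The following Python sketch is purely illustrative (the sector numbers come from the description above, and a standard 512-byte sector is assumed); it computes the byte offsets of the on-disk structures:

```python
SECTOR_SIZE = 512  # bytes; standard Solaris disk sector

# Layout described above, as (first sector, last sector) per structure.
layout = {
    "VTOC": (0, 0),               # sector 0 of the disk
    "bootblock": (1, 15),         # sectors 1-15 of the root partition
    "main superblock": (16, 16),  # first possible file system sector
}

for name, (first, last) in layout.items():
    offset = first * SECTOR_SIZE
    length = (last - first + 1) * SECTOR_SIZE
    print(f"{name}: starts at byte {offset}, spans {length} bytes")
```

The main super block therefore begins 16 * 512 = 8192 bytes into the partition, which is why a file system on Solaris cannot start before sector 16.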
HP-UX

On an HP-UX system, the physical disk is traditionally partitioned using either the whole disk approach or Logical Volume Manager (LVM).

[Slide: The same HP-UX disk (c0t1d4) shown under the whole disk approach and as an LVM disk.]

The whole disk approach enables you to partition a disk in five ways: the whole disk is used by a single file system; the whole disk is used as swap area; the whole disk is used as a raw partition; a portion of the disk contains a file system, and the rest is used as swap; or the boot disk contains a 2-MB special boot area, the root file system, and a swap area.

An LVM data disk consists of four areas: Physical Volume Reserved Area (PVRA); Volume Group Reserved Area (VGRA); user data area; and Bad Block Relocation Area (BBRA).

AIX

A native AIX disk does not have a partition table of the kind familiar on many other operating systems, such as Solaris, Linux, and Windows. An application could use the entire unstructured raw physical device, but the first 512-byte sector normally contains information, including a physical volume identifier (pvid), to support recognition of the disk by AIX. An AIX disk is managed by IBM's Logical Volume Manager (LVM) by default. A disk managed by LVM is called a physical volume (PV). A physical volume consists of:
- PV reserved area: A physical volume begins with a reserved area of 128 sectors containing PV metadata, including the pvid.

1-4

- Volume Group Descriptor Area (VGDA): One or two copies of the VGDA follow. The VGDA contains information describing a volume group (VG), which consists of one or more physical volumes. Included in the metadata in the VGDA is the definition of the physical partition (PP) size, normally 4 MB.
- Physical partitions: The remainder of the disk is divided into a number of physical partitions. All of the PVs in a volume group have PPs of the same size, as defined in the VGDA. In a normal VG, there can be up to 32 PVs; in a big VG, there can be up to 128 PVs.

[Slide: Layout of an AIX physical volume (raw device hdisk3): the physical volume reserved area (128 sectors), the Volume Group Descriptor Areas, and physical partitions of equal size as defined in the VGDA.]

The term partition is used differently in different operating systems. In many kinds of UNIX, Linux, and Windows, a partition is a variable sized portion of contiguous disk space that can be formatted to contain a file system. In LVM, a PP is mapped to a logical partition (LP), and one or more LPs from any location throughout the VG can be combined to define a logical volume (LV). A logical volume is the entity that can be formatted to contain a file system (by default either JFS or JFS2). So a physical partition compares in concept more closely to a disk allocation cluster in some other operating systems, and a logical volume plays the role that a partition does in some other operating systems.

Linux

On Linux, a nonboot disk can be divided into one to four primary partitions. One of these primary partitions can be used to contain logical partitions, and it is called the extended partition. The extended partition can have up to 12 logical partitions on a SCSI disk and up to 60 logical partitions on an IDE disk. You can use fdisk to set up partitions on a Linux disk.
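The PP-to-LP-to-LV relationship described above can be sketched numerically. This Python fragment is illustrative only: the 4 MB PP size comes from the text, while the helper name and the 25-LP example volume are invented for the sketch.

```python
PP_SIZE_MB = 4  # default physical partition size recorded in the VGDA

def lv_size_mb(logical_partitions: int, copies: int = 1) -> int:
    """Space used by a logical volume: each LP maps to one PP per mirror copy."""
    return logical_partitions * PP_SIZE_MB * copies

# A hypothetical 25-LP logical volume:
print(lv_size_mb(25))     # usable size in MB (25 LPs x 4 MB)
print(lv_size_mb(25, 2))  # physical space consumed when 2-way mirrored
```

Because LPs can map to PPs anywhere in the volume group, resizing an LV is just a matter of mapping more LPs; the physical location of the added PPs is irrelevant to the file system above.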

1-5

[Slide: A Linux disk with Primary Partition 1 (/dev/sda1 or /dev/hda1), Primary Partition 2 (/dev/sda2 or /dev/hda2), Primary Partition 3 (/dev/sda3 or /dev/hda3), and Primary Partition 4, the extended partition (/dev/sda4 or /dev/hda4).]

On a Linux boot disk, the boot partition must be a primary partition and is typically located within the first 1024 cylinders of the drive. On the boot disk, you must also have a dedicated swap partition. The swap partition can be a primary or a logical partition, and it can be located anywhere on the disk.

Logical partitions must be contiguous, but they do not need to take up all of the space of the extended partition. Only one primary partition can be extended. The extended partition does not take up any space until it is subdivided into logical partitions.

VERITAS Volume Manager 4.0 for Linux does not support most hardware RAID controllers currently unless they present SCSI device interfaces with names of the form /dev/sdx. The following controllers are supported:
- PERC, on the Dell 1650
- MegaRAID, on the Dell 1650
- ServeRAID, on x440 systems

Compaq array controllers that require the SMART2 and CCISS drivers (which present device paths such as /dev/ida/c#d#p# and /dev/cciss/c#d#p#) are supported for normal use and for rootability.

1-6

,S)l1HHlh'(

Physical Disk Naming

VxVM parses disk names to retrieve connectivity information for disks. Operating
systems have different conventions:

Operating System   Device Naming Convention Example
Solaris            /dev/[r]dsk/c1t9d0s2
HP-UX              /dev/[r]dsk/c3t2d0 (no slice)
AIX                /dev/hdisk2 (no slice)
Linux              SCSI disks:
                   /dev/sda[1-4] (primary partitions)
                   /dev/sda[5-16] (logical partitions)
                   /dev/sdbN (on the second disk)
                   /dev/sdcN (on the third disk)
                   IDE disks:
                   /dev/hdaN, /dev/hdbN, /dev/hdcN

Physical Disk Naming


Solaris

You locate and access the data on a physical disk by using a device name that
specifies the controller, target ID, and disk number. A typical device name uses the
format: c#t#d#.
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.
If a disk is divided into partitions, you also specify the partition number in the
device name:
s# is the partition (slice) number.
For example, the device name c0t0d0s1 is connected to controller number 0 in
the system, with a target ID of 0, physical disk number 0, and partition number 1
on the disk.
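The c#t#d#s# convention can be illustrated by pulling a device name apart with shell parameter expansion. This is a hypothetical helper written for illustration (parse_solaris_dev is not a Solaris command):

```shell
# Break a Solaris device name such as c0t0d0s1 into its
# components: controller (c#), target (t#), LUN (d#), slice (s#).
parse_solaris_dev() {
    name=$1
    ctrl=${name#c};   ctrl=${ctrl%%[td]*}   # digits between c and t
    targ=${name#*t};  targ=${targ%%d*}      # digits between t and d
    disk=${name#*d};  disk=${disk%%s*}      # digits between d and s
    case $name in
        *s*) slice=${name##*s} ;;           # trailing slice number
        *)   slice="(whole disk)" ;;        # no s# means entire disk
    esac
    echo "controller=$ctrl target=$targ disk=$disk slice=$slice"
}

parse_solaris_dev c0t0d0s1
# controller=0 target=0 disk=0 slice=1
```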
HP-UX

You locate and access the data on a physical disk by using a device name that
specifies the controller, target ID, and disk number. A typical device name uses the
format: c#t#d#.
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.


For example, the c0t0d0 device name is connected to controller number 0 in
the system, with a target ID of 0, and physical disk number 0.

AIX

Every device in AIX is assigned a location code that describes its connection to the
system. The general format of this identifier is AB-CD-EF-GH, where the letters
represent decimal digits or uppercase letters. The first two characters represent the
bus, the second pair identify the adapter, the third pair represent the connector, and
the final pair uniquely represent the device. For example, a SCSI disk drive might
have a location identifier of 04-01-00-6,0. In this example, 04 means the PCI
bus, 01 is the slot number on the PCI bus occupied by the SCSI adapter, 00 means
the only or internal connector, and the 6,0 means SCSI ID 6, LUN 0.
However, this data is used internally by AIX to locate a device. The device name
that a system administrator or software uses to identify a device is less hardware
dependent. The system maintains a special database called the Object Data
Manager (ODM) that contains essential definitions for most objects in the system,
including devices. Through the ODM, a device name is mapped to the location
identifier. The device names are referred to by special files found in the /dev
directory. For example, the SCSI disk identified previously might have the device
name hdisk3 (the fourth hard disk identified by the system). The device named
hdisk3 is accessed by the file name /dev/hdisk3.
If a device is moved so that it has a different location identifier, the ODM is
updated so that it retains the same device name, and the move is transparent to
users. This is facilitated by the physical volume identifier stored in the first sector
of a physical volume. This unique 128-bit number is used by the system to
recognize the physical volume wherever it may be attached because it is also
associated with the device name in the ODM.
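The AB-CD-EF-GH location code can be split into its four fields with ordinary shell string operations. This is an illustrative sketch (split_location is our own name, not an AIX command):

```shell
# Split an AIX location code of the form AB-CD-EF-GH into its
# four fields: bus, adapter slot, connector, and device.
split_location() {
    loc=$1
    bus=${loc%%-*};    rest=${loc#*-}     # AB: the bus
    slot=${rest%%-*};  rest=${rest#*-}    # CD: the adapter slot
    conn=${rest%%-*}                      # EF: the connector
    dev=${rest#*-}                        # GH: the device (e.g. SCSI ID,LUN)
    echo "bus=$bus slot=$slot connector=$conn device=$dev"
}

split_location 04-01-00-6,0
# bus=04 slot=01 connector=00 device=6,0
```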
Linux

On Linux, device names are displayed in the format: sdx[N] or hdx[N]
In the syntax:
sd refers to a SCSI disk, and hd refers to an EIDE disk.
x is a letter that indicates the order of disks detected by the operating system.
For example, sda refers to the first SCSI disk, sdb refers to the second SCSI
disk, and so on.
N is an optional parameter that represents a partition number in the range 1
through 16. For example, sda7 references partition 7 on the first SCSI disk.
Primary partitions on a disk are numbered 1, 2, 3, 4; logical partitions have numbers 5 and up.
If the partition number is omitted, the device name indicates the entire disk.
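The sdx[N]/hdx[N] convention can be demonstrated with a small parser. This is a hypothetical helper for illustration only (parse_linux_dev is not a system utility):

```shell
# Decompose a Linux disk device name such as sda7 or hdb into
# bus type (SCSI vs EIDE), disk letter, and optional partition.
parse_linux_dev() {
    name=$1
    case $name in
        sd*) bus=SCSI ;;
        hd*) bus=EIDE ;;
    esac
    rest=${name#??}              # strip the sd/hd prefix
    disk=${rest%%[0-9]*}         # leading disk letter(s)
    part=${rest#"$disk"}         # trailing digits, if any
    [ -n "$part" ] || part="(entire disk)"
    echo "bus=$bus disk=$disk partition=$part"
}

parse_linux_dev sda7   # bus=SCSI disk=a partition=7
parse_linux_dev hdb    # bus=EIDE disk=b partition=(entire disk)
```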


Physical Data Storage

Reads and writes on unmanaged physical disks can be a slow process.
Disk arrays and multipathed disk arrays can improve I/O speed and throughput.
Disk array: A collection of physical disks used to balance I/O across multiple disks
Multipathed disk array: Provides multiple ports to access disks to achieve performance and
availability benefits
Note: Throughout this course, the term disk is used to mean either disk or LUN.
Whatever the OS sees as a storage device, VxVM sees as a disk.

Disk Arrays
Reads and writes on unmanaged physical disks can be a relatively slow process,
because disks are physical devices that require time to move the heads to the
correct position on the disk before reading or writing. If all of the read and write
operations are performed to individual disks, one at a time, the read-write time can
become unmanageable.
A disk array is a collection of physical disks. Performing I/O operations on
multiple disks in a disk array can improve I/O speed and throughput.
Hardware arrays present disk storage to the host operating system as LUNs.
Multipathed Disk Arrays
Some disk arrays provide multiple ports to access disk devices. These ports,
coupled with the host bus adapter (HBA) controller and any data bus or I/O
processor local to the array, compose multiple hardware paths to access the disk
devices. This type of disk array is called a multipathed disk array.
You can connect multipathed disk arrays to host systems in many different
configurations, such as:
- Connecting multiple ports to different controllers on a single host
- Chaining ports through a single controller on a host
- Connecting ports to different hosts simultaneously


Virtual Data Storage

Volume Manager creates a virtual layer of data storage. Volume Manager volumes
appear to applications to be physical disk partitions.
Volumes have block and character device nodes in the /dev tree: /dev/vx/[r]dsk/...
Multidisk configurations: concatenation, mirroring, striping, RAID-5
High availability: disk group import and deport, hot relocation, dynamic multipathing
Load balancing and disk spanning

Virtual Data Storage


Virtual Storage Management
VERITAS Volume Manager creates a virtual level of storage management above
the physical device level by creating virtual storage objects. The virtual storage
object that is visible to users and applications is called a volume.
What Is a Volume?
A volume is a virtual object, created by Volume Manager, that stores data. A
volume consists of space from one or more physical disks on which the data is
physically stored.
How Do You Access a Volume?
Volumes created by VxVM appear to the operating system as physical disks, and
applications that interact with volumes work in the same way as with physical
disks. All users and applications access volumes as contiguous address space using
special device files in a manner similar to accessing a disk partition.
Volumes have block and character device nodes in the /dev tree. You can supply
the name of the path to a volume in your commands and programs, in your file
system and database configuration files, and in any other context where you would
otherwise use the path to a physical disk partition.
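For example, a volume payvol in a disk group acctdg (example names used later in this lesson) is reached through paths built from the disk group and volume names. The construction is sketched below; the VxFS commands at the end are shown only as commented illustrations, since they require VxVM and an existing volume:

```shell
# Build the block and character (raw) device paths for a
# VxVM volume from its disk group and volume names.
dg=acctdg
vol=payvol

blockdev=/dev/vx/dsk/$dg/$vol      # block device node
rawdev=/dev/vx/rdsk/$dg/$vol       # character (raw) device node

echo "$blockdev"   # /dev/vx/dsk/acctdg/payvol
echo "$rawdev"     # /dev/vx/rdsk/acctdg/payvol

# These paths are used wherever a disk partition path would be,
# for example (illustration only; do not run without VxVM):
#   mkfs -F vxfs /dev/vx/rdsk/acctdg/payvol
#   mount -F vxfs /dev/vx/dsk/acctdg/payvol /acct
```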


Volume Manager Control

When you place a disk under VxVM control, a cross-platform data sharing (CDS)
disk layout is used, which ensures that the disk is accessible on different
platforms, regardless of the platform on which the disk was initialized. A CDS
disk contains:
- OS-reserved areas that contain platform blocks, VxVM ID blocks, and AIX and
  HP-UX coexistence labels
- A private region
- A public region

Volume Manager-Controlled Disks

With Volume Manager, you enable virtual data storage by bringing a disk under
Volume Manager control. By default in VxVM 4.0 and later, Volume Manager
uses a cross-platform data sharing (CDS) disk layout. A CDS disk is consistently
recognized by all VxVM-supported UNIX platforms and consists of:
OS-reserved area: To accommodate platform-specific disk usage, 128K is
reserved for disk labels, platform blocks, and platform-coexistence labels.
Private region: The private region stores information, such as disk headers,
configuration copies, and kernel logs, in addition to other platform-specific
management areas that VxVM uses to manage virtual objects. The private
region represents a small management overhead:
Operating System   Default Block/Sector Size   Default Private Region Size
Solaris            512 bytes                   65536 sectors (32MB)
HP-UX              1024 bytes                  32768 sectors (32MB)
AIX                512 bytes                   65536 sectors (32MB)
Linux              512 bytes                   65536 sectors (32MB)

Public region: The public region consists of the remainder of the space on the
disk. The public region represents the available space that Volume Manager
can use to assign to volumes and is where an application stores data. Volume
Manager never overwrites this area unless specifically instructed to do so.
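The private-region overhead can be cross-checked with shell arithmetic; the sector counts and sector sizes come from the table above, and multiplying them gives the same overhead on each platform:

```shell
# Private region overhead = sector count * sector size.
# Solaris/AIX/Linux: 65536 sectors of 512 bytes
# HP-UX:             32768 sectors of 1024 bytes
echo $(( 65536 * 512 / 1024 / 1024 ))    # 32 (MB)
echo $(( 32768 * 1024 / 1024 / 1024 ))   # 32 (MB)
```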


Comparing CDS and Pre-4.x Disks

CDS Disk (4.x Default):
- Private region (metadata) and public region (user data) are created on a single partition.
- Suitable for moving between different operating systems
- Not suitable for boot partitions

Sliced Disk (Pre-4.x Solaris Default):
- Private region and public region are created on separate partitions.
- Not suitable for moving between different operating systems
- Suitable for boot partitions

Simple Disk (Pre-4.x HP-UX Default):
- Private region and public region are created on the whole disk with specific offsets.
- Not suitable for moving between different operating systems
- Suitable for boot partitions
- Note: This format is called hpdisk format as of VxVM 4.1 on the HP-UX platform.

Comparing CDS Disks and Pre-4.x Disks

The pre-4.x disk layouts are still available in VxVM 4.0 and later. These layouts
are used for bringing the boot disk under VxVM control on operating systems that
support that capability.
On platforms that support bringing the boot disk under VxVM control, CDS disks
cannot be used for boot disks. CDS disks have specific disk layout requirements
that enable a common disk layout across different platforms, and these
requirements are not compatible with the particular platform-specific requirements
of boot disks. Therefore, when placing a boot disk under VxVM control, you must
use a pre-4.x disk layout (sliced on Solaris, hpdisk on HP-UX).
For nonboot disks, you can convert CDS disks to sliced disks and vice versa by
using VxVM utilities.
Other disk types, working with boot disks, and transferring data across platforms
with CDS disks are topics covered in detail in later lessons.


Volume Manager Storage Objects

(Diagram: the acctdg disk group contains the volumes payvol and expvol; the
volumes contain plexes, such as payvol-01, payvol-02, and expvol-01; the plexes
are built from subdisks, such as acctdg01-01, acctdg01-02, acctdg02-01,
acctdg03-01, and acctdg03-02; the subdisks reside on the VxVM disks acctdg01,
acctdg02, and acctdg03, each of which corresponds to a physical disk.)
Volume Manager Storage Objects


Disk Groups
A disk group is a collection of VxVM disks that share a common configuration.
You group disks into disk groups for management purposes, such as to hold the
data for a specific application or set of applications. For example, data for
accounting applications can be organized in a disk group called acctdg. A disk
group configuration is a set of records with detailed information about related
Volume Manager objects in a disk group, their attributes, and their connections.
Volume Manager objects cannot span disk groups. For example, a volume's
subdisks, plexes, and disks must be derived from the same disk group as the
volume. You can create additional disk groups as necessary. Disk groups enable
you to group disks into logical collections. Disk groups and their components can
be moved as a unit from one host machine to another.
Volume Manager Disks
A Volume Manager (VxVM) disk represents the public region of a physical disk
that is under Volume Manager control. Each VxVM disk corresponds to one
physical disk. Each VxVM disk has a unique virtual disk name called a disk media
name. The disk media name is a logical name used for Volume Manager
administrative purposes. Volume Manager uses the disk media name when
assigning space to volumes. A VxVM disk is given a disk media name when it is
added to a disk group.
Default disk media name: diskgroup##


You can supply the disk media name or allow Volume Manager to assign a default
name. The disk media name is stored with a unique disk ID to avoid name
collision. After a VxVM disk is assigned a disk media name, the disk is no longer
referred to by its physical address. The physical address (for example, c#t#d# or
hdisk#) becomes known as the disk access record.
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of
contiguous disk blocks that represent a specific portion of a VxVM disk, which is
mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's
public region. A subdisk is the smallest unit of storage in Volume Manager.
Therefore, subdisks are the building blocks for Volume Manager objects.
A subdisk is defined by an offset and a length in sectors on a VxVM disk.
Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share
the same portions of a VxVM disk. Any VxVM disk space that is not reserved or
that is not part of a subdisk is free space. You can use free space to create new
subdisks.
Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition
divide a disk into pieces defined by an offset address and length. Each of those
pieces represents a reservation of contiguous space on the physical disk. However,
while the maximum number of partitions to a disk is limited by some operating
systems, there is no theoretical limit to the number of subdisks that can be attached
to a single plex. This number has been limited by default to a value of 4096. If
required, this default can be changed, using the vol_subdisk_num tunable
parameter. For more information on tunable parameters, see the VERITAS Volume
Manager Administrator's Guide.
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a
structured or ordered collection of subdisks that represents one copy of the data in
a volume. A plex consists of one or more subdisks located on one or more physical
disks. The length of a plex is determined by the last block that can be read or
written on the last subdisk in the plex.
Default plex name: volume_name-##
Volumes
A volume is a virtual storage device that is used by applications in a manner
similar to a physical disk. Due to its virtual nature, a volume is not restricted by the
physical size constraints that apply to a physical disk. A VxVM volume can be as
large as the total of available, unreserved free physical disk space in the disk
group. A volume consists of one or more plexes.
Default volume name: volume_name##


Volume Layouts

Volume layout: The way plexes are configured to remap the volume address space
through which I/O is redirected
- Resilience: Layered
- Disk Spanning: Concatenated, Striped (RAID-0)
- Data Redundancy: Mirrored, RAID-5, Striped and Mirrored (RAID-0+1)
Volume Manager RAID Levels


RAID
RAID is an acronym for Redundant Array of Independent Disks. RAID is a
storage management approach in which an array of disks is created, and part of the
combined storage capacity of the disks is used to store duplicate information about
the data in the array. By maintaining a redundant array of disks, you can regenerate
data in the case of disk failure.
RAID configuration models are classified in terms of RAID levels, which are
defined by the number of disks in the array, the way data is spanned across the
disks, and the method used for redundancy. Each RAID level has specific features
and performance benefits that involve a trade-off between performance and
reliability.
Volume Layouts
RAID levels correspond to volume layouts. A volume's layout refers to the
organization of plexes in a volume. Volume layout is the way plexes are
configured to remap the volume address space through which I/O is redirected at
run-time. Volume layouts are based on the concepts of disk spanning, redundancy,
and resilience.
Disk Spanning
Disk spanning is the combining of disk space from multiple physical disks to form
one logical drive. Disk spanning has two forms:


Concatenation: Concatenation is the mapping of data in a linear manner
across two or more disks.
In a concatenated volume, subdisks are arranged both sequentially and
contiguously within a plex. Concatenation allows a volume to be created from
multiple regions of one or more disks if there is not enough space for an entire
volume on a single region of a disk.
Striping: Striping is the mapping of data in equally-sized chunks alternating
across multiple disks. Striping is also called interleaving.
In a striped volume, data is spread evenly across multiple disks. Stripes are
equally-sized fragments that are allocated alternately and evenly to the
subdisks of a single plex. There must be at least two subdisks in a striped plex,
each of which must exist on a different disk. Configured properly, striping not
only helps to balance I/O but also to increase throughput.
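To see how striping alternates data across columns, the sketch below computes which column (subdisk) holds a given volume offset. The 64K stripe unit size and the three-column count are illustrative assumptions, not fixed VxVM values:

```shell
# Map a volume byte offset to the stripe column that holds it.
# stripe_unit and ncols are illustrative example values.
stripe_unit=65536      # 64K stripe unit size, in bytes
ncols=3                # number of columns (subdisks) in the plex

column_of() {
    off=$1
    su=$(( off / stripe_unit ))   # which stripe unit the offset is in
    echo $(( su % ncols ))        # columns are used round-robin
}

column_of 0         # 0: first stripe unit goes to the first column
column_of 65536     # 1: second stripe unit goes to the second column
column_of 196608    # 0: fourth stripe unit wraps back to the first column
```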
Data Redundancy
To protect data against disk failure, the volume layout must provide some form of
data redundancy. Redundancy is achieved in two ways:
Mirroring: Mirroring is maintaining two or more copies of volume data.
A mirrored volume uses multiple plexes to duplicate the information contained
in a volume. Although a volume can have a single plex, at least two are
required for true mirroring (redundancy of data). Each of these plexes should
contain disk space from different disks for the redundancy to be useful.
Parity: Parity is a calculated value used to reconstruct data after a failure by
doing an exclusive OR (XOR) procedure on the data. Parity information can be
stored on a disk. If part of a volume fails, the data on that portion of the failed
volume can be re-created from the remaining data and parity information.
A RAID-5 volume uses striping to spread data and parity evenly across
multiple disks in an array. Each stripe contains a parity stripe unit and data
stripe units. Parity can be used to reconstruct data if one of the disks fails. In
comparison to the performance of striped volumes, write throughput of RAID-5
volumes decreases, because parity information needs to be updated each time
data is accessed. However, in comparison to mirroring, the use of parity
reduces the amount of space required.
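The XOR reconstruction described above can be demonstrated with shell arithmetic. With data values a and b and parity p = a XOR b, a lost value is recovered by XORing the survivors; the byte values below are arbitrary examples:

```shell
# RAID-5 style parity: parity = XOR of the data stripe units.
# If one data unit is lost, XOR the parity with the surviving
# units to regenerate it.
a=170          # first data byte  (0xAA)
b=60           # second data byte (0x3C)
p=$(( a ^ b )) # parity byte

# Simulate losing 'a' and rebuilding it from parity and 'b':
rebuilt=$(( p ^ b ))
echo "$rebuilt"   # 170
```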
Resilience
A resilient volume, also called a layered volume, is a volume that is built on one or
more other volumes. Resilient volumes enable the mirroring of data at a more
granular level. For example, a resilient volume can be concatenated or striped at
the top level and then mirrored at the bottom level.
A layered volume is a virtual Volume Manager object that nests other virtual
objects inside of itself. Layered volumes provide better fault tolerance by
mirroring data at a more granular level.


Lesson Summary

Key Points
This lesson described the virtual storage objects that VERITAS Volume Manager
uses to manage physical disk storage, including disk groups, VxVM disks,
subdisks, plexes, and volumes.
Reference Materials
VERITAS Volume Manager Administrator's Guide

Lab 1: Introducing the Lab Environment
In this lab, you are introduced to the lab environment, system, and disks that you
will use throughout this course.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 1: Introducing the Lab Environment."
Appendix B provides complete lab instructions and solutions: "Lab 1 Solutions:
Introducing the Lab Environment."



Lesson 2: Installation and Interfaces

Lesson Introduction

- Lesson 1: Virtual Objects
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems

Lesson Topics and Objectives

Topic                                        After completing this lesson, you will be able to:
Topic 1: Installation Prerequisites          Identify operating system compatibility and
                                             other preinstallation considerations.
Topic 2: Adding License Keys                 Obtain license keys, add licenses by using
                                             vxlicinst, and view licenses by using vxlicrep.
Topic 3: VERITAS Software Packages           Identify the packages that are included in the
                                             Storage Foundation 5.0 software.
Topic 4: Installing Storage Foundation       Install Storage Foundation interactively, by
                                             using the installation utility.
Topic 5: Storage Foundation User Interfaces  Describe the three Storage Foundation user
                                             interfaces.
Topic 6: Managing the VEA Server             Install, start, and manage the VEA server.


Compatibility

The VERITAS Storage Foundation product line operates on the following
operating systems:

SF Version  Solaris Version  HP-UX Version   AIX Version    Linux Version
5.0         8, 9, 10         11i.v2 (0904)   5.2, 5.3       RHEL 4 Update 3, SLES 9 SP3
4.1         8, 9, 10, x86    11i.v2 (0904)   No release     RHEL 4 Update 1 (2.6), SLES 9 SP1
4.0         7, 8, 9          No release      5.1, 5.2, 5.3  RHEL 3 Update 2 (i686)
3.5.x       2.6, 7, 8        11.11i (0902)   No release     No release*

* Note: Version 3.2.2 on Linux has functionality equivalent to 3.5 on Solaris.

Installation Prerequisites

OS Version Compatibility
Before installing Storage Foundation, ensure that the version of Storage
Foundation that you are installing is compatible with the version of the operating
system that you are running. You may need to upgrade your operating system
before you install the latest Storage Foundation version.
VERITAS Storage Foundation 5.0 operates on the following operating systems:
Solaris
- Solaris 8 (SPARC Platform 32-bit and 64-bit)
- Solaris 9 (SPARC Platform 32-bit and 64-bit)
- Solaris 10 (SPARC Platform 64-bit)
HP-UX
- September 2004 release of HP-UX 11i version 2.0 or later
AIX
- AIX 5.2 ML6 (legacy)
- AIX 5.3 TL4 with SP4
Linux
- Red Hat Enterprise Linux 4 (RHEL 4) with Update 3 (2.6.9-34 kernel) on
  AMD Opteron or Intel Xeon EM64T (x86_64)
- SUSE Linux Enterprise Server 9 (SLES 9) with SP3 (2.6.5-7.244 and
  2.6.5-7.252 kernels) on AMD Opteron or Intel Xeon EM64T (x86_64)
Check the VERITAS Storage Foundation Release Notes for additional operating
system requirements.


Support Resources

http://support.veritas.com
Search the VERITAS Support Web site for TechNotes, products, patches, and
support services.

Version Release Differences

With each new release of the Storage Foundation software, changes are made that
may affect the installation or operation of Storage Foundation in your
environment. By reading version release notes and installation documentation that
are included with the product, you can stay informed of any changes.
For more information about specific releases of VERITAS Storage Foundation,
visit the VERITAS Support Web site at: http://support.veritas.com
This site contains product and patch information, a searchable knowledge base of
technical notes, access to product-specific news groups and e-mail notification
services, and other information about contacting technical support staff.
Note: If you open a case with VERITAS Support, you can view updates at:
http://support.veritas.com/viewcase
You can access your case by entering the e-mail address associated with your case
and the case number.


Storage Foundation Licensing

- Licensing utilities are contained in the VRTSvlic package, which is common to
  all VERITAS products.
- To obtain a license key, either:
  - Create a vLicense account and retrieve license keys online. vLicense is a
    Web site that you can use to retrieve and manage your license keys.
  - Complete a License Key Request form and fax it to VERITAS customer support.
- To generate a license key, you must provide your:
  - Software serial number
  - Customer number
  - Order number
Note: You may also need the network and RDBMS platform, system
configuration, and software revision levels.

Adding License Keys

You must have your license key before you begin installation, because you are
prompted for the license key during the installation process. A new license key is
not necessary if you are upgrading Storage Foundation from a previously licensed
version of the product.
If you have an evaluation license key, you must obtain a permanent license key
when you purchase the product. The VERITAS licensing mechanism checks the
system date to verify that it has not been set back. If the system date has been reset,
the evaluation license key becomes invalid.
Obtaining a License Key
License keys are delivered on Software License Certificates to you at the
conclusion of the order fulfillment process. The certificate specifies the product
keys and the number of product licenses purchased. A single key enables you to
install the product on the number and type of systems for which you purchased the
license.
License keys are non-node locked.
- In a non-node locked model, one key can unlock a product on different servers
  regardless of host ID and architecture type.
- In a node locked model, a single license is tied to a single specific server. For
  each server, you need a different key.


Generating License Keys

http://vlicense.veritas.com
- Access automatic license key generation and delivery.
- Manage and track license key inventory and usage.
- Locate and reissue lost license keys.
- Report, track, and resolve license key issues online.
- Consolidate and share license key information with other accounts.

To add a license key: vxlicinst
License keys are installed in: /etc/vx/licenses/lic
To view installed license key information: vxlicrep
Displayed information includes:
- License key number
- Name of the VERITAS product that the key enables
- Type of license
- Features enabled by the key
Generating License Keys with vLicense

VERITAS vLicense (vlicense.veritas.com) is a self-service online license
management system. vLicense supports production license keys only. Temporary,
evaluation, or demonstration keys must be obtained through your VERITAS sales
representative.
Note: The VRTSvlic package can coexist with previous licensing packages, such
as VRTSlic. If you have old license keys installed in /etc/vx/elm, leave this
directory on your system. The old and new license utilities can coexist.


What Gets Installed?


In version 5.0, the default installation behavior is to
install all packages in Storage Foundation Enterprise HA.
In previous versions, the default behavior was to only
install packages for which you had typed in a license key.

In 5.0, you can choose to install:


All packages included in Storage Foundation Enterprise HA
or
All packages included in Storage Foundation Enterprise HA,
minus any optional packages, such as documentation
and software development kits

VERITAS Software Packages

When you install a product suite, the component product packages are installed
automatically. When installing Storage Foundation, be sure to follow the
instructions in the product release notes and installation guides.
Package Space Requirements
Before you install any of the packages, confirm that your system has enough free
disk space to accommodate the installation. Storage Foundation programs and files
are installed in the /, /usr, and /opt file systems. Refer to the product
installation guides for a detailed list of package space requirements.
Solaris Note
VxFS often requires more than the default 8K kernel stack size, so entries are
added to the /etc/system file. This increases the kernel thread stack size of the
system to 24K. The original /etc/system file is copied to
/etc/fs/vxfs/system.preinstall.


Optional Features

Features are included in the VxVM package, but they require a separate license:
- VERITAS FlashSnap: Enables point-in-time copies of data with minimal
  performance overhead. Includes disk group split/join, FastResync, and storage
  checkpointing (in conjunction with VxFS).
- VERITAS Volume Replicator: Enables replication of data to remote locations.
  (VRTSvrdoc: VVR documentation)
- VERITAS Cluster Volume Manager: Used for high availability environments.

Features are included in the VxFS package, but they require a separate license:
- VERITAS Quick I/O for Databases: Enables applications to access preallocated
  VxFS files as raw character devices.
- VERITAS Cluster File System: Enables multiple hosts to mount and perform file
  operations concurrently on the same file system.
- Dynamic Storage Tiering: Enables the support for multivolume file systems by
  managing the placement of files through policies that control both initial file
  location and the circumstances under which existing files are relocated.

Storage Foundation Optional Features


Several optional features do not require separate packages, only additional licenses. The following optional features are built into Storage Foundation and can be enabled with additional licenses:
VERITAS FlashSnap: FlashSnap facilitates point-in-time copies of data, while enabling applications to maintain optimal performance, by enabling features such as FastResync and disk group split and join functionality. FlashSnap provides an efficient method to perform offline and off-host processing tasks, such as backup and decision support.
VERITAS Volume Replicator: Volume Replicator augments Storage Foundation functionality to enable you to replicate data to remote locations over any IP network. Replicated copies of data can be used for disaster recovery, off-host processing, off-host backup, and application migration. Volume Replicator ensures maximum business continuity by delivering true disaster recovery and flexible off-host processing.
Cluster Functionality: Storage Foundation includes optional cluster functionality that enables Storage Foundation to be used in a cluster environment.
A cluster is a set of hosts that share a set of disks. Each host is referred to as a node in a cluster. When the cluster functionality is enabled, all of the nodes in the cluster can share VxVM objects. The main benefits of cluster configurations are high availability and off-host processing.
VERITAS Cluster Server (VCS): VCS supplies two major components integral to CFS: the Low Latency Transport (LLT) package and the Group


Membership and Atomic Broadcast (GAB) package. LLT provides node-to-node communications and monitors network communications. GAB provides cluster state, configuration, and membership services, and it monitors the heartbeat links between systems to ensure that they are active.
VERITAS Cluster File System (CFS): CFS is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file system.
VERITAS Cluster Volume Manager (CVM): CVM creates the cluster volumes necessary for mounting cluster file systems.
VERITAS Quick I/O for Databases: VERITAS Quick I/O for Databases (referred to as Quick I/O) enables applications to access preallocated VxFS files as raw character devices. This provides the administrative benefits of running databases on file systems without the performance degradation usually associated with databases created on file systems.
Dynamic Storage Tiering (DST): DST enables the support for multivolume file systems by managing the placement of files through policies that control both initial file location and the circumstances under which existing files are relocated.


Installation Menu

Storage Foundation and High Availability Solutions 5.0

SYMANTEC Product                                     Version Installed   Licensed
Veritas Cluster Server                               no                  no
Veritas File System                                  no                  no
Veritas Volume Manager                               no                  no
Veritas Volume Replicator                            no                  no
Veritas Storage Foundation                           no                  no
Veritas Storage Foundation for Oracle                no                  no
Veritas Storage Foundation for DB2                   no                  no
Veritas Storage Foundation for Sybase                no                  no
Veritas Storage Foundation Cluster File System       no                  no
Veritas Storage Foundation for Oracle RAC            no                  no

Task Menu:
I) Install/Upgrade a Product      C) Configure an Installed Product
L) License a Product              P) Perform a Preinstallation Check
U) Uninstall a Product            D) View a Product Description
Q) Quit                           ?) Help

Enter a Selection: [I,C,L,P,U,D,Q,?]

Installing Storage Foundation

The installer is a menu-based installation utility that you can use to install any product contained on the VERITAS Storage Solutions CD-ROM. This utility acts as a wrapper for existing product installation scripts and is most useful when you are installing multiple VERITAS products or bundles, such as VERITAS Storage Foundation or VERITAS Storage Foundation for Databases.
Note: The example on the slide is from a Solaris platform. Some of the products shown on the menu may not be available on other platforms. For example, VERITAS File System is available only as part of Storage Foundation on HP-UX.
Note: The VERITAS Storage Solutions CD-ROM contains an installation guide that describes how to use the installer utility. You should also read all product installation guides and release notes even if you are using the installer utility.
To add the Storage Foundation packages using the installer utility:
1. Log on as superuser.
2. Mount the VERITAS Storage Solutions CD-ROM.
3. Locate and invoke the installer script:
   cd /cdrom/CD_name
   ./installer
   If the licensing utilities are installed, the product status page is displayed. This list displays the VERITAS products on the CD-ROM and the installation and licensing status of each product. If the licensing utilities are not installed, you receive a message indicating that the installation utility could not determine product status.


4. Type I to install a product. Follow the instructions to select the product that you want to install. Installation begins automatically.

When you add Storage Foundation packages by using the installer utility, all packages are installed. If you want to add a specific package only, for example, only the VRTSvmdoc package, then you must add the package manually from the command line.
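If you script around the installer, a minimal guard for the two common failure cases (not superuser, CD-ROM not mounted) might look like the following sketch. The mount point /cdrom/cdrom0 is an assumption; substitute the path used on your platform.

```shell
# Sketch of a guarded installer invocation; /cdrom/cdrom0 is a
# hypothetical Solaris auto-mount point, not a fixed product path.
CDROM=${CDROM:-/cdrom/cdrom0}

run_installer() {
    if [ "$(id -u)" -ne 0 ]; then
        echo "error: the installer must be run as superuser" >&2
        return 1
    fi
    if [ ! -x "$CDROM/installer" ]; then
        echo "error: installer not found under $CDROM" >&2
        return 1
    fi
    # Run from the CD so the script finds its bundled files
    ( cd "$CDROM" && ./installer )
}
```

Calling run_installer then either launches the menu or prints the reason it cannot.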
After installation, the installer creates three text files that can be used for auditing or debugging. The names and locations of each file are displayed at the end of the installation, and the files are located in /opt/VRTS/install/logs:

Installation log file: Contains all commands executed during installation, their output, and any errors generated by the commands; used for debugging installation problems and for analysis by VERITAS Support.
Response file: Contains configuration information entered during the procedure; can be used for future installation procedures when using the installer script with the -responsefile option.
Summary file: Contains the output of VERITAS product installation scripts; shows products that were installed, locations of log and response files, and installation messages displayed.

Methods for Adding Storage Foundation Packages

A first-time installation of Storage Foundation involves adding the software packages and configuring Storage Foundation for first-time use. You can add VERITAS product packages by using one of three methods:

Method: VERITAS installation menu
Command: installer
Notes: Installs multiple VERITAS products interactively. Installs packages and configures Storage Foundation for first-time use.

Method: Product installation scripts
Commands: installvm, installfs, installsf
Notes: Install individual VERITAS products interactively. Installs packages and configures Storage Foundation for first-time use.

Method: Native operating system package installation commands
Commands: pkgadd (Solaris), swinstall (HP-UX), installp (AIX), rpm (Linux)
Notes: Install individual packages, for example, when using your own custom installation scripts. First-time Storage Foundation configuration must be run as a separate step. Then, to configure SF: vxinstall
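As a small illustration of how a custom installation script might use the native commands, the following helper maps a uname -s value to that platform's package-add command. This is a sketch; only the command names come from the text.

```shell
# Map a platform (as reported by uname -s) to its native
# package-add command, per the table in the text.
native_pkg_cmd() {
    case "$1" in
        SunOS) echo "pkgadd"    ;;
        HP-UX) echo "swinstall" ;;
        AIX)   echo "installp"  ;;
        Linux) echo "rpm"       ;;
        *)     echo "unknown"   ;;
    esac
}

# Show the command for the platform this script runs on
native_pkg_cmd "$(uname -s)"
```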


Configuring Storage Foundation

When you install Storage Foundation, you are asked if you want to configure:

Enclosure-Based Naming
[Diagram: a host connected to disk enclosures enc0, enc1, and enc2.]
- Standard device naming is based on controllers, for example, c1t0d0s2.
- Enclosure-based naming is based on disk enclosures, for example, enc0.

Default Disk Group
- You can set up a systemwide default disk group to which Storage Foundation commands default if you do not specify a disk group.
- If you choose not to set a default disk group at installation, you can set the default disk group later from the command line.
- Note: In Storage Foundation 4.0 and later, the rootdg requirement no longer exists.

Configuring Storage Foundation

When you install Storage Foundation, you are asked if you want to configure it during installation. This includes deciding whether to use enclosure-based naming and a default disk group.
What Is Enclosure-Based Naming?
An enclosure, or disk enclosure, is an intelligent disk array, which permits hot-swapping of disks. With Storage Foundation, disk devices can be named for enclosures rather than for the controllers through which they are accessed as with standard disk device naming (for example, c0t0d0 or hdisk2).
Enclosure-based naming allows Storage Foundation to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures. This is especially useful in a storage area network (SAN) that uses Fibre Channel hubs or fabric switches and when managing the dynamic multipathing (DMP) feature of Storage Foundation. For example, if two paths (c1t99d0 and c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP metanode, such as enc0_0, to access the disk.
What Is a Default Disk Group?
The main benefit of creating a default disk group is that Storage Foundation commands default to that disk group if you do not specify a disk group on the command line. defaultdg specifies the default disk group and is an alias for the disk group name that should be assumed if a disk group is not specified in a command.
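For example, assuming a disk group named datadg already exists, the default disk group could later be set and displayed from the command line. The vxdctl and vxdg commands are only present once VxVM is installed, so this sketch guards for that; "datadg" is a hypothetical name.

```shell
# Sketch: set and display the systemwide default disk group after
# installation. Guarded because VxVM may not be installed on the
# machine running this script; "datadg" is a hypothetical group.
if command -v vxdctl >/dev/null 2>&1; then
    vxdctl defaultdg datadg   # set the default disk group
    vxdg defaultdg            # display the current default
    vxvm_present=yes
else
    echo "VxVM is not installed on this host; skipping"
    vxvm_present=no
fi
```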

Storage Foundation Management Server

- Storage Foundation 5.0 provides central management capability by introducing a Storage Foundation Management Server (SFMS).
- With SF 5.0, it is possible to configure an SF host as a managed host or as a standalone host during installation.
- A Management Server and Authentication Broker must have previously been set up if a managed host is required during installation.
- To configure a server as a standalone host during installation, you need to answer "n" when asked if you want to enable SFMS Management.
- You can change a standalone host to a managed host at a later time.

Note: This course does not cover SFMS and managed hosts.

Storage Foundation Management Server

Storage Foundation 5.0 provides central management capability by introducing a Storage Foundation Management Server (SFMS). For more information, refer to the Storage Foundation Management Server Administrator's Guide.


Verifying Package Installation

To verify package installation, use OS-specific commands:
Solaris: pkginfo -l VRTSvxvm
HP-UX: swlist -l product VRTSvxvm
AIX: lslpp -l VRTSvxvm
Linux: rpm -qa VRTSvxvm

Verifying Package Installation

If you are not sure whether VERITAS packages are installed, or if you want to verify which packages are installed on the system, you can view information about installed packages by using OS-specific commands to list package information.
Solaris
To list all installed packages on the system:
pkginfo
To restrict the list to installed VERITAS packages:
pkginfo | grep VRTS
To display detailed information about a package:
pkginfo -l VRTSvxvm

HP-UX
To list all installed packages on the system:
swlist -l product
To restrict the list to installed VERITAS packages:
swlist -l product | grep VRTS
To display detailed information about a package:
swlist -l product VRTSvxvm

AIX
To list all installed packages on the system:
lslpp -l
To restrict the list to installed VERITAS packages, type:
lslpp -l 'VRTS*'
To verify that a particular fileset has been installed, use its name, for example:
lslpp -l VRTSvxvm

Linux
To verify package installation on the system:
rpm -qa | grep VRTS
To verify a specific package installation on the system:
rpm -q[i] package_name
For example, to verify that the VRTSvxvm package is installed:
rpm -q VRTSvxvm
The -i option lists detailed information about the package.
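The per-platform commands above can be wrapped in a single portable check. This is a sketch; the query commands are exactly those listed above, and the package name is just an example.

```shell
# Portable sketch: choose the right query command for the running
# OS and report whether a given package is installed.
pkg_installed() {
    case "$(uname -s)" in
        SunOS) pkginfo -l "$1"        ;;
        HP-UX) swlist -l product "$1" ;;
        AIX)   lslpp -l "$1"          ;;
        Linux) rpm -q "$1"            ;;
        *)     return 2               ;;   # unsupported platform
    esac >/dev/null 2>&1
}

if pkg_installed VRTSvxvm; then
    echo "VRTSvxvm is installed"
else
    echo "VRTSvxvm is not installed"
fi
```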


Storage Foundation User Interfaces

Storage Foundation supports three user interfaces:
- VERITAS Enterprise Administrator (VEA): A GUI that provides access through icons, menus, wizards, and dialog boxes
  Note: This course only covers using VEA on a standalone host.
- Command-Line Interface (CLI): UNIX utilities that you invoke from the command line
- Volume Manager Support Operations (vxdiskadm): A menu-driven, text-based interface also invoked from the command line
  Note: vxdiskadm only provides access to certain disk and disk group management functions.
Storage Foundation User Interfaces

Storage Foundation supports three user interfaces. Volume Manager objects created by one interface are compatible with those created by the other interfaces.
VERITAS Enterprise Administrator (VEA): VERITAS Enterprise Administrator (VEA) is a graphical user interface to Volume Manager and other VERITAS products. VEA provides access to Storage Foundation functionality through visual elements, such as icons, menus, wizards, and dialog boxes. Using VEA, you can manipulate Volume Manager objects and also perform common file system operations.
Command-Line Interface (CLI): The command-line interface (CLI) consists of UNIX utilities that you invoke from the command line to perform Storage Foundation and standard UNIX tasks. You can use the CLI not only to manipulate Volume Manager objects, but also to perform scripting and debugging functions. Most of the CLI commands require superuser or other appropriate privileges. The CLI commands perform functions that range from the simple to the complex, and some require detailed user input.
Volume Manager Support Operations (vxdiskadm): The Volume Manager Support Operations interface, commonly called vxdiskadm, is a menu-driven, text-based interface that you can use for disk and disk group administration functions. The vxdiskadm interface has a main menu from which you can select storage management tasks.
A single VEA task may perform multiple command-line tasks.


VEA: Main Window

[Screenshot of the VEA main window, with the menu bar, toolbar, and quick access bar labeled.]

Three ways to access tasks:
1. Menu bar
2. Toolbar
3. Context menu (right-click)

Using the VEA Interface

The VERITAS Enterprise Administrator (VEA) is the graphical user interface for Storage Foundation and other VERITAS products. You can use the Storage Foundation features of VEA to administer disks, volumes, and file systems on local or remote machines.
VEA is a Java-based interface that consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. The VEA client can run on any machine that supports the Java 1.4 Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or Windows.
Some Storage Foundation features of VEA include:
- Remote Administration
- Security
- Multiple Host Support
- Multiple Views of Objects

Setting VEA Preferences
You can customize general VEA environment attributes through the Preferences window (select Tools > Preferences).


VEA: Viewing Tasks and Commands

[Screenshot of the VEA Task Log window.]

To view underlying command lines, double-click a task.

Viewing Commands Through the Task Log

The Task Log displays a history of the tasks performed in the current session. Each task is listed with properties, such as the target object of the task, the host, the start time, the task status, and task progress.
Displaying the Task Log window: To display the Task Log, click the Logs tab at the left of the main window.
Clearing the Task History: Tasks are persistent in the Task History window. To remove completed tasks from the window, right-click a task and select Clear All Finished Tasks.
Viewing CLI Commands: To view the command lines executed for a task, double-click a task. The Task Log Details window is displayed for the task. The CLI commands issued are displayed in the Commands Executed field of the Task Details section.


Command-Line Interface

You can administer CLI commands from the UNIX shell prompt.
Commands can be executed individually or combined into scripts.
Most commands are located in /usr/sbin. Add this directory to your PATH environment variable to access the commands.
Examples of CLI commands include:
vxassist   Creates and manages volumes
vxprint    Lists VxVM configuration records
vxdg       Creates and manages disk groups
vxdisk     Administers disks under VM control
CLI commands are detailed in manual pages.

Using the Command-Line Interface

The Storage Foundation command-line interface (CLI) provides commands used for administering Storage Foundation from the shell prompt on a UNIX system. CLI commands can be executed individually for specific tasks or combined into scripts.
The Storage Foundation command set ranges from commands requiring minimal user input to commands requiring detailed user input. Many of the Storage Foundation commands require an understanding of Storage Foundation concepts. Most Storage Foundation commands require superuser or other appropriate access privileges.

Accessing Manual Pages for CLI Commands

Detailed descriptions of VxVM and VxFS commands, the options for each utility, and details on how to use them are located in VxVM and VxFS manual pages. Manual pages are installed by default in /opt/VRTS/man. Add this directory to the MANPATH environment variable, if it is not already added.
To access a manual page, type man command_name.
Examples:
man vxassist
man mount_vxfs
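Adding the directory to MANPATH can be scripted as follows; the only assumption is the default install location, /opt/VRTS/man, cited above.

```shell
# Append the VxVM/VxFS manual page directory to MANPATH so that
# man(1) can find the pages; preserves any existing MANPATH value.
MANPATH=${MANPATH:+$MANPATH:}/opt/VRTS/man
export MANPATH
echo "$MANPATH"
```

Place the two assignment lines in your shell startup file to make the setting permanent.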
Linux Note
On Linux, you must also set the MANSECT and MANPATH variables.

symantcc

The

vxdi

Interface

skadm

vxdiskadm
Volume Manager Support
Menu: volumeManager/Disk
1

Add or

Encapsulate

Remove a disk

initialize

Remove a disk

Replace
List

one or more disks

one or more disks

list

Operations

for

a failed

disk

replacement
or

removed

information

Display

help

about

menu

??

Display

help

about

the

Exit

from

menuing

The vxdiskadm
Volume Manager
Manager Support
perform common
10 managing disk
VxVM objects.

system

menus

Note: This example is from a Solaris platform.


slightly different on other platforms.

Using the vxdiskadm

disk

The options

may be

Interface

command is a CLI command that you can use to launch the


Support Operations menu interface. You can use the Volume
Operations interface, commonly referred to as vxdiskadm. to
disk management tasks. The vxdiskadm interface is restricted
objects and does not provide a means of handl ing all other

Each option in the vxdiskadm interface invokes a sequence of CLI commands. The vxdiskadm interface presents disk management tasks to the user as a series of questions, or prompts.
To start vxdiskadm, you type vxdiskadm at the command line to display the main menu.
The vxdiskadm main menu contains a selection of main tasks that you can use to manipulate Volume Manager objects. Each entry in the main menu leads you through a particular task by providing you with information and prompts. Default answers are provided for many questions, so you can select common answers.
The menu also contains options for listing disk information, displaying help information, and quitting the menu interface.
The tasks listed in the main menu are covered throughout this training. Options available in the menu differ somewhat by platform. See the vxdiskadm(1M) manual page for more details on how to use vxdiskadm.
Note: vxdiskadm can be run only once per host. A lock file prevents multiple instances from running: /var/spool/locks/.DISKADD.LOCK.
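A script that drives vxdiskadm could test for that lock before starting a new session. The lock file path below is reconstructed from the garbled text in this section, so treat the exact name as something to verify on your own system.

```shell
# Sketch: detect whether another vxdiskadm session holds the lock.
lock_held() {
    [ -e "$1" ]
}

# Path reconstructed from the text; verify it on your platform.
VXDISKADM_LOCK=/var/spool/locks/.DISKADD.LOCK
if lock_held "$VXDISKADM_LOCK"; then
    echo "vxdiskadm appears to be running; lock: $VXDISKADM_LOCK"
else
    echo "no vxdiskadm lock present"
fi
```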


Installing VEA

- Install the VEA server on a UNIX machine running Storage Foundation.
- Install the VEA client on any machine that supports the Java 1.4 Runtime Environment (or later).

Server packages (UNIX):
VRTSob, VRTSobc33, VRTSaa, VRTSdsa, VRTSvail, VRTSvmpro, VRTSfspro, VRTSccg, VRTSddlpr

Installation administration file (Solaris only): VRTSobadmin

Client packages:
VRTSobgui, VRTSat, VRTSpbx, VRTSicsco (UNIX)
windows/VRTSobgui.msi (Windows)

VEA is installed automatically when you run the SF installation scripts. You can also install VEA by adding packages manually.

Managing the VEA Software

VEA consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. You can install the VEA client on the same machine or on any other UNIX or Windows machine that supports the Java 1.4 Runtime Environment (or later).

Installing the VEA Server and Client on UNIX

If you install Storage Foundation by using the Installer utility, you are prompted to install both the VEA server and client packages automatically. If you did not install all of the components by using the Installer, you can add the VEA packages separately.
It is recommended that you upgrade VEA to the latest version released with Storage Foundation in order to take advantage of new functionality built into VEA. You can use the VEA with 4.1 and later to manage 3.5.2 and later releases.
When adding packages manually, you must install the Volume Manager packages (VRTSvlic, VRTSvxvm) and the infrastructure packages (VRTSat, VRTSpbx, VRTSicsco) before installing the VEA server packages. After installation, also add the VEA startup scripts directory, /opt/VRTSob/bin, to the PATH environment variable.


Starting the VEA Server and Client

Once installed, the VEA server starts up automatically at system startup.
To start the VEA server manually:
1. Log on as superuser.
2. Start the VEA server by invoking the server program:
   /opt/VRTSob/bin/vxsvc (on Solaris and HP-UX)
   /opt/VRTSob/bin/vxsvcctrl (on Linux)
When the VEA server is started:
- /var/vx/isis/vxisis.lock ensures that only one instance of the VEA server is running.
- /var/vx/isis/vxisis.log contains server process log messages.
To start the VEA client:
- On UNIX: /opt/VRTSob/bin/vea
- On Windows: Select Start > Programs > VERITAS > VERITAS Enterprise Administrator.

Starting the VEA Server

In order to use VEA, the VEA server must be running on the UNIX machine to be administered. Only one instance of the VEA server should be running at a time. Once installed, the VEA server starts up automatically at system startup. You can start the VEA server manually by invoking vxsvc (on Solaris and HP-UX) or vxsvcctrl (on Linux), or by invoking the startup script itself, for example:
Solaris: /etc/rc2.d/S73isisd start
HP-UX: /sbin/rc2.d/S700isisd start
The VEA client can provide simultaneous access to multiple host machines. Each host machine must be running the VEA server.
Note: Entries for your user name and password must exist in the password file or corresponding Network Information Name Service table on the machine to be administered. Your user name must also be included in the VERITAS administration group (vrtsadm, by default) in the group file or NIS group table. If the vrtsadm entry does not exist, only root can run VEA.
You can configure VEA to connect automatically to hosts when you start the VEA client. In the VEA main window, the Favorite Hosts node can contain a list of hosts that are reconnected by default at the startup of the VEA client.
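To check from a script whether the server is already up, you might combine the lock file mentioned above with a process check. The lock path and the vxsvc process name come from the text; the availability of pgrep is an assumption.

```shell
# Sketch: report whether the VEA server appears to be running,
# using the lock file and process name from the text.
vea_running() {
    [ -f /var/vx/isis/vxisis.lock ] && return 0
    pgrep -x vxsvc >/dev/null 2>&1
}

if vea_running; then
    vea_state=up
else
    vea_state=down
fi
echo "VEA server is $vea_state"
```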


Managing VEA

The VEA server program is:
/opt/VRTSob/bin/vxsvc (Solaris and HP-UX)
/opt/VRTSob/bin/vxsvcctrl (Linux)
To confirm that the VEA server is running:
vxsvc -m (Solaris and HP-UX)
vxsvcctrl status (Linux)
To stop and restart the VEA server:
/etc/init.d/isisd restart (Solaris)
/sbin/init.d/isisd restart (HP-UX)
To kill the VEA server process:
vxsvc -k (Solaris and HP-UX)
vxsvcctrl stop (Linux)
To display the VEA version number:
vxsvc -v (Solaris and HP-UX)
vxsvcctrl version (Linux)

Managing the VEA Server

Monitoring VEA Event and Task Logs
You can monitor VEA server events and tasks from the Event Log and Task Log nodes in the VEA object tree. You can also view the VEA log file, which is located at /var/vx/isis/vxisis.log. This file contains trace messages for the VEA server and VEA service providers.


Lesson Summary

Key Points
In this lesson, you learned guidelines for a first-time installation of VERITAS Storage Foundation, as well as an introduction to the three interfaces used to manage VERITAS Storage Foundation.
Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Installation Guide
- VERITAS Storage Foundation Release Notes
- Storage Foundation Management Server Administrator's Guide

Lab 2: Installation and Interfaces

In this lab, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the Storage Foundation user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command-line interface.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions, "Lab 2: Installation and Interfaces," page A-7.
Appendix B provides complete lab instructions and solutions, "Lab 2 Solutions: Installation and Interfaces," page B-7.

Lesson 3
Creating a Volume and File System

Lesson Introduction
- Lesson 1: Virtual Objects
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems

Lesson Topics and Objectives

After completing this lesson, you will be able to:

Topic 1: Preparing Disks and Disk Groups for Volume Creation
  Initialize an OS disk as a VxVM disk and create a disk group by using VEA and command-line utilities.
Topic 2: Creating a Volume
  Create a concatenated volume by using VEA and from the command line.
Topic 3: Adding a File System to a Volume
  Add a file system to and mount an existing volume.
Topic 4: Displaying Volume Configuration Information
  Display volume layout information by using VEA and by using the vxprint command.
Topic 5: Displaying Disk and Disk Group Information
  View disk and disk group information and identify disk status.
Topic 6: Removing Volumes, Disks, and Disk Groups
  Remove a volume, evacuate a disk, remove a disk from a disk group, and destroy a disk group.


Selecting a Disk Naming Scheme

Types of naming schemes:
- Traditional device naming: OS-dependent and based on physical connectivity information
- Enclosure-based naming: OS-independent, based on the logical name of the enclosure, and customizable

You can select a naming scheme:
- When you run Storage Foundation installation scripts
- Using the vxdiskadm option "Change the disk naming scheme"

Enclosure-based named disks are displayed in three categories:
- Enclosures: enclosurename_#
- Disks: Disk_#
- Others: Disks that do not return a path-independent identifier to VxVM are displayed in the traditional OS-based format.

Preparing Disks and Disk Groups for Volume Creation

Here are some examples of naming schemes:
Naming Scheme: Traditional
  Solaris: /dev/[r]dsk/c1t9d0s2
  HP-UX: /dev/[r]dsk/c3t2d0 (no slice)
  AIX: /dev/hdisk2
  Linux: /dev/sda, /dev/hda
Naming Scheme: Enclosure-based
  sena0_1, sena0_2, sena0_3...
Naming Scheme: Enclosure-based, customized
  englab2, hr1, boston3
Benefits of enclosure-based naming include:
- Easier fault isolation: Storage Foundation can more effectively place data and metadata to ensure data availability.
- Device-name independence: Storage Foundation is independent of arbitrary device names used by third-party drivers.
- Improved SAN management: Storage Foundation can create better location identification information about disks in large disk farms and SANs.
- Improved cluster management: In a cluster environment, disk array names on all hosts in a cluster can be the same.
- Improved dynamic multipathing (DMP) management: With multipathed disks, the name of a disk is independent of the physical communication paths, avoiding confusion and conflict.


[Diagram: an uninitialized disk passes through Stage 1 (initialize the disk) and Stage 2 (assign the disk to a disk group).]

Before Configuring a Disk for Use by VxVM

In order to use the space of a physical disk to build VxVM volumes, you must place the disk under Volume Manager control. Before a disk can be placed under Volume Manager control, the disk media must be formatted outside of VxVM using standard operating system formatting methods. SCSI disks are usually preformatted. After a disk is formatted, the disk can be initialized for use by Volume Manager. In other words, disks must be detected by the operating system before VxVM can detect the disks.
Stage One: Initialize a Disk
A formatted physical disk is considered uninitialized until it is initialized for use by VxVM. When a disk is initialized, the public and private regions are created, and VM disk header information is written to the private region. Any data or partitions that may have existed on the disk are removed.
These disks are under Volume Manager control but cannot be used by Volume Manager until they are added to a disk group.
Note: Encapsulation is another method of placing a disk under VxVM control in which existing data on the disk is preserved. This method is covered in a later lesson.
Changing the Disk Layout
To display or change the default values that are used for initializing disks, select the "Change/display the default disk layouts" option in vxdiskadm:
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

3-4
COPYright

,~, 2006

Svmantec

Cornorauon

All fights

reserved

For disk initialization, you can change the default format and the default length of the private region. If the attribute settings for initializing disks are stored in the user-created file, /etc/default/vxdisk, they apply to all disks to be initialized.
On Solaris, for disk encapsulation, you can additionally change the offset values for both the private and public regions. To make encapsulation parameters different from the default VxVM values, create the user-defined /etc/default/vxencap file and place the parameters in this file.
On HP-UX, when converting LVM disks, you can change the default format and the default private region length. The attribute settings are stored in the /etc/default/vxencap file.
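As a sketch, a user-created /etc/default/vxdisk file might look like the following. The attribute names shown here (format, privlen) are assumptions for illustration only; check the vxdisk(1M) manual page for the exact syntax supported on your platform.

```
# /etc/default/vxdisk -- illustrative example (attribute names assumed)
format=cdsdisk    # default format applied when initializing disks
privlen=2048      # default private region length, in sectors
```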
Stage Two: Assign a Disk to a Disk Group

When you add a disk to a disk group, VxVM assigns a disk media name to the disk and maps this name to the disk access name.
- Disk media name: A disk media name is the logical disk name assigned to a drive by VxVM. VxVM uses this name to identify the disk for volume operations, such as volume creation and mirroring.
- Disk access name: A disk access name represents all UNIX paths to the device. A disk access record maps the physical location to the logical name and represents the link between the disk media name and the disk access name. Disk access records are dynamic and can be re-created when vxdctl enable is run.

The disk media name and disk access name, in addition to the host name, are written to the private region of the disk. Space in the public region is made available for assignment to volumes. Volume Manager has full control of the disk, and the disk can be used to allocate space for volumes. Whenever the VxVM configuration daemon is started (or vxdctl enable is run), the system reads the private region on every disk and establishes the connections between disk access names and disk media names.
After disks are placed under Volume Manager control, storage is managed in terms of the logical configuration. File systems mount to logical volumes, not to physical partitions. Logical names, such as /dev/vx/[r]dsk/diskgroup/volume_name, replace physical locations, such as /dev/[r]dsk/device_name.
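As a quick illustration of this naming convention, a minimal shell sketch builds the two logical device paths for a volume; the disk group and volume names used here are hypothetical.

```shell
# Build the VxVM block and character (raw) device paths for a
# hypothetical volume "datavol" in a hypothetical disk group "datadg".
dg=datadg
vol=datavol

blockdev=/dev/vx/dsk/$dg/$vol    # block device: used when mounting
rawdev=/dev/vx/rdsk/$dg/$vol     # character (raw) device: used by fsck

echo "$blockdev"
echo "$rawdev"
```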
The free space in a disk group refers to the space on all disks within the disk group that has not been allocated as subdisks. When you place a disk into a disk group, its space becomes part of the free space pool of the disk group.

Stage Three: Assign Disk Space to Volumes

When you create volumes, space in the public region of a disk is assigned to the volumes. Some operations, such as removal of a disk from a disk group, are restricted if space on a disk is in use by a volume.


Disk Group Purposes

Disk groups enable you to:
- Group disks into logical collections for a set of users or applications.
- Easily move groups of disks from one host to another.
- Ease administration of high availability environments through deport and import operations.
What Is a Disk Group?


A disk group is a collection of physical disks, volumes, plexes, and subdisks that are used for a common purpose. A disk group is created when you place at least one disk in the disk group. When you add a disk to a disk group, a disk group entry is added to the private region header of that disk. Because a disk can only have one disk group entry in its private region header, one disk group does not "know about" other disk groups, and therefore disk groups cannot share resources, such as disk drives, plexes, and volumes.
A volume with a plex can belong to only one disk group, and subdisks and plexes of a volume must be stored in the same disk group. You can never have an "empty" disk group, because you cannot remove all disks from a disk group without destroying the disk group.

Why Are Disk Groups Needed?

Disk groups assist disk management in several ways:
- Disk groups enable the grouping of disks into logical collections for a particular set of users or applications.
- Disk groups enable data, volumes, and disks to be easily moved from one host machine to another.
- Disk groups ease the administration of high availability environments. Disk drives can be shared by two or more hosts, but they can be accessed by only one host at a time. If one host crashes, the other host can take over its disk groups and therefore its disks.


System-Wide Reserved Disk Groups

Reserved names: bootdg, defaultdg, nodg

nodg is the default value for bootdg and defaultdg.

To display what is set as bootdg or defaultdg:
vxdg bootdg
vxdg defaultdg

To set the default disk group after VxVM installation:
vxdctl defaultdg diskgroup

System-Wide Reserved Disk Groups

VxVM has reserved three disk group names that are used to provide boot disk group and default disk group functionality. The names "bootdg," "defaultdg," and "nodg" are system-wide reserved disk group names and cannot be used as names for any of the disk groups that you set up.
If you choose to place your boot disk under VxVM control, VxVM assigns bootdg as an alias for the name of the disk group that contains the volumes that are used to boot the system.
defaultdg is an alias for the disk group name that should be assumed if the -g option is not specified to a command. You can set defaultdg when you install VERITAS Volume Manager or any time after installation.
By default, both bootdg and defaultdg are set to nodg.

Notes
- The definitions of bootdg and defaultdg are written to the volboot file.
- The definition of bootdg results in a symbolic link named bootdg in /dev/vx/dsk and /dev/vx/rdsk.
- The rootdg disk group name is no longer a reserved name for VxVM versions after 4.0. If you are upgrading from a version of Volume Manager earlier than 4.0 where the system disk is encapsulated in the rootdg disk group, bootdg is assigned the value of rootdg automatically.


To create a disk group, you add a disk to a disk group.
- You can add a single disk or multiple disks.
- You cannot add a disk to more than one disk group.
- Default disk media names vary with the interface used to add the disk to a disk group, but they are conventionally in the format diskgroup##, such as datadg00, datadg01, and so on.
- Disk media names must be unique within a disk group.
- Adding a disk to a disk group makes the disk space available for use in creating Volume Manager volumes.

Creating a Disk Group


A disk must be placed into a disk group before it can be used by VxVM. A disk group cannot exist without having at least one associated disk. When you create a new disk group, you specify a name for the disk group and at least one disk to add to the disk group. The disk group name must be unique for the host machine.

Adding Disks

To add a disk to a disk group, you select an uninitialized disk or a free disk. If the disk is uninitialized, you must initialize the disk before you can add it to a disk group.

Disk Naming

When you add a disk to a disk group, the disk is assigned a disk media name. The disk media name is a logical name used for VxVM administrative purposes.

Notes on Disk Naming

You can change disk media names after the disks have been added to disk groups. However, if you must change a disk media name, it is recommended that you make the change before using the disk for any volumes. Renaming a disk does not rename the subdisks on the disk, which may be confusing.
Assign logical media names, rather than use the device names, to facilitate transparent logical replacement of failed disks. Assuming that you have a sensible disk group naming strategy, the VEA or vxdiskadm default disk naming scheme is a reasonable policy to adopt.


Create a disk group or add disks using vxdiskadm:
    "Add or initialize one or more disks"

Initialize disks:
vxdisksetup -i device_tag [attributes]
    vxdisksetup -i Disk_1      (enclosure-based naming)
    vxdisksetup -i c2t0d0      (Solaris and HP-UX)
    vxdisksetup -i hdisk2      (AIX)
    vxdisksetup -i sda2        (Linux)

Initialize the disk group by adding at least one disk:
vxdg init diskgroup disk_name=device_tag
    vxdg init datadg datadg01=Disk_1

Add more disks to the disk group:
vxdg -g diskgroup adddisk disk_name=device_tag
    vxdg -g datadg adddisk datadg02=Disk_2

Creating a Disk Group: vxdiskadm


From the vxdiskadm main menu, select the "Add or initialize one or more disks"
option. Specify the disk group to which the disk should be added. To add the disk
to a new disk group, you type a name for the new disk group. You use this same
menu option to add additional disks to the disk group.
To verify that the disk group was created, you can use vxdisk list.

When you add a disk to a disk group, the disk group configuration is copied onto the disk, and the disk is stamped with the system host ID.


New Disk Group Wizard (Actions -> New Disk Group): Enter a unique name for the disk group, and then select the disks to include.

Creating a Disk Group: VEA

Select: Disk Groups folder, or a free or uninitialized disk
Navigation path: Actions -> New Disk Group
Input:
- Group Name: Type the name of the disk group to be created.
- Available/Selected disks: Select at least one disk to be placed in the new disk group.
- Disk names: To specify a disk media name for the disk that you are placing in the disk group, type a name in the Disk name field. If no disk name is specified, VxVM assigns a default name. If you are adding multiple disks and specify only one disk name, VxVM appends numbers to create unique disk names.
- Organization Principle: In an Intelligent Storage Provisioning (ISP) environment, you can organize the disk group based on policies that you set up. This option is covered in a later lesson.
- Comment: Any user comments
- Create cluster group: Displayed on HP-UX platforms; to create a shared disk group, mark this check box; only applicable in a cluster environment.
- Activation mode: Displayed on HP-UX platforms; applies to cluster environments; possible values are Read write and Read only; the default setting is Read write for non-cluster environments.
Note: When working in a SAN environment, or any environment in which multiple hosts may share access to disks, it is recommended that you perform a rescan operation to update the VEA view of the disk status before allocating any disks. From the command line, you can run vxdctl enable.
Adding a Disk: VEA

Select: A free or uninitialized disk
Navigation path: Actions -> Add Disk to Disk Group
Input:
- Disk Group name: Select an existing disk group.
- New disk group: Click the New disk group button to add the disk to a new disk group.
- Select the disk to add: You can move disks between the Selected disks and Available disks fields by using the Add and Remove buttons.
- Disk Name(s): By default, Volume Manager assigns a disk media name that is based on the disk group name of a disk. You can assign a different name to the disk by typing a name in the Disk name(s) field. If you are adding more than one disk, place a space between each name in the Disk name(s) field.
- Comment: Any user comments

When the disk is placed under VxVM control, the Type property changes to Dynamic, and the Status property changes to Imported.


Creating a Volume: CLI

To create a volume:
vxassist -g diskgroup make volume_name length [attributes]

For example:
vxassist -g datadg make datavol 100m

Block and character (raw) device files are set up that you can use to access the volume:
- Block device file for the volume: /dev/vx/dsk/diskgroup/volume_name
- Character device file for the volume: /dev/vx/rdsk/diskgroup/volume_name

To display volume attributes, use:
vxassist -g diskgroup help showattrs

Creating a Volume

When you create a volume using VEA or CLI commands, you indicate the desired volume characteristics, and VxVM creates the underlying plexes and subdisks automatically. The VxVM interfaces require minimal input if you use default settings. For experienced users, the interfaces also enable you to enter more detailed specifications regarding all aspects of volume creation.

Before You Create a Volume

Before you create a volume, ensure that you have enough disks to support the layout type.
- A striped volume requires at least two disks.
- A mirrored volume requires at least one disk for each plex. A mirror cannot be on the same disk that other plexes of the same volume are using.

Creating a Volume: CLI

To create a volume from the command line, you use the vxassist command. In the syntax:
- Use the -g option to specify the disk group in which to create the volume.
- make is the keyword for volume creation.
- volume_name is a name you give to the volume. Specify a meaningful name.
- length specifies the number of sectors in the volume. You can specify the length in other units by adding an m, k, g, or t suffix to the length.
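Because an unsuffixed length is interpreted in sectors, it can help to see how the suffixed forms translate. The following is a small sketch, assuming 512-byte sectors (a common default; sector size varies by platform, and HP-UX uses 1024-byte sectors):

```shell
# Convert a vxassist-style length (k, m, or g suffix) to sectors,
# assuming 512-byte sectors; e.g. "100m" -> 204800 sectors.
to_sectors() {
    val=${1%?}                                     # strip the suffix character
    case $1 in
        *k) echo $(( val * 1024 / 512 )) ;;
        *m) echo $(( val * 1024 * 1024 / 512 )) ;;
        *g) echo $(( val * 1024 * 1024 * 1024 / 512 )) ;;
        *)  echo "$1" ;;                           # no suffix: already sectors
    esac
}

to_sectors 100m    # the 100m example above -> 204800 sectors
```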


Creating a Volume: VEA: Assigning Disks

(Actions -> New Volume)
Step 1: Select disks to use for the volume. Options include choosing which disks, controllers, or enclosures to include or exclude, Mirror Across, Stripe Across, and Ordered allocation.

Creating a Volume: VEA

Select: A disk group
Navigation path: Actions -> New Volume
Input:
- Disks for this volume: Let VxVM decide (default), or manually select disks to use.
- Volume attributes: Specify a volume name, the size of the volume, the type of volume layout, and other layout characteristics. Assign a meaningful name to the volume that describes the data stored in the volume.
- File system: Create a file system on the volume and set file system options.

New Volume Wizard Step 1: Assigning Disks to Use for a New Volume

By default, VxVM locates available space on all disks in the disk group and assigns the space to a volume automatically based on the layout you choose. Alternatively, you can choose specific disks; mirror or stripe across controllers, trays, targets, or enclosures; or implement ordered allocation. Ordered allocation is a method of allocating disk space to volumes based on a specific set of VxVM rules.

Creating a Volume: VEA: Setting Volume Attributes

Step 2: Specify volume attributes: volume name, comment, size (or Max Size), layout (Concatenated, Striped, RAID-5, Concatenated Mirrored, Striped Mirrored), mirror info, Enable logging, Enable FastResync, and Initialize zero. Default options change based on the layout type you select.

New Volume Wizard Step 2: Specifying Attributes for a New Volume

Volume name: Assign a meaningful name to the volume that describes the data stored in the volume.
Size: Specify a size for the volume. The default unit is GB. If you click the Max Size button, VxVM determines the largest size possible for the volume based on the layout selected and the disks to which the volume is assigned.
Select a size for the volume based on the volume layout and the space available in the disk group. The size of the volume must be less than or equal to the available free space on the disks.
The size specified in the Size field is the usable space in the volume. For a volume with redundancy (RAID-5, mirrored), VxVM allocates additional free space for the volume's parity information (RAID-5) or additional plexes (mirrored).
The free space available for constructing a volume of a specific layout is generally less than the total free space in the disk group unless the layout is concatenated or striped with no mirroring or logging.
Layout: Select a layout type from the group of options. The default layout is concatenated.
- Concatenated: The volume is created using one or more regions of specified disks.
- Striped: The volume is striped across two or more disks. The default number of columns across which the volume is striped is two, and the default stripe unit size is 128 sectors (64K) on Solaris, AIX, and Linux; 128 sectors (128K) on HP-UX. You can specify different values.

- Concatenated Mirrored and Striped Mirrored: These options denote layered volume layouts.

Mirror Info:
- Mirrored: Mirroring is recommended. To mirror the volume, mark the Mirrored check box. Only striped or concatenated volumes can be mirrored. RAID-5 volumes cannot be mirrored.
- Total mirrors: Type the total number of mirrors for the volume. A volume can have up to 32 plexes; however, the practical limit is 31. One plex is reserved by VxVM to perform restructuring or relocation operations.
Enable logging: To enable logging, mark the Enable logging check box. If you enable logging, a log is created that tracks regions of the volume that are currently being changed by writes. In case of a system failure, the log is used to recover only those regions identified in the log.
VxVM creates a dirty region log or a RAID-5 log, depending on the volume layout. If the layout is RAID-5, logging is enabled by default, and VxVM adds an appropriate number of logs to the volume.
Enable FastResync: To enable FastResync, mark the Enable FastResync check box. This option is displayed only if you have licensed the FastResync option.
Initialize zero: To clear the volume before enabling it for general use, mark the Initialize zero check box. For security purposes, you can use the Initialize zero option to overwrite all existing data in the volume area. However, this is time consuming due to all the space that has to be written.
No layered volumes: To prevent the creation of a layered volume, mark the No layered volumes check box. This option ensures that the volume has a nonlayered layout. If a layered layout is selected, this option is ignored.


Creating a Volume: VEA: Adding a File System During Volume Creation

Options: No file system, or Create a file system (specify the file system type, block size, and mount options: mount point, create mount point, read only, honor setuid, add to file system table, mount at boot, and fsck pass).

New Volume Wizard Step 3: Creating a Snapshot Cache Volume

A storage cache may be named and shared among several volumes in the same disk group. This is used only for point-in-time copies.

New Volume Wizard Step 4: Creating a File System on a New Volume

When you create a volume, you can place a file system on the volume and specify options for mounting the file system. You can place a file system on a volume when you create a volume or any time after creation.

The default option is "No file system." To place a file system on the volume, select the "Create a file system" option and specify:
File system type: Specify the file system type as either vxfs (VERITAS File System) or another OS-supported file system type (UFS on Solaris; HFS on HP-UX; on AIX, JFS and JFS2 are not supported on VxVM volumes). To add a VERITAS file system, the VxFS product must be installed with appropriate licenses.
Create Options:
- Compress: If your platform supports file compression, this option compresses the files on your file system (not available on Solaris/HP-UX).
- Allocation unit or Block size: Select an allocation unit size (for OS-supported file system types) or a block size (for VxFS file systems).
- New File System Details: Click this button to specify additional file-system-specific mkfs options. For VxFS, the only explicitly available additional options are large file support and log size. You can specify other options in the Extra Options field.


Mount Options:
- Mount point: Specify the mount point directory on which to mount the file system. The new file system is mounted immediately after it is created. Leave this field empty if you do not want to mount the file system.
- Create mount point: Mark this check box to create the directory if it does not exist. The mount point must be specified.
- Read only: Mark this check box to mount the file system as read only.
- Honor setuid: Mark this check box to mount the file system with the suid mount option. This option is marked by default.
- Add to file system table: Mark this check box to include the file system in the /etc/vfstab file (Solaris), the /etc/fstab file (HP-UX, Linux), or the /etc/filesystems file (AIX).
- Mount at boot: Mark this check box to mount the file system automatically whenever the system boots. This option is not displayed on HP-UX.
- fsck pass: Specify how many fsck passes will be run if the file system is not clean at mount time.
- Mount File System Details: Click this button to specify additional mount options. For VxFS, the explicitly available additional options include disabling Quick I/O, setting directory permissions and owner, and setting caching policy options. You can specify other options, such as quota, in the Extra options field.


Adding a File System After Volume Creation

1. CLI: Create the file system using mkfs (VxFS) or OS-specific file system creation commands.
   VEA: Select Actions -> File System -> New File System.
2. CLI: Create a mount point directory on which to mount the file system.
   VEA: Specify the mount point in the New File System dialog box.
3. CLI: Mount the volume to the mount point by using the mount command.
   VEA: If a file system was previously created on a volume, but not mounted, you can explicitly mount the file system by selecting Actions -> File System -> Mount File System.

Adding a File System to a Volume

A file system provides an organized structure to facilitate the storage and retrieval of files. You can add a file system to a volume when you create a volume or any time after you create the volume initially.
- When a file system has been mounted on a volume, the data is accessed through the mount point directory.
- When data is written to files, it is actually written to the block device file: /dev/vx/dsk/disk_group/volume_name.
- When fsck is run on the file system, the raw device file is checked: /dev/vx/rdsk/disk_group/volume_name.
Adding a File System to a Volume: CLI

To add a file system to a volume from the command line, you must create the file system, create a mount point for the file system, and then mount the file system.

Solaris

To create and mount a VxFS file system:

mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F vxfs /dev/vx/dsk/datadg/datavol /data

To create and mount a UFS file system:

newfs /dev/vx/rdsk/datadg/datavol
mkdir /data

mount /dev/vx/dsk/datadg/datavol /data

HP-UX

To create and mount a VxFS file system:

mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F vxfs /dev/vx/dsk/datadg/datavol /data

To create and mount an HFS file system:

newfs -F hfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F hfs /dev/vx/dsk/datadg/datavol /data

AIX

To create and mount a VxFS file system using mkfs:

mkfs -V vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -v vxfs /dev/vx/dsk/datadg/datavol /data

To create and mount a VxFS file system using crfs:

crfs -v vxfs -d /dev/vx/rdsk/datadg/datavol -m /data -A yes

Notes:
- An uppercase V is used with mkfs; a lowercase v is used with crfs (to avoid conflict with another crfs option).
- crfs creates the file system, creates the mount point, and updates the file systems file (/etc/filesystems). The -A yes option requests mount at boot.
- If the file system already exists in /etc/filesystems, you can mount the file system by simply using the syntax: mount mount_point.
Linux

To create and mount a VxFS file system using mkfs:

mkfs -t vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -t vxfs /dev/vx/dsk/datadg/datavol /data

Mounting a File System at Boot

To mount the file system automatically at boot time, edit the OS-specific file system table file to add an entry for the file system. Specify information, such as:
- Device to mount: /dev/vx/dsk/datadg/datavol
- Device to fsck: /dev/vx/rdsk/datadg/datavol
- Mount point: /data
- File system type: vxfs
- fsck pass
- Mount at boot: yes

Mounting a File System at Boot

Using the CLI, if you want the file system to be mounted at every system boot, you must edit the file system table file by adding an entry for the file system. If you later decide to remove the volume, you must remove the entry in the file system table file.

Platform    File System Table File
Solaris     /etc/vfstab
HP-UX       /etc/fstab
AIX         /etc/filesystems
Linux       /etc/fstab

AIX

In AIX, you can use the following commands when working with the file system table file, /etc/filesystems:
- To view entries: lsfs mount_point
- To change details of an entry, use chfs. For example, to turn off mount at boot: chfs -A no mount_point

In VEA, in the Mount File System dialog, if you mark the "Add to file system table" and "Mount at boot" (not on HP-UX) check boxes, the entry is made in the file system table file automatically. If the volume is later removed through VEA, its corresponding file system table file entry is also removed automatically.
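On Solaris, for example, an /etc/vfstab entry for the volume used throughout this lesson would combine those fields on one line. This is an illustrative sketch; the fsck pass value of 2 here is an arbitrary example choice.

```
#device to mount            device to fsck               mount point  FS type  fsck pass  mount at boot  mount options
/dev/vx/dsk/datadg/datavol  /dev/vx/rdsk/datadg/datavol  /data        vxfs     2          yes            -
```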


Displaying Volume Information: CLI

To display volume configuration information:

vxprint -g diskgroup [options]

-vpsd       Select only volumes (v), plexes (p), subdisks (s), or disks (d).
-h          List hierarchies below selected records.
-r          Display related records of a volume containing subvolumes.
-t          Print single-line output records that depend upon the configuration record type.
-l          Display all information from each selected record.
-a          Display all information about each selected record, one record per line.
-A          Select from all active disk groups.
-e pattern  Show records that match an editor pattern.

Displaying Volume Configuration Information
Displaying Volume Layout Information: CLI

The vxprint Command

You can use the vxprint command to display information about how a volume is configured. This command displays records from the VxVM configuration database.

vxprint -g diskgroup [options]

The vxprint command can display information about disk groups, disk media, volumes, plexes, and subdisks. You can specify a variety of options with the command to expand or restrict the information displayed. Only some of the options are presented in this training. For more information about additional options, see the vxprint(1M) manual page.


Displaying Volume Information: CLI

vxprint -g datadg -ht | more

The output begins with one header line per record type (DG, ST, DM, RV, RL, CO, VT, V, PL, SD, SV, SC, DC, SP); to interpret the output, match each header line with the output lines of the same type. For example, the disk group and disk records appear as:

dg datadg    default    default    91000    1000753077.1117.train12

dm datadg01  c1t10d0s2  auto  2048  4191264  -
dm datadg02  c1t11d0s2  auto  2048  4191264  -
dm datadg03  c1t14d0s2  auto  2048  4191264  -
dm datadg04  c1t15d0s2  auto  2048  4191264  -

Volume (v), plex (pl), and subdisk (sd) records follow, showing the kernel state (for example, ENABLED), the state (for example, ACTIVE), the length, the layout (for example, CONCAT), the device (for example, c1t10d0), and the mode (for example, RW).
Displaying Information for All Volumes

To display the volume, plex, and subdisk record information for a disk group:

vxprint -g diskgroup -ht

In the output, the top few lines indicate the headers that match each type of output line that follows. Each volume is listed along with its associated plexes and subdisks and other VxVM objects.
- dg is a disk group.
- st is a storage pool (used in Intelligent Storage Provisioning).
- dm is a disk.
- rv is a replicated volume group (used in VERITAS Volume Replicator).
- rl is an rlink (used in VERITAS Volume Replicator).
- co is a cache object.
- vt is a volume template (used in Intelligent Storage Provisioning).
- v is a volume.
- pl is a plex.
- sd is a subdisk.
- sv is a subvolume.
- sc is a storage cache.
- dc is a data change object.
- sp is a snap object.

For more information, see the vxprint(1M) manual page.
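Because each vxprint -ht output line starts with its record type, the output is easy to post-process with standard tools. The following sketch counts records by type from a captured sample; the sample lines below are illustrative, and in practice you would pipe vxprint -g datadg -ht straight into awk.

```shell
# Count VxVM records by type from saved "vxprint -ht" output.
# The sample records below are illustrative, not real command output.
cat > /tmp/vxprint.out <<'EOF'
dg datadg   default default 91000 1000753077.1117.train12
dm datadg01 c1t10d0s2 auto 2048 4191264 -
dm datadg02 c1t11d0s2 auto 2048 4191264 -
v  datavol  -        ENABLED ACTIVE 204800 SELECT -
pl datavol-01 datavol ENABLED ACTIVE 204800 CONCAT - RW
sd datadg01-01 datavol-01 datadg01 0 204800 0 c1t10d0 ENA
EOF

# Tally the dg, dm, v, pl, and sd records.
awk '$1 ~ /^(dg|dm|v|pl|sd)$/ { count[$1]++ }
     END { for (t in count) print t, count[t] }' /tmp/vxprint.out | sort
```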


Object Views in the Main Window

Highlight a volume, and click the tabs to display details.

Displaying Volume Information: VEA

To display information about volumes in VEA, you can select from several different views.

Object Views in the Main Window

You can view volumes and volume details by selecting an object in the object tree and displaying volume properties in the grid:
- To view the volumes in a disk group, select a disk group in the object tree and click the Volumes tab in the grid.
- To explore detailed components of a volume, select a volume in the object tree and click each of the tabs in the grid.


Viewing Basic Disk Information: CLI

To display basic information about all disks:

vxdisk -o alldgs list

DEVICE       TYPE           DISK       GROUP     STATUS
c1t10d0s2    auto:cdsdisk   datadg01   datadg    online
c1t11d0s2    auto:cdsdisk   datadg02   datadg    online
c1t12d0s2    auto:cdsdisk   -          -         online           (free disk)
c1t13d0s2    auto:none      -          -         online invalid   (uninitialized)
c1t14d0s2    auto:none      -          -         online invalid   (uninitialized)
c1t15d0s2    auto:none      -          -         online invalid   (uninitialized)
c1t16d0s2    auto:none      -          -         online invalid   (uninitialized)
c1t17d0s2    auto:none      -          -         online invalid   (uninitialized)

Note: In a shared access environment, when displaying disks, run vxdctl enable frequently to rescan for disk changes.

Displaying Disk and Disk Group Information

Displaying Basic Disk Information: CLI

You use the vxdisk list command to display basic information about all disks attached to the system. The vxdisk list command displays the:
- Device names for all recognized disks
- Type of disk, that is, how a disk is placed under VxVM control
- Disk names
- Disk group names associated with each disk
- Status of each disk

In the output:
- A status of online, in addition to entries in the DISK and GROUP columns, indicates that the disk has been initialized or encapsulated, assigned a disk media name, and added to a disk group. The disk is under Volume Manager control and is available for creating volumes.
- A status of online without entries in the DISK and GROUP columns indicates that the drive has been initialized or encapsulated but is not currently assigned to a disk group.
- A status of online invalid indicates that the disk has neither been initialized nor encapsulated by VxVM. The disk is not under VxVM control.
Note: On the HP-UX platform, LVM disks have a type of auto:LVM and a status of LVM.
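The three states described above can be derived mechanically from the columns of a vxdisk list row. A hedged sketch (Python for illustration, assuming the dash placeholders shown in the sample output for empty DISK and GROUP columns):

```python
def classify(line):
    """Classify one vxdisk list row: 'in_group', 'free', or 'uninitialized'.

    Assumes the five-column layout DEVICE TYPE DISK GROUP STATUS, with '-'
    used for empty DISK/GROUP fields (an assumption for this illustration).
    """
    fields = line.split()
    device, dtype, disk, group = fields[:4]
    status = " ".join(fields[4:])
    if "invalid" in status:
        return "uninitialized"   # online invalid: not under VxVM control
    if disk == "-":
        return "free"            # initialized but not in a disk group
    return "in_group"            # initialized, named, and in a disk group

print(classify("c1t10d0s2 auto:cdsdisk datadg01 datadg online"))
```

This mirrors the three bullet points above; a real script would iterate over the output of `vxdisk -o alldgs list`.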


To display detailed information for a disk:

vxdisk -g diskgroup list disk_name

Example:

vxdisk -g datadg list datadg01
Device:     c1t10d0s2
devicetag:  c1t10d0
type:       auto
hostid:     train12
disk:       name=datadg01 id=1000753057.1114.train12
group:      name=datadg id=1000753077.1117.train12

To display a summary for all disks:

vxdisk -s list

To display detailed information about a disk, you use the vxdisk list command with the name of the disk group and disk:

vxdisk -g diskgroup list disk_name

In the output:
- Device is the VxVM name for the device access path.
- devicetag is the name used by VxVM to refer to the physical disk.
- type is how a disk was placed under VM control; auto is the default type.
- hostid is the name of the system that currently manages the disk group to which the disk belongs; if blank, no host is currently controlling this group.
- disk is the VM disk media name and internal ID.
- group is the disk group name and internal ID.

To view a summary of information for all disks, you use the -s option with the vxdisk list command.

Note: The disk name and the disk group name are changeable. The disk ID and disk group ID are never changed as long as the disk group exists or the disk is initialized.
Note: The detailed information displayed by this command will be discussed later in the course.
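Because the detailed output is a series of "key: value" lines, it is straightforward to parse in a script. A minimal sketch (Python for illustration; the sample text is modeled on the example output above):

```python
# Illustrative parser for vxdisk -g dg list disk_name style output.
detail = """\
Device:     c1t10d0s2
devicetag:  c1t10d0
type:       auto
hostid:     train12
disk:       name=datadg01 id=1000753057.1114.train12
group:      name=datadg id=1000753077.1117.train12"""

def parse_detail(text):
    """Split each 'key: value' line into a dictionary entry."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            info[key.strip()] = value.strip()
    return info

info = parse_detail(detail)
# The disk field itself holds name=... id=... pairs; split those too.
media_name = dict(p.split("=", 1) for p in info["disk"].split())["name"]
print(media_name)
```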


Keeping Track of Your Disks

By viewing disk information, you can determine if a disk has been initialized and added to a disk group, verify the changes that you make to disks, and keep track of the status and configuration of your disks.

Displaying Disk Information: VEA

The status of a disk can be:
- Not Initialized: The disk is not under VxVM control. The disk may be in use as a raw device by an application.
- Free: The disk is initialized by VxVM but is not in a disk group. You cannot place a disk in this state using VEA, but VEA recognizes disks that have been initialized through other interfaces.
- Foreign: The disk is under the control of another host.
- Imported: The disk is in an imported disk group.
- Deported: The disk is in a deported disk group.
- Disconnected: The disk contains subdisks that are not available because of hardware failure. This status applies to disk media records for which the hardware has been unavailable and has not been replaced within VxVM.
- External: The disk is in use by a foreign manager.
- Inactive/Import failed: The disk group is not imported but the disks have the same host ID tag as the hostname of the system, for example, if the disk group is deported using the same hostname.


[Screenshot: Disk Properties window, General tab, showing attributes such as CDS, Status: Imported, Capacity, and Unallocated space.]

Select a unit to display capacity and unallocated space in other units.

Viewing Disk Properties: VEA

In VEA, you can also view disk properties in the Disk Properties window. To open the Disk Properties window, right-click a disk and select Properties.
The Disk Properties window includes the capacity of the disk and the amount of unallocated space. You can select the units for convenient display in the unit of your choice.


To display disk groups:

vxdg list
NAME      STATE         ID
datadg    enabled,cds   969583613.1025.cassius
newdg     enabled,cds   971216408.1133.cassius

To display free space in a disk group, use one of these:

vxassist -g diskgroup help space
vxdg -g diskgroup free

Displaying Disk Group Information: CLI

To display disk group information:
- Use vxdg list to display disk group names, states, and IDs for all imported disk groups in the system.
- Use vxdg free to display free space on each disk. This command displays free space on all disks in all disk groups that the host can detect. Add -g diskgroup to restrict the output to a specific disk group.
  Note: This command does not show space on spare disks. Reserved disks are displayed with an "r" in the FLAGS column.
- Use vxdisk -o alldgs list to display all disk groups, including deported disk groups. For example:

vxdisk -o alldgs list
DEVICE     TYPE           DISK       GROUP      STATUS
Disk_1     auto:cdsdisk   datadg01   datadg     online
Disk_7     auto:cdsdisk   -          (acctdg)   online


Viewing Disk Group Information: VEA

Right-click a disk group and select Properties.

[Screenshot: Disk Group Properties window for datadg showing attributes such as Status: Imported, ID, CDS: Yes, disk group version (140), number of disks and volumes, size, and free space.]

Viewing Disk Group Properties: VEA

The object tree in the VEA main window contains a Disk Groups node that displays all of the disk groups attached to a host. When you click a disk group, the VxVM objects contained in the disk group are displayed in the grid.
To view additional information about a disk group, right-click a disk group and select Properties.
The Disk Group Properties window is displayed. This window contains basic disk group properties, including:
- Disk group name, status, ID, and type
- Number of disks and volumes
- Disk group version
- Disk group size and free space
Note: On HP-UX, there is another attribute between the Version and Enabled attributes, which is Shared: No.


Removing a Volume

When a volume is removed, the space used by the volume is freed and can be used elsewhere. Unmount the file system before removing the volume.

VEA:
- Select the volume that you want to remove.
- Select Actions->Delete Volume.

vxassist:
vxassist -g diskgroup remove volume volume_name
vxassist -g datadg remove volume datavol

vxedit:
vxedit -g diskgroup -rf rm volume_name
vxedit -g datadg -rf rm datavol

Removing Volumes, Disks, and Disk Groups

Removing a Volume
Only remove a volume if you are sure that you do not need the data in the volume, or if the data is backed up elsewhere. A volume must be closed before it can be removed. For example, if the volume contains a file system, the file system must be unmounted. You must edit the OS-specific file system table file manually in order to remove the entry for the file system and avoid errors at boot. If the volume is used as a raw device, the application, such as a database, must close the device.
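The file system table edit described above amounts to dropping the line whose device matches the removed volume. A hedged sketch (Python for illustration; the vfstab-style lines and paths below are hypothetical, modeled on a Solaris file system table):

```python
def drop_entry(table_text, device):
    """Return the table text with the line for the given device removed.

    Assumes the device path is the first whitespace-separated field of each
    entry, as in a Solaris vfstab (an assumption for this illustration).
    """
    kept = []
    for line in table_text.splitlines():
        fields = line.split()
        if fields and fields[0] == device:
            continue            # skip the entry for the removed volume
        kept.append(line)
    return "\n".join(kept)

# Hypothetical table: one VxVM volume entry plus the root file system.
tab = ("/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /data vxfs 2 yes -\n"
       "/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -")
print(drop_entry(tab, "/dev/vx/dsk/datadg/datavol"))
```

In practice you would back up the table file first and edit it in place; the point is only that the entry must be gone before the next boot.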


Evacuating a Disk

Before removing a disk, you may need to evacuate data from the disk to another disk in the disk group.

VEA:
- Select the disk that you want to evacuate.
- Select Actions->Evacuate Disk.

vxdiskadm: "Move volumes from a disk"

CLI:
vxevac -g diskgroup from_disk [to_disk]
vxevac -g datadg datadg01 datadg02

To evacuate to any disk except for datadg03:

vxevac -g datadg datadg02 !datadg03

Evacuating a Disk
Evacuating a disk moves the contents of the volumes on a disk to another disk. The contents of a disk can be evacuated only to disks in the same disk group that have sufficient free space.


Removing a Disk from a Disk Group

VEA:
- Select the disk that you want to remove.
- Select Actions->Remove Disk from Disk Group.

vxdiskadm: "Remove a disk"

CLI:
vxdg -g diskgroup rmdisk disk_name
vxdiskunsetup [-C] device_tag

Example:
vxdg -g datadg rmdisk datadg02

Remove the disk from the disk group, and then uninitialize it with vxdiskunsetup.

Removing a Disk
If you select all disks for removal from the disk group, the disk group is destroyed automatically.
You can verify the removal by using the vxdisk list command to display disk information. A deconfigured disk has a status of online invalid and no longer has a disk media name or disk group assignment.

The vxdiskunsetup Command
After the disk has been removed from its disk group, you can remove it from Volume Manager control completely by using the vxdiskunsetup command. This command reverses the configuration of a disk by removing the public and private regions that were created by the vxdisksetup command. The vxdiskunsetup command does not operate on disks that are active members of an imported disk group.
This command does not usually operate on disks that appear to be imported by some other host, for example, a host that shares access to the disk. You can use the -C option to force deconfiguration of the disk, removing host locks that may be detected.


Destroying a Disk Group

Destroying a disk group:
- Means that the disk group no longer exists
- Removes all disks
- Is the only method for freeing the last disk in a disk group

VEA: Actions->Destroy Disk Group
CLI:
vxdg destroy diskgroup

Example: To destroy the disk group olddg and place its disks in the free disk pool:

vxdg destroy olddg

Destroying a Disk Group

Destroying a disk group permanently removes a disk group from Volume Manager control, and the disk group ceases to exist. When you destroy a disk group, all of the disks in the disk group are reinitialized as empty disks. Volumes and configuration information about the disk group are removed.
Because you cannot remove the last disk in a disk group, destroying a disk group is the only method to free the last disk in a disk group for reuse. A disk group cannot be destroyed if any volumes in that disk group are in use or contain mounted file systems. The bootdg disk group cannot be destroyed.
Caution: Destroying a disk group can result in data loss. Only destroy a disk group if you are sure that the volumes and data in the disk group are not needed.

Destroying a Disk Group: VEA
Select: The disk group to be destroyed
Navigation path: Actions->Destroy Disk Group
Input: Group name: Specify the disk group to be destroyed.

Destroying a Disk Group: CLI
To destroy a disk group from the command line, use the vxdg destroy command.
Note: You can bring back a destroyed disk group by importing it with its dgid.


Key Points
In this lesson, you learned how to create a volume with a file system. This lesson also described device-naming schemes and how to add a disk to a disk group, in addition to how to view configuration information for volumes, disk groups, and disks. In addition, you learned how to remove a volume, disk, and disk group.

Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Installation Guide

Lab 3
Lab 3: Creating a Volume and File System
In this lab, you create new disk groups, simple volumes, and file systems, mount and unmount the file systems, and observe the volume and disk properties. The first exercise uses the VEA interface. The second exercise uses the command-line interface.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
- Appendix A provides complete lab instructions: "Lab 3: Creating a Volume and File System."
- Appendix B provides complete lab instructions and solutions: "Lab 3 Solutions: Creating a Volume and File System."


Lesson 4
Selecting Volume Layouts

Lesson Introduction
Lesson 1: Virtual Objects
Lesson 2: Installation and Interfaces
Lesson 3: Creating a Volume and File System
Lesson 4: Selecting Volume Layouts
Lesson 5: Making Basic Configuration Changes
Lesson 6: Administering File Systems
Lesson 7: Resolving Hardware Problems

Lesson Topics and Objectives

Topic: After completing this lesson, you will be able to:
Topic 1: Comparing Volume Layouts — Identify the features, advantages, and disadvantages of volume layouts supported by VxVM.
Topic 2: Creating Volumes with Various Layouts — Create concatenated, striped, and mirrored volumes by using VEA and from the command line.
Topic 3: Creating a Layered Volume — Create layered volumes by using VEA and from the command line.
Topic 4: Allocating Storage for Volumes — Allocate storage for a volume by specifying storage attributes and ordered allocation.


Concatenated Layout

[Diagram: In disk group datadg, volume datavol consists of a single plex made up of subdisks; the subdisks map onto space on VxVM disks.]

Comparing Volume Layouts

Each volume layout has different advantages and disadvantages. For example, a volume can be extended across multiple disks to increase capacity, mirrored on another disk to provide data redundancy, or striped across multiple disks to improve I/O performance. The layouts that you choose depend on the levels of performance and reliability required by your system.

Concatenated Layout
A concatenated volume layout maps data in a linear manner onto one or more subdisks in a plex. Subdisks do not have to be physically contiguous and can belong to more than one VM disk. Storage is allocated completely from one subdisk before using the next subdisk in the span. Data is accessed in the remaining subdisks sequentially until the end of the last subdisk.
For example, if you have 14 GB of data, then a concatenated volume can logically map the volume address space across subdisks on different disks. The addresses 0 GB to 8 GB of volume address space map to the first 8-gigabyte subdisk, and addresses 8 GB to 14 GB map to the second 6-gigabyte subdisk. An address offset of 12 GB, therefore, maps to an address offset of 4 GB in the second subdisk.
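The address arithmetic above can be sketched in a few lines (Python for illustration; the subdisk sizes are the 8 GB and 6 GB from the example):

```python
def concat_map(offset, subdisk_sizes):
    """Map a volume offset to (subdisk_index, offset_within_subdisk)
    for a concatenated layout: fill each subdisk in order."""
    for index, size in enumerate(subdisk_sizes):
        if offset < size:
            return index, offset
        offset -= size
    raise ValueError("offset is beyond the end of the volume")

# Volume offset 12 GB on an 8 GB + 6 GB span lands 4 GB into the second subdisk.
print(concat_map(12, [8, 6]))
```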


Striped Layout

[Diagram: In disk group datadg, volume datavol has one plex, datavol-01, with stripe units SU1-SU12 interleaved across three columns of subdisks on separate VxVM disks.]

Striped Layout
A striped volume layout maps data so that the data is interleaved, or allocated in stripes, among two or more subdisks on two or more physical disks. Data is allocated alternately and evenly to the subdisks of a striped plex.
The subdisks are grouped into "columns." Each column contains one or more subdisks and can be derived from one or more physical disks. To obtain the maximum performance benefits of striping, you should not use a single disk to provide space for more than one column.
All columns must be the same size. The minimum size of a column should equal the size of the volume divided by the number of columns. The default number of columns in a striped volume is based on the number of disks in the disk group.
Data is allocated in equal-sized units, called stripe units, that are interleaved between the columns. Each stripe unit is a set of contiguous blocks on a disk. The stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The default stripe unit size is 64K, which provides adequate performance for most general purpose volumes. Performance of an individual volume may be improved by matching the stripe unit size to the I/O characteristics of the application using the volume.
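The round-robin interleaving described above can be expressed as simple arithmetic (Python for illustration; offsets and the stripe unit are in the same units, for example KB):

```python
def stripe_map(offset, ncol, stripe_unit):
    """Map a volume offset to (column, offset_within_column) for a striped
    layout: stripe units are dealt round-robin across ncol columns."""
    unit, within = divmod(offset, stripe_unit)   # which stripe unit, and where in it
    column = unit % ncol                         # round-robin column assignment
    column_offset = (unit // ncol) * stripe_unit + within
    return column, column_offset

# With 3 columns and a 64K stripe unit, volume offset 200K falls in the
# fourth stripe unit, which lands back on column 0.
print(stripe_map(200, 3, 64))
```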


Mirrored Layout

[Diagram: In disk group datadg, volume datavol has two plexes, datavol-01 and datavol-02, each an identical copy of the volume data built from subdisks (datadg01-02, datadg02-02, datadg03-01) on different VxVM disks.]

Each plex must have disk space from different disks to achieve redundancy.

Mirrored Layout
By adding a mirror to a concatenated or striped volume, you create a mirrored layout. A mirrored volume layout consists of more than one plex that duplicate the information contained in a volume. Each plex in a mirrored layout contains an identical copy of the volume data. In the event of a physical disk failure, when the plex on the failed disk becomes unavailable, the system can continue to operate using the unaffected mirrors.
Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy.
Volume Manager uses true mirrors, which means that all copies of the data are the same at all times. When a write occurs to a volume, all plexes must receive the write before the write is considered complete.
Distribute mirrors across controllers to eliminate the controller as a single point of failure.
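The "true mirror" semantics can be modeled in a toy simulation (Python, illustrative only, not VxVM's implementation): a write returns only after every plex holds the data, and a read can be satisfied by any plex.

```python
class MirroredVolume:
    """Toy model of a mirrored volume: nplex identical byte arrays."""

    def __init__(self, nplex, size):
        self.plexes = [bytearray(size) for _ in range(nplex)]

    def write(self, offset, data):
        # A write completes only after every plex has received it.
        for plex in self.plexes:
            plex[offset:offset + len(data)] = data

    def read(self, offset, length, plex=0):
        # Any plex can satisfy a read; all copies are identical.
        return bytes(self.plexes[plex][offset:offset + length])

vol = MirroredVolume(nplex=2, size=16)
vol.write(4, b"data")
print(vol.read(4, 4, plex=1))
```

Reading from either plex returns the same bytes, which is why mirroring improves read performance and survives the loss of any one copy.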


RAID-5 Layout

[Diagram: A RAID-5 volume with one plex whose subdisks are striped across VxVM disks; parity (P) is distributed across the columns.]

P = Parity: a calculated value used to reconstruct data after disk failure.

RAID-5 Layout
A RAID-5 volume layout has the same attributes as a striped plex, but it includes one additional column of data that is used for parity. Parity provides redundancy. Parity is a calculated value used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is calculated by performing an exclusive OR (XOR) procedure on the data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be re-created from the remaining data and parity information.
RAID-5 volumes keep a copy of the data and calculated parity in a plex that is striped across multiple disks. Parity is spread equally across columns. Given a five-column RAID-5 where each column is 1 GB in size, the RAID-5 volume size is 4 GB. One column of space is devoted to parity, and the remaining four 1-GB columns are used for data.
The default stripe unit size for a RAID-5 volume is 16K. Each column must be the same length but may be made from multiple subdisks of variable length. Subdisks used in different columns must not be located on the same physical disk.
RAID-5 requires a minimum of three disks for data and parity. When implemented as recommended, an additional disk is required for the log.
RAID-5 cannot be mirrored.
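The XOR parity calculation and reconstruction described above can be demonstrated directly (Python for illustration; two-byte blocks stand in for whole stripe units):

```python
def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data columns (tiny blocks for illustration) plus their parity.
data = [b"\x01\x02", b"\x0f\x00", b"\xf0\xff"]
p = parity(data)

# If column 1 is lost, XOR of the survivors and the parity recovers it:
# d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1.
recovered = parity([data[0], data[2], p])
print(recovered == data[1])
```

This also shows why RAID-5 writes are expensive: changing one data block requires recomputing and rewriting the parity block as well.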


Comparing Volume Layouts

Concatenation
  Advantages: Removes size restrictions; better utilization of free space; simplified administration
  Disadvantages: No redundancy; single disk failure causes volume failure.

Striping
  Advantages: Parallel data transfer; load balancing; improved performance (if properly configured)
  Disadvantages: No redundancy; single disk failure causes volume failure.

Mirroring
  Advantages: Improved reliability and availability; improved read performance; fast recovery through logging
  Disadvantages: Requires more disk space; slightly slower write performance

RAID-5
  Advantages: Redundancy through parity; requires less space than mirroring; improved read performance; fast recovery through logging
  Disadvantages: Slower write performance than mirroring; poor performance after a disk failure

Comparing Volume Layouts

Concatenation: Advantages
- Removes size restrictions: Concatenation removes the restriction on size of storage devices imposed by physical disk size.
- Better utilization of free space: Concatenation enables better utilization of free space on disks by providing for the ordering of available discrete disk space on multiple disks into a single addressable volume.
- Simplified administration: System administration complexity is reduced because making snapshots and mirrors uses any size space, and volumes can be increased in size by any available amount.

Concatenation: Disadvantages
- No protection against disk failure: Concatenation does not protect against disk failure. A single disk failure results in the failure of the entire volume.

Striping: Advantages
- Improved performance through parallel data transfer: Improved performance is obtained by increasing the effective bandwidth of the I/O path to the data. This may be achieved by a single volume I/O operation spanning across a number of disks or by multiple concurrent volume I/O operations to more than one disk at the same time.
- Load balancing: Striping is also helpful in balancing the I/O load from multiuser applications across multiple disks.


Striping: Disadvantages
- No redundancy: Striping alone offers no redundancy or recovery features.
- Disk failure: Striping a volume increases the chance that a disk failure results in failure of that volume. For example, if you have three volumes striped across two disks, and one of the disks is used by two of the volumes, then if that one disk goes down, both volumes go down.

Mirroring: Advantages
- Improved reliability and availability: With concatenation or striping, failure of any one disk makes the entire plex unusable. With mirroring, data is protected against the failure of any one disk. Mirroring improves the reliability and availability of a striped or concatenated volume.
- Improved read performance: Reads benefit from having multiple places from which to read the data.

Mirroring: Disadvantages
- Requires more disk space: Mirroring requires twice as much disk space, which can be costly for large configurations. Each mirrored plex requires enough space for a complete copy of the volume's data.
- Slightly slower write performance: Writing to volumes is slightly slower, because multiple copies have to be written in parallel. The overall time the write operation takes is determined by the time needed to write to the slowest disk involved in the operation.
  The slower write performance of a mirrored volume is not generally significant enough to decide against its use. The benefit of the resilience that mirrored volumes provide outweighs the performance reduction.

RAID-5: Advantages
- Redundancy through parity: With a RAID-5 volume layout, data can be re-created from remaining data and parity in case of the failure of one disk.
- Requires less space than mirroring: RAID-5 stores parity information, rather than a complete copy of the data.
- Improved read performance: RAID-5 provides similar improvements in read performance as in a normal striped layout.
- Fast recovery through logging: RAID-5 logging minimizes recovery time in case of disk failure.

RAID-5: Disadvantages
- Slow write performance: The performance overhead for writes can be substantial, because a write can involve much more than simply writing to a data block. A write can involve reading the old data and parity, computing the new parity, and writing the new data and parity. If you have more than twenty percent writes, do not use RAID-5.
- Very poor performance after a disk failure: After one column fails, all I/O performance goes down. This is not the case with mirroring, where a disk failure does not have any significant effect on performance.


Selecting a Layout Type: VEA

Specify volume attributes.

[Screenshot: New Volume wizard page with fields for Volume name, Comment, and Size; Layout options: Concatenated, Striped, RAID-5, Concatenated Mirrored, Striped Mirrored; Mirror Info options: Mirrored, Enable FastResync, Initialize zero.]

Creating Volumes with Various Layouts

You can create volumes with a variety of layouts. In VEA, in the Specify Volume Attributes window, select:
- Layout: Select a layout type from the group of options. The default layout is concatenated.
  - Concatenated: The volume is created using one or more regions of specified disks.
  - Striped: The volume is striped across two or more disks. The default number of columns across which the volume is striped is two, and the default stripe unit size is 128 sectors (64K) on Solaris, AIX, and Linux; 128 sectors (128K) on HP-UX. You can specify different values.
  - Concatenated Mirrored and Striped Mirrored: These options denote layered volume layouts.
- Mirror Info:
  - Mirrored: Mirroring is recommended. To mirror the volume, mark the Mirrored check box. Only striped or concatenated volumes can be mirrored. RAID-5 volumes cannot be mirrored.
  - Total mirrors: Type the total number of mirrors for the volume. A volume can have up to 32 plexes; however, the practical limit is 31. One plex is reserved by VxVM to perform restructuring or relocation operations.

4-9

Lesson 4 Selecting Volume Layouts


COP/right

'~~.2006

Svmantoc

Corporation

Ail right~

reserved

svrnantec

Concatenated Volume: CLI

To create a concatenated volume:

vxassist -g datadg make datavol 10g

This command creates a concatenated volume called datavol, with a length of 10 gigabytes, in the disk group datadg, using any available disks.

Creating a Concatenated Volume: CLI

By default, vxassist creates a concatenated volume that uses one or more sections of disk space. The vxassist command attempts to locate sufficient contiguous space on one disk for the volume. However, if necessary, the volume is spanned across multiple disks. VxVM selects the disks on which to create the volume.
Note: To guarantee that a concatenated volume is created, include the layout=nostripe attribute in the vxassist make command. Without the layout attribute, the default layout is used, which may have been changed by the creation of the /etc/default/vxassist file.
For example:

vxassist -g datadg make datavol 10g layout=nostripe

If you want the volume to reside on specific disks, you can designate the disks by adding the disk media names to the end of the command. More than one disk can be specified.

vxassist [-g diskgroup] make volume_name length [disks...]

Striped Volume: CLI

To create a striped volume:

vxassist -g diskgroup make volume_name length \
  layout=stripe [ncol=n] [stripeunit=size] [disks...]

Examples:

vxassist -g acctdg make payvol 2g \
  layout=stripe ncol=3 acctdg01 acctdg02 acctdg03

vxassist -g acctdg make expvol 2g \
  layout=stripe ncol=3 stripeunit=256k !acctdg04

Creating a Striped Volume: CLI

To create a striped volume, you add the layout type and other attributes to the vxassist make command.
- layout=stripe designates the striped layout.
- ncol=n designates the number of stripes, or columns, across which the volume is created. This attribute has many aliases. For example, you can also use nstripe=n or stripes=n.
  The minimum number of stripes in a volume is 2, and the maximum is 8. You can edit these minimum and maximum values in /etc/default/vxassist using the min_columns and max_columns attributes.
- stripeunit=size specifies the size of the stripe unit to be used. The default is 64K.
- To stripe the volume across specific disks, you can specify the disk media names at the end of the command. The order in which disks are listed on the command line does not imply any ordering of disks within the volume layout.
- To exclude a disk or list of disks, add an exclamation point (!) before the disk media names. For example, !datadg01 specifies that the disk datadg01 should not be used to create the volume.


Mirrored Volume: CLI

To create a mirrored volume:

vxassist -g diskgroup [-b] make volume_name length \
  layout=mirror [nmirror=number]

Examples:

vxassist -g datadg make datavol 5g \
  layout=mirror                            (Concatenated and mirrored)

vxassist -g datadg make datavol 5g \
  layout=stripe,mirror nmirror=3           (Specify three mirrors.)

vxassist -g datadg -b make datavol 5g \
  layout=stripe,mirror nmirror=3           (Run process in background.)

Creating a Mirrored Volume: CLI

To mirror a concatenated volume, you add the layout=mirror attribute in the vxassist command.
- To specify more than two mirrors, you add the nmirror attribute.
- When creating a mirrored volume, the volume initialization process requires that the mirrors be synchronized. The vxassist command normally waits for the mirrors to be synchronized before returning to the system prompt. To run the process in the background, you add the -b option.

Creating a Mirrored and Logged Volume: CLI

When you create a mirrored volume, you can add a dirty region log by adding the logtype=drl attribute:

vxassist -g diskgroup [-b] make volume_name length layout=mirror logtype=drl [nlog=n]

- Specify logtype=drl to enable dirty region logging. A log plex that consists of a single subdisk is created.
- If you plan to mirror the log, you can add more than one log plex by specifying a number of logs using the nlog=n attribute, where n is the number of logs.

To create a concatenated volume that is mirrored and logged:

vxassist -g datadg make datavol 5m layout=mirror logtype=drl

Note: Dirty region logs are covered in a later lesson.


Estimating Volume Size: CLI

To determine the largest possible size for a volume:
vxassist -g diskgroup maxsize attributes

Example:
vxassist -g datadg maxsize layout=raid5
Maximum volume size: 376832 (184Mb)

To determine how much a volume can expand:
vxassist -g diskgroup maxgrow volume_name

Example:
vxassist -g datadg maxgrow datavol
Volume datavol can be extended by 366592 to 1677312 (819Mb)

Estimating Volume Size: CLI


The vxassist command can determine the largest possible size for a volume that
can currently be created with a given set of attributes. vxassist can also
determine how much an existing volume can be extended under the current
conditions.
The maxsize command does not create the volume but returns an estimate of the
maximum volume size. The output value is displayed in sectors, by default.
If the volume with the specified attributes cannot be created, an error message is
returned:

VxVM vxassist ERROR V-5-1-752 No volume can be created within the given constraints

The maxgrow command does not resize the volume but returns an estimate of
how much an existing volume can be expanded. The output indicates the amount
by which the volume can be increased and the total size to which the volume can
grow. The output is displayed in sectors, by default.
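Because both commands report sizes in sectors by default, it can help to convert the figures by hand. The sketch below assumes the common 512-byte sector size; verify the sector size on your platform before relying on it.

```shell
# Convert the 376832-sector figure from the maxsize example into MB.
# Assumes 512-byte sectors (the usual default; check your platform).
sectors=376832
mb=$((sectors * 512 / 1024 / 1024))
echo "${sectors} sectors = ${mb} MB"
```

The result matches the "(184Mb)" that vxassist prints alongside the sector count in the example above.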


Observing Volume Layouts in VEA

Highlight a volume and select Actions->Layout View.
The Volume Layout window shows the selected volume's plexes and subdisks, with details such as type (for example, Striped), size, number of columns, stripe unit size, and state (for example, Attached, Healthy).
Select View->Horizontal or View->Vertical to change the orientation of the diagram.

Volume Layout Window


The Volume Layout window displays a graphical view of the selected volume's
layout, components, and properties. You can select objects or perform tasks on
objects in the Volume Layout window. This window is dynamic, so the objects
displayed in this window are updated automatically when the volume's properties
change.
To display the Volume Layout window, highlight a volume and select
Actions->Layout View.
The View menu changes the way objects are displayed in this window. Select
View->Horizontal to display a horizontal layout and View->Vertical to display
a vertical layout.


Volume to Disk Mapping Window

Highlight a disk group and select Actions->Disk/Volume Map.
The window shows a grid of the disk group's volumes against its disks, with subdisks such as datadg01-01 and datadg02-01 listed under each disk.
Click a triangle to show or hide subdisks. Click a dot to highlight an intersecting row and column.

Volume to Disk Mapping Window


The Volume to Disk Mapping window displays a tabular view of volumes and their
relationships to underlying disks. To display the Volume to Disk Mapping window,
highlight a disk group and select Actions->Disk/Volume Map.
To view subdisk layouts, click the triangle button to the left of the disk name, or
select View->Expand All.
To help identify the row and column headings in a large grid, click a dot in the grid
to highlight the intersecting row and column.


Volume View Window

Highlight a volume and select Actions->Volume View.
The window lists each volume with its type, size, number of mirrors, and whether it is logged (for example, datavol01: Concat, 1.000 GB, 1 mirror, not logged; datavol02: Striped, 1.000 GB, 1 mirror, not logged).

Volume View Window


The Volume View window displays characteristics of the volumes on the disks. To
display the Volume View window, select a volume or disk group and select
Actions->Volume View.
Display options in the Volume View window include:
Expand: Click the Expand button to display detailed information about
volumes.
New volume: Click the New Volume button to invoke the New Volume
wizard.


Disk View Window

Highlight a volume and select Actions->Disk View.
The window shows each disk (for example, datadg01 and datadg02) with its subdisks, the size and free space of each disk, and the layout and health of each volume, along with Expand, Vol Details, and Projection buttons.

Disk View Window


The Disk View window displays a close-up graphical view of the layout of
subdisks in a volume. To display the Disk View window, select a volume or disk
group and select Actions->Disk View.
Display options in the Disk View window include:
Expand: Click the Expand button to display detailed information about all
disks in the Disk View window.
Vol Details: Click the Vol Details button to include volume names, layout
types, and volume status for each subdisk.
Projection: Click the Projection button to highlight objects associated with a
selected subdisk or volume. Projection shows the relationships between objects
by highlighting objects that are related to or part of a specific object.
Caution: You can move subdisks in the Disk View window by dragging subdisk
icons to different disks or to gaps within the same disk. Moving subdisks
reorganizes volume disk space and must be performed with care.


How Do Layered Volumes Work?

Volumes are constructed from subvolumes. The top-level volume is accessible to applications. Each plex of the top-level volume is built from subvolumes; each subvolume in turn contains its own plexes and subdisks on the underlying disks.
Advantages: improved redundancy; faster recovery times.
Disadvantages: requires more VxVM objects.


What Is a Layered Volume?
VxVM provides two ways to mirror your data:
Original VxVM mirroring: With the original method of mirroring, data is
mirrored at the plex level. The loss of a disk results in the loss of a complete
plex. A second disk failure could result in the loss of a complete volume if the
volume has only two mirrors. To recover the volume, the complete volume
contents must be copied from backup.
Enhanced mirroring: VxVM 3.0 introduced support for an enhanced type of
mirrored volume called a layered volume. A layered volume is a virtual
Volume Manager object that mirrors data at a more granular level. To do this,
VxVM creates subvolumes from traditional bottom-layer objects, or subdisks.
These subvolumes function much like volumes and have their own associated
plexes and subdisks.
With this method of mirroring, data is mirrored at the column or subdisk level.
Loss of a disk results in the loss of a copy of a column or subdisk within a plex.
Further disk losses may occur without affecting the complete volume. Only the
data contents of the column or subdisk affected by the loss of the disk need to
be recovered. This recovery can be performed from an up-to-date mirror of the
failed disk.
Note: Only VxVM versions 3.0 and later support layered volumes. To create a
layered volume, you must upgrade the disk group that owns the layered
volume to version 60 or later.
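To check whether a disk group is at a version that supports layered volumes, you can inspect and, if necessary, upgrade it from the command line. This is a sketch for a host with VxVM installed (datadg is an example disk group name), so no output is shown:

```shell
# Display the disk group's details, including its version number.
vxdg list datadg

# Upgrade the disk group to the latest version supported by the
# installed VxVM release. Note that after an upgrade, the disk
# group can no longer be imported by older VxVM releases.
vxdg upgrade datadg
```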


How Do Layered Volumes Work?


In a regular mirrored volume, top-level plexes consist of subdisks. In a layered
volume, these subdisks are replaced by subvolumes. Each subvolume is associated
with a second-level volume. This second-level volume contains second-level
plexes, and each second-level plex contains one or more subdisks.
In a layered volume, only the top-level volume is accessible as a device for use by
applications.
Note: You can also build a layered volume from the bottom up by using the
vxmake command. For more information, see the vxmake(1m) manual page.

Layered Volumes: Advantages


Improved redundancy: Layered volumes tolerate disk failure better than
nonlayered volumes and provide improved data redundancy.
Faster recovery times: If a disk in a layered volume fails, a smaller portion of the
redundancy is lost, and recovery and resynchronization times are usually quicker
than for a nonlayered volume that spans multiple drives.
For a stripe-mirror volume, recovery of a single subdisk failure requires
resynchronization of only the lower plex, not the top-level plex. For a mirror-stripe
volume, recovery of a single subdisk failure requires resynchronization of the
entire plex (full volume contents) that contains the subdisk.

Layered Volumes: Disadvantages

Requires more VxVM objects: Layered volumes consist of more VxVM objects
than nonlayered volumes. Therefore, layered volumes may fill up the disk group
configuration database sooner than nonlayered volumes. When the configuration
database is full, you cannot create more volumes in the disk group.
With SF 5.0, the default size of the private region is 32 MB. Each VxVM object
requires about 250 bytes.
Note: On the Solaris platform, in the older sliced disk format, the private region
size is rounded up to the cylinder boundary. With modern disks with large cylinder
sizes, this size can be quite large.
The private region can be made larger when a disk is initialized. The size cannot be
changed once disks have been initialized.
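Those two numbers give a rough upper bound on how many VxVM records a disk group can hold. A back-of-the-envelope calculation, assuming the 32 MB default private region and roughly 250 bytes per object as noted above:

```shell
# Rough capacity estimate: private region size / bytes per object.
private_region_bytes=$((32 * 1024 * 1024))   # 32 MB default in SF 5.0
bytes_per_object=250                          # approximate, per the text
objects=$((private_region_bytes / bytes_per_object))
echo "approx. ${objects} objects per disk group"
```

On the order of 130,000 records, which is why only disk groups with very many layered volumes tend to hit the limit.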

Traditional Mirroring

In the example, one plex is striped across disk01 and disk03, and its mirror is striped across disk02 and disk04 (sd = subdisk). When one disk in each plex fails, both plexes are detached and the volume status is Down.

Comparing Regular Mirroring with Enhanced Mirroring


To understand the purpose and benefits of layered volume layouts, compare
regular mirroring with the enhanced mirroring of layered volumes in a disk failure
scenario.

Regular Mirroring

The example illustrates a regular mirrored volume layout called a mirror-stripe
layout. Data is striped across two disks, disk01 and disk03, to create one plex,
and that plex is mirrored and striped across two other disks, disk02 and disk04.
If two drives fail, the volume survives 2 out of 6 (1/3) times. As more subdisks are
added to each plex, the odds of a traditional volume surviving a two-disk failure
approach (but never equal) 50 percent.
If a disk fails in a mirror-stripe layout, the entire plex is detached, and redundancy
is lost on the entire volume. When the disk is replaced, the entire plex must be
brought up-to-date, or resynchronized.


Layered Volumes

In the stripe-mirror example, disk01 through disk04 are paired into mirrored subvolumes (X = failed disk). When two disks fail, the volume survives 4/6, or 2/3 of the time.

Layered Volumes
The example illustrates a layered volume layout called a stripe-mirror layout. In
this layout, VxVM creates underlying volumes that mirror each subdisk. These
underlying volumes are used as subvolumes to create a top-level volume that
contains a striped plex of the data.
If two drives fail, the volume survives 4 out of 6 (2/3) times. In other words, the
use of layered volumes reduces the risk of failure by 50 percent without the
need for additional hardware. As more subvolumes are added, the odds of a
volume surviving a two-disk failure approach 100 percent. For volume failure to
occur, both subdisks that compose a subvolume must fail. If a disk fails, only the
failing subdisk must be detached, and only that portion of the volume loses
redundancy. When the disk is replaced, only a portion of the volume needs to be
recovered, which takes less time.

Failed Subdisks    Stripe-Mirror (Layered)    Mirror-Stripe (Nonlayered)
1 and 2            Down                       Down
1 and 3            Up                         Up
1 and 4            Up                         Down
2 and 3            Up                         Down
2 and 4            Up                         Up
3 and 4            Down                       Down
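The survival counts in the table can be checked by brute force. The sketch below enumerates all six two-disk failure pairs for the four-disk example, with plexes {1,3} and {2,4} for mirror-stripe and mirrored pairs {1,2} and {3,4} for stripe-mirror:

```shell
# Count surviving two-disk failure combinations for both layouts.
survive_ms=0; survive_sm=0; total=0
for a in 1 2 3; do
  b=$((a + 1))
  while [ "$b" -le 4 ]; do
    total=$((total + 1))
    # mirror-stripe survives only when both failures land in the
    # same plex, leaving the other plex intact: pairs {1,3} or {2,4}.
    case "${a}${b}" in 13|24) survive_ms=$((survive_ms + 1)) ;; esac
    # stripe-mirror fails only when both halves of one mirrored
    # subvolume fail: pairs {1,2} or {3,4}; it survives the rest.
    case "${a}${b}" in 12|34) : ;; *) survive_sm=$((survive_sm + 1)) ;; esac
    b=$((b + 1))
  done
done
echo "mirror-stripe survives ${survive_ms}/${total}"
echo "stripe-mirror survives ${survive_sm}/${total}"
```

This prints 2/6 for mirror-stripe and 4/6 for stripe-mirror, matching the table.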


Terminology for Mirrored Layouts

The four types of mirroring in VxVM:
mirror-concat (nonlayered, RAID-0+1)
- The top-level volume contains more than one plex (mirror).
- Plexes are concatenated.
mirror-stripe (nonlayered, RAID-0+1)
- The top-level volume contains more than one plex (mirror).
- Plexes are striped.
concat-mirror (layered, RAID-1+0)
- The top-level volume is a concatenated plex.
- Subvolumes are mirrored.
stripe-mirror (layered, RAID-1+0)
- The top-level volume is a striped plex.
- Subvolumes are mirrored.

Layered Volume Layouts


In general, use regular mirrored layouts for smaller volumes and layered layouts
for larger volumes. By default in VxVM, a volume larger than 1 GB is created as a
layered volume, unless you specify otherwise. Before you create layered volumes,
you need to understand the terminology that defines the different types of mirrored
layouts in VxVM.
mirror-concat: This layout mirrors data across concatenated plexes. The
concatenated plexes can consist of subdisks of different sizes. When you create
a simple mirrored volume that is less than 1 GB in size, a nonlayered mirrored
volume is created by default.
mirror-stripe: This layout mirrors data across striped plexes. The striped
plexes can consist of different numbers of subdisks.
concat-mirror: This volume layout contains a single plex consisting of one
or more concatenated subvolumes. Each subvolume consists of two
concatenated plexes (mirrors), which consist of one or more subdisks. If you
have two subdisks in the top-level plex, a second subvolume is created, which
is used as the second concatenated subdisk of the plex. In the VEA interface,
the GUI term used for a layered, concatenated layout is Concatenated
Mirrored. These volumes require at least two disks.
stripe-mirror: This volume layout stripes data across mirrored volumes.
The difference between stripe-mirror and concat-mirror is that the top-level
plex is striped rather than concatenated. Each mirrored subvolume must have
the same number of disks. In the VEA interface, the GUI term used for a
layered, striped layout is Striped Mirrored. Striped Mirrored volumes require at
least four disks.

Creating Layered Volumes

VEA: In the New Volume wizard, select Concatenated Mirrored or
Striped Mirrored as the volume layout.

vxassist make:
vxassist -g datadg make datavol 10g layout=stripe-mirror
vxassist -g datadg make datavol 10g layout=concat-mirror

Note: To create simple mirrored volumes (nonlayered), you can use:
layout=mirror-concat
layout=mirror-stripe

Creating a Layered Volume: VEA


In the New Volume wizard, select one of the two layered volume layout types:
Concatenated Mirrored: The Concatenated Mirrored layout refers to a
concat-mirror volume.
Striped Mirrored: The Striped Mirrored layout refers to a stripe-mirror
volume.

Creating a Layered Volume: CLI

In the vxassist make syntax, you can specify any of the following layout
types:
To create layered volumes:
layout=concat-mirror
layout=stripe-mirror
To create simple mirrored volumes:
layout=mirror-concat
layout=mirror-stripe
For striped volumes, you can specify other attributes, such as
ncol=number_of_columns and stripeunit=size.


Viewing Layered Volumes

vxprint -rt vol01

Top-level volume and plex:
v  vol01                          ENABLED  ACTIVE  ...
pl vol01-03     vol01             ENABLED  ACTIVE  ...

Subvolumes, second-level volumes, plexes, and subdisks:
sv vol01-S01    vol01-03   vol01-L01  1 ...
v2 vol01-L01                      ENABLED  ACTIVE  ...
p2 vol01-P01    vol01-L01         ENABLED  ACTIVE  ...
s2 datadg05-02  vol01-P01  datadg05   0 ...
p2 vol01-P02    vol01-L01         ENABLED  ACTIVE  ...
s2 datadg03-02  vol01-P02  datadg03   0 ...
sv vol01-S02    vol01-03   vol01-L02  1 ...
...

Viewing a Layered Volume: VEA


To view the layout of a layered volume, you can use any of the methods for
displaying volume information, including the:
Object views in the main window
Disk View window
Volume View window
Volume to Disk Mapping window
Volume Layout window

Viewing a Layered Volume: CLI


To view the configuration of a layered volume from the command line, you use the
-r option of the vxprint command. The -r option ensures that subvolume
configuration information for a layered volume is displayed. The -L option is also
useful for displaying layered volume information when used with -r: -L displays
the related records of a volume containing subvolumes, but grouping is performed
under any volume.


Allocating Storage for Volumes

With storage attributes, you can specify:
Which storage devices are used by the volume
How volumes are mirrored across devices
When creating a volume, you can:
Include specific disks, controllers, enclosures, targets, or trays to be used for the volume.
Exclude specific disks, controllers, enclosures, targets, or trays from being used for the volume.
Mirror volumes across specific controllers, enclosures, targets, or trays. (By default, VxVM mirrors across different disks.)
Specifying Storage Attributes for Volumes
VxVM selects the disks on which each volume resides automatically, unless you
specify otherwise. To create a volume on specific disks, you can designate those
disks when creating a volume. By specifying storage attributes when you create a
volume, you can:
Include specific disks, controllers, enclosures, targets, or trays to be used for
the volume.
Exclude specific disks, controllers, enclosures, targets, or trays from being
used for the volume.
Mirror volumes across specific controllers, enclosures, targets, or trays. (By
default, VxVM does not permit mirroring on the same disk.)
By specifying storage attributes, you can ensure a high availability environment.
For example, you can permit mirroring of a volume only on disks connected to
different controllers and eliminate the controller as a single point of failure.
Note: When creating a volume, all storage attributes that you specify for use must
belong to the same disk group. Otherwise, VxVM does not use these storage
attributes to create a volume.


Storage Attributes: Methods

VEA: In the New Volume wizard, select "Manually select disks for use by this
volume" and select the disks and storage allocation policy.

CLI: Add storage attributes to vxassist make:
vxassist -g diskgroup make volume_name length [layout=layout] [mirror=ctlr|enclr|target] [!]storage_attributes

Include/Exclude examples:
Disks: datadg02
Controllers: ctlr:c2
Enclosures: enclr:emc1
Targets: target:c2t4
Trays: c2 tray2

Mirror across controllers: mirror=ctlr
Mirror across enclosures: mirror=enclr
Mirror across targets: mirror=target

For example, to exclude all disks that are on controller c2:
vxassist -g datadg make datavol 5g !ctlr:c2

Specifying Storage Attributes: VEA

You can specify that the volume is to be mirrored or striped across controllers,
enclosures, targets, or trays.
(In the wizard, the Mirror Across and Stripe Across drop-down lists offer
Controller, Tray, Target, and Enclosure; an Ordered check box enables ordered
allocation.)
Note: A tray is a set of disks within certain Sun arrays. Note that this option may
not be available on other platforms.
To exclude a disk, controller, enclosure, target, or tray, you add the exclusion
symbol (!) before the storage attribute. For example, to exclude datadg02 from
volume creation, you use the format: !datadg02.
For example, to create a volume on specific disks by creating a 5-GB volume
called datavol on datadg03 and datadg04:

vxassist -g datadg make datavol 5g datadg03 datadg04

Ordered Allocation

Ordered allocation enables you to control how columns and mirrors are laid out
when creating a volume.
With ordered allocation, storage administrators can override the built-in
allocation defaults.

VEA: In the New Volume wizard, select "Manually select disks for use by this
volume." Select the disks and the storage allocation policy and mark the
Ordered check box.

CLI: Add the -o ordered option:
vxassist -g diskgroup [-o ordered] make volume_name length [layout=layout] ...
Specifying Ordered Allocation of Storage for Volumes


In addition to specifying which storage devices VxVM uses to create a volume,
you can also specify how the volume is distributed on the specified storage. By
using the ordered allocation feature of VxVM, you can control how volumes are
laid out on specified storage.
For example, if you are creating a three-column mirror-stripe volume using six
specified disks, VxVM creates column 1 on the first disk, column 2 on the second
disk, and column 3 on the third disk. Then, the mirror is created using the fourth,
fifth, and sixth specified disks.
Without the ordered allocation option, VxVM selects disks in several ways,
including the following:
vxconfigd selects a disk in the group which has no subdisks.
vxconfigd selects subdisks for a striped plex from disks already associated
into striped plexes rather than disks associated into concat plexes.
vxconfigd selects a disk with an existing log plex for the log plex of another
volume.
VxVM has default methods for space allocation, as indicated by the fsgen
usage type (UTYPE) in the vxprint output. Storage administrators can override
the built-in defaults:
First, VxVM concatenates subdisks in columns.
Second, VxVM groups columns in striped plexes.
Finally, VxVM forms mirrors.
Use the -o ordered option with the vxassist make command.


Ordered Allocation: Example

Specifying the order of columns:
vxassist -g datadg -o ordered make datavol 2g layout=stripe \
    ncol=3 datadg03 datadg02 datadg01

Without using ordered allocation (no guarantee of disk order):
vxassist -g datadg make datavol 2g layout=stripe \
    ncol=3 datadg03 datadg02 datadg01

Example 1: Order of Columns

To create a 10-GB striped volume, called datavol, with three columns striped
across three disks:

vxassist -g datadg -o ordered make datavol 10g layout=stripe ncol=3 datadg03 datadg02 datadg01

Because the -o ordered option is specified, column 1 is placed on datadg03,
column 2 is placed on datadg02, and column 3 is placed on datadg01.
Without the -o ordered option, column 1 would be placed on datadg01, and
so on.

Example 2: Order of Mirrors

To create a mirrored volume using datadg02 and datadg04:

vxassist -g datadg -o ordered make datavol 10g layout=mirror datadg04 datadg02

Because the -o ordered option is specified, the first mirror is placed on
datadg04, and the second mirror is placed on datadg02. Without this option,
the first mirror could be placed on either disk.
Note: There is no logical difference between the mirrors. However, by controlling
the order of mirrors, you can associate plex names with specific disks (for example,
datavol-01 with datadg02 and datavol-02 with datadg04). This level of
control is significant when you perform mirror breakoff and disk group split
operations. You can establish conventions that indicate to you which specific disks
are used for the mirror breakoff operations.
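To confirm where ordered allocation actually placed each column and mirror, you can inspect the volume's record hierarchy afterward. A sketch for a live VxVM host, using the example names from above:

```shell
# List the volume with its plexes and subdisks; the disk column of
# each sd record shows which disk every column/mirror landed on.
vxprint -g datadg -ht datavol
```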


Lesson Summary

Key Points
This lesson described the advantages and disadvantages of volume layouts
supported by VxVM. You learned how to create concatenated, striped,
mirrored, and layered volumes. In addition, you learned how to allocate
storage for a volume by specifying storage attributes and ordered allocation.

Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Release Notes

Lab 4: Selecting Volume Layouts

In this lab, you create simple concatenated volumes, striped volumes, and
mirrored volumes. You also practice creating a layered volume and using
ordered allocation while creating volumes.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 4: Selecting Volume Layouts."
Appendix B provides complete lab instructions and solutions: "Lab 4 Solutions: Selecting Volume Layouts."


Lesson 5
Making Basic Configuration Changes

Lesson Introduction
Lesson 1: Virtual Objects
Lesson 2: Installation and Interfaces
Lesson 3: Creating a Volume and File System
Lesson 4: Selecting Volume Layouts
Lesson 5: Making Basic Configuration Changes
Lesson 6: Administering File Systems
Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives


After completing this lesson, you will
be able to:

Topic

5-2

Topic 1: Administering
Mirrored Volumes

Add a mirror to and remove a mirror from an


existing volume, add a log, and change the
volume read policy.

Topic 2: Resizing a Volume

Resize an existing volume by using VEA and


from the command line.

Topic 3: Moving Data Between


Systems

Deport a disk group from one system and


import it on another system.

Topic 4: Renaming Disks and


Disk Groups

Rename disks and disk groups.

Topic 5: Managing Old Disk


Group Versions

Upgrade disk groups and convert non-CDS


disk groups to CDS.

VERITAS Storage Foundation

5.0 for UNIX: Fundamentals

Example: Array Structure

LUNs are a virtual presentation. Therefore, you have to take the array
configuration into account to understand where the actual data is placed.
The diagram shows an array with 14 disk slots: twelve disks are paired into six
mirrored RAID groups, two disks serve as spares, and the array presents twelve
array-based LUNs.

Administering Mirrored Volumes

Example Array Structure

In an array, the LUNs are a virtual presentation. Therefore, you cannot know
where in the array the actual data will be put. That means you have no control over
the physical conditions.
The array in the slide contains slots for 14 physical disks, and the configuration
places 12 physical disks in the array. These physical disks are paired together into
6 mirrored RAID groups. In each RAID group, two logical units, or LUNs, are
created. These LUNs appear to hosts as SAN-based SCSI disks. The remaining
two disks are used as spares in case one of the active disks fails.


When to Add a Mirror to a Volume

To add redundancy if it is not provided at the hardware level.
To eliminate the array as a single point of failure (SPOF) by mirroring across arrays.
To provide disaster recovery across sites when there is a SAN connecting two or more sites.
To improve concurrent read performance by adding mirrors with different I/O paths.
To migrate data from one array to another (mirror the original data from the old array to the new array over the SAN).
When to Add a Mirror to a Volume

Without Storage Foundation, moving data from one array to another requires
downtime. Using Storage Foundation, you can mirror to a new array, ensure it is
stable, and then remove the plexes from the old array. No downtime is necessary.
These are the steps for migrating data using Storage Foundation:
1. Add the new array to the SAN.
2. Mirror volumes to the new array.
3. Remove plexes/LUNs from the old array.
4. Remove the old array.
This is useful in many situations, for example, if a company purchases a new array.
With Storage Foundation, you:
1. Add the new array to the SAN.
2. Zone for the server to see the LUNs.
3. Rescan with VEA.
4. Add the LUNs from the new array to the disk group.
5. Mirror the volumes to the new array.
6. Remove the plexes on the old array.
7. Remove the LUNs that are on the old array from the disk group.
This method does not require downtime.
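The migration steps above map onto a handful of commands. This is an illustrative sketch rather than a tested procedure: the disk group (datadg), volume (datavol), disk media names (datadg01, datadg10), and the device name are placeholders that depend on your environment.

```shell
# 1. After zoning, make the new array's LUNs visible to VxVM.
vxdisk scandisks

# 2. Add a disk from the new array to the existing disk group
#    (c3t0d0 is a placeholder device name).
vxdg -g datadg adddisk datadg10=c3t0d0

# 3. Mirror the volume onto the new-array disk.
vxassist -g datadg mirror datavol datadg10

# 4. Once synchronization completes, remove the plex that is
#    on the old-array disk.
vxassist -g datadg remove mirror datavol !datadg01

# 5. Remove the old-array disk from the disk group.
vxdg -g datadg rmdisk datadg01
```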


Adding and Removing Mirrors to a Volume

Adding a Mirror:
By default, a mirror is created with the same plex layout as the original volume.
Only concatenated or striped volumes can be mirrored.
Each mirror must reside on separate disks.
All disks must be in the same disk group.
A volume can have up to 32 plexes, or mirrors.
Adding a mirror requires plex synchronization.

Removing a Mirror:
When a mirror is removed, the space occupied by that mirror can be used elsewhere.

Adding and Removing Mirrors


If a volume was not originally created as a mirrored volume, or if you want to add
additional mirrors, you can add a mirror to an existing volume.
By default, a mirror is created with the same plex layout as the plex already in the
volume. For example, assume that a volume is composed of a single striped plex.
If you add a mirror to the volume, VxVM makes that plex striped, as well. You can
specify a different layout using VEA or from the command line.
A mirrored volume requires at least two disks. You cannot add a mirror to a disk
that is already being used by the volume. A volume can have multiple mirrors, as
long as each mirror resides on separate disks.
Only disks in the same disk group as the volume can be used to create the new
mirror. Unless you specify the disks to be used for the mirror, VxVM
automatically locates and uses available disk space to create the mirror.
A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31.
One plex should be reserved for use by VxVM for background repair operations.

Removing a Mirror

When a mirror (plex) is no longer needed, you can remove it. You can remove a
mirror to provide free space, to reduce the number of mirrors, or to remove a
temporary mirror.
Caution: Removing a mirror results in loss of data redundancy. If a volume only
has two plexes, removing one of them leaves the volume unmirrored.

Lesson 5 Making Basic Configuration Changes    5-5

Adding/Removing Mirrors

VEA:
Select Actions->Mirror->Add.
Select Actions->Mirror->Remove.

vxassist mirror:
vxassist -g diskgroup mirror volume_name [layout=layout_type] [disk_name]
vxassist -g datadg mirror datavol

vxassist remove mirror:
vxassist -g diskgroup remove mirror volume_name [!]dm_name
To remove the plex that contains a subdisk from the disk datadg02:
vxassist -g datadg remove mirror datavol !datadg02
To remove the plex that uses any disk except datadg02:
vxassist -g datadg remove mirror datavol datadg02

Adding a Mirror: VEA

Select: The volume to be mirrored
Navigation path: Actions->Mirror->Add
Input:
Number of mirrors to add: Type a number. Default is 1.
Choose the layout: Select from Concatenated or Striped.
Select disks to use: VxVM can select the disks, or you can
choose specific disks. You can also mirror or stripe across
controllers, trays, targets, or enclosures.
To verify that a new mirror was added, view the total number of copies of the
volume as displayed in the main window. The total number of copies is increased
by the number of mirrors added.

Adding a Mirror: CLI

To add a mirror onto a specific disk, you specify the disk name in the command:
vxassist -g datadg mirror datavol datadg03
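As a quick sketch of the layout option described above (disk group, volume, and disk names are hypothetical, and the `run` helper only echoes each command so the sequence can be reviewed without VxVM installed):

```shell
# Hypothetical names: datadg, datavol, datadg03, datadg04.
# run() echoes rather than executes; this is a sketch, not live commands.
run() { echo "+ $*"; }

# Add a striped mirror on two specific disks, overriding the default
# (same-as-existing-plex) layout:
run vxassist -g datadg mirror datavol layout=stripe datadg03 datadg04

# Confirm the new plex and watch the synchronization task:
run vxprint -g datadg -ht datavol
run vxtask list
```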

Removing a Mirror: CLI

To remove a mirror, use vxassist remove mirror:
vxassist -g diskgroup remove mirror volume_name

You can also use vxplex:
vxplex -g diskgroup -o rm dis plex_name


Adding a Dirty Region Log (DRL) to a Volume

Log keeps track of changed regions.
If the system fails, only the changed regions of the volume
must be recovered.
Not enabled by default. When enabled, one log is created.
You can create additional logs to mirror log data.

VEA:
Actions->Log->Add
Actions->Log->Remove

vxassist:
vxassist -g diskgroup addlog volume_name [logtype=drl] [nlog=n] [attributes]
vxassist -g diskgroup remove log volume_name [nlog=n]
vxassist -g datadg addlog datavol logtype=drl nlog=2
vxassist -g datadg remove log datavol
Adding a Log to a Volume

Logging in VxVM
By enabling logging, VxVM tracks changed regions of a volume. Log information
can then be used to reduce plex synchronization times and speed the recovery of
volumes after a system failure. Logging is an optional feature, but is highly
recommended, especially for large volumes.

Dirty Region Logging
Dirty region logging (DRL) is used with mirrored volume layouts. DRL keeps
track of the regions that have changed due to I/O writes to a mirrored volume.
Prior to every write, a bit is set in a log to record the area of the disk that is being
changed. In case of system failure, DRL uses this information to recover only the
portions of the volume that need to be recovered.
If DRL is not used and a system failure occurs, all mirrors of the volumes must be
restored to a consistent state by copying the full contents of the volume between its
mirrors. This process can be lengthy and I/O intensive.
When you enable logging on a mirrored volume, one log plex is created by default.
The log plex uses space from disks already used for that volume, or you can
specify which disk to use. To enhance performance, you should consider placing
the log plex on a disk that is not already in use by the volume.
To create a volume that is mirrored and logged:
vxassist -g datadg make datavol 5m layout=mirror logtype=drl
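The placement advice above can be sketched as follows. The disk datadg05 is a hypothetical disk not yet used by the volume, and the `run` helper only echoes each command so the sketch is safe to review without VxVM:

```shell
# run() echoes rather than executes; names (datadg, datavol, datadg05)
# are hypothetical.
run() { echo "+ $*"; }

# Add a DRL log plex on a disk not already used by the volume, then
# display the volume to confirm the new log plex appears:
run vxassist -g datadg addlog datavol logtype=drl datadg05
run vxprint -g datadg -ht datavol
```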


Volume Read Policies

[Diagram: four read policies illustrated]
Round Robin: Read I/O is satisfied from each plex in turn.
Preferred Plex: Read I/O is directed to a named plex.
Selected Plex (default method): If there is exactly one enabled striped plex,
reads go to that plex; otherwise, round robin is used.
Siteread: Read I/O from a host at Site A is satisfied from plexes at Site A.

Volume Read Policies with Mirroring

One of the benefits of mirrored volumes is that you have more than one copy of the
data from which to satisfy read requests. The read policy for a volume determines
the order in which plexes are accessed during I/O operations.
VxVM has four read policies that you can specify to satisfy read requests:
Round robin: VxVM reads each plex in turn in "round-robin" manner for
each nonsequential I/O detected. Sequential access causes only one plex to be
accessed in order to take advantage of drive or controller read-ahead caching
policies. If a read is within 256K of the previous read, then the read is sent to
the same plex.
Preferred plex: VxVM reads first from a plex that has been named as the
preferred plex. Read requests are satisfied from one specific plex, presumably
the plex with the highest performance. If the preferred plex fails, another plex
is accessed. For example, if you are mirroring in a campus environment and
the local plex would be faster than the remote one, setting the local plex as the
preferred plex would increase performance.
Selected plex: This is the default read policy. Under the selected plex policy,
Volume Manager chooses an appropriate read policy based on the plex
configuration to achieve the greatest I/O throughput. If the mirrored volume
has exactly one enabled striped plex, the read policy defaults to that plex;
otherwise, it defaults to a round-robin read policy.
Siteread: VxVM reads preferentially from plexes at the locally defined site.
This is the default policy for volumes in disk groups where site consistency has
been enabled.
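The campus-mirror scenario above can be sketched as commands. Plex and volume names are hypothetical, and the `run` helper only echoes each command:

```shell
# run() echoes rather than executes; datadg, datavol, and datavol-01
# (the local plex) are hypothetical names.
run() { echo "+ $*"; }

# Prefer the local plex; VxVM falls back to another plex if it fails:
run vxvol -g datadg rdpol prefer datavol datavol-01

# Revert to the default selected-plex behavior later:
run vxvol -g datadg rdpol select datavol
```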


Setting the Volume Read Policy

VEA:
Actions->Set Volume Usage
Select from Based on layouts, Round robin, or Preferred.

vxvol rdpol:
vxvol -g diskgroup rdpol policy volume_name [plex]

Examples:
To set the read policy to round robin:
vxvol -g datadg rdpol round datavol
To set the read policy to read from a preferred plex:
vxvol -g datadg rdpol prefer datavol datavol-02
To set the read policy to select a plex based on layouts:
vxvol -g datadg rdpol select datavol

Changing the Volume Read Policy: VEA

Select: A volume
Navigation path: Actions->Set Volume Usage
Input: Volume read policy: Select Based on layouts (default: the
selected plex method), Round robin, Site local read, or Preferred.
If you select Preferred, then you can also select the preferred plex
from the list of available plexes.

Changing the Volume Read Policy: CLI

vxvol -g diskgroup rdpol round volume_name
vxvol -g diskgroup rdpol prefer volume_name preferred_plex
vxvol -g diskgroup rdpol select volume_name

Resizing a Volume

To resize a volume, you can:
Specify a desired new volume size.
Add to or subtract from the current volume size.

Disk space must be available.
VxVM assigns disk space, or you can specify disks.
Shrinking a volume enables you to use space elsewhere.
VxVM returns space to the free space pool.

If a volume is resized, its file system must also be resized.
VxFS can be expanded or reduced while mounted.
UFS/HFS can be expanded, but not reduced. HFS needs to be
unmounted to be expanded.
Ensure that the data manager application supports resizing.

Resizing a Volume
If users require more space on a volume, you can increase the size of the volume.
If a volume contains unused space that you need to use elsewhere, you can shrink
the volume.
When the volume size is increased, sufficient disk space must be available in the
disk group. When increasing the size of a volume, VxVM assigns the necessary
new space from available disks. By default, VxVM uses space from any disk in the
disk group, unless you define specific disks.

Resizing a Volume with a File System
Volumes and file systems are separate virtual objects. When a volume is resized,
the size of the raw volume is changed. If a file system exists that uses the volume,
the file system must also be resized.
When you resize a volume using VEA or the vxresize command, the file system
is also resized.

Resizing Volumes with Other Types of Data
For volumes containing data other than file systems, such as raw database data,
you must ensure that the data manager application can support the resizing of the
data device with which it has been configured.


Resizing a Volume: Methods

Method      What Is Resized?
VEA         Both volume and file system
vxresize    Both volume and file system
vxassist    Volume only
fsadm       File system only (VxFS only)

Resizing a Volume and File System: Methods

To resize a volume from the command line, you can use either the vxassist
command or the vxresize command. Both commands can expand or reduce a
volume to a specific size or by a specified amount of space, with one significant
difference:
vxresize automatically resizes a volume's file system.
vxassist does not resize a volume's file system.
When using vxassist, you must resize the file system separately by using the
fsadm command.
When you expand a volume, both commands automatically locate available disk
space unless you designate specific disks to use. When you shrink a volume, the
unused space becomes free space in the disk group.
When you resize a volume, you can specify the length of a new volume in sectors,
kilobytes, megabytes, or gigabytes. The unit of measure is added as a suffix to the
length (s, k, m, or g). If no unit is specified, the default unit is sectors.
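The suffix rules can be illustrated with a small shell helper. This is illustrative only, not part of VxVM, and it assumes the conventional 512-byte sector:

```shell
# Convert a VxVM-style length spec to sectors, assuming 512-byte
# sectors: s = sectors, k = kilobytes, m = megabytes, g = gigabytes,
# and no suffix means the value is already in sectors.
to_sectors() {
  case "$1" in
    *g) echo $(( ${1%g} * 2097152 )) ;;  # 1 GB = 2097152 sectors
    *m) echo $(( ${1%m} * 2048 )) ;;     # 1 MB = 2048 sectors
    *k) echo $(( ${1%k} * 2 )) ;;        # 1 KB = 2 sectors
    *s) echo "${1%s}" ;;
    *)  echo "$1" ;;                     # no suffix: sectors
  esac
}

to_sectors 50m   # prints 102400
to_sectors 1g    # prints 2097152
```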


Resizing a Volume: VEA

Highlight a volume, and select Actions->Resize Volume.
Specify the amount of space to add or subtract (Add by, Subtract by),
or specify a new volume size, or click Max Size to determine the
largest possible size.
Let Volume Manager decide what disks to use for this volume, or
manually select disks for use by this volume: if desired, specify
disks to be used for the additional space.

Resizing a Volume and File System: VEA

Select: The volume to be resized
Navigation path: Actions->Resize Volume
Input:
Add by: To increase the volume size by a specific amount of
space, input how much space should be added to the volume.
Subtract by: To decrease the volume size by a specific amount of
space, input how much space should be removed.
New volume size: To specify a new volume size, input the size.
Max Size: To determine the largest possible size, click Max Size.
Select disks for use by this volume: You can select specific
disks to use and specify mirroring and striping options.
Force: You can force the resize if the size is being reduced and
the volume is active.

Note: When you resize a volume, if a VERITAS file system (VxFS) is mounted
on the volume, the file system is also resized. The file system is not resized if it is
unmounted.


Resizing a Volume: vxresize

vxresize [-b] [-F fs_type] -g diskgroup volume_name [+|-]new_length

Original volume size: 10 MB
vxresize -g mydg myvol 50m
vxresize -g mydg myvol +10m
vxresize -g mydg myvol 40m
vxresize -g mydg myvol -10m

Resizing a Volume and File System: vxresize

The new_length operand can begin with a plus sign (+) to indicate that the new
length is added to the current volume length. A minus sign (-) indicates that the
new length is subtracted. -b runs the process in the background.
The ability to expand or shrink a file system depends on the file system type and
whether the file system is mounted or unmounted.

File System Type    Mounted FS          Unmounted FS
VxFS                Expand and shrink   Not allowed
UFS (Solaris)       Expand only         Expand only
HFS (HP-UX)         Not allowed         Expand only

Example: The size of the volume myvol is 10 MB. To extend myvol to 50 MB:
vxresize -g mydg myvol 50m
To extend myvol by an additional 10 MB:
vxresize -g mydg myvol +10m
To shrink myvol back to a length of 40 MB:
vxresize -g mydg myvol 40m
To shrink myvol by an additional 10 MB:
vxresize -g mydg myvol -10m

Resizing a Volume: vxassist

vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} volume_name size

Original volume size: 20 MB
vxassist -g datadg growto datavol 40m
vxassist -g datadg growby datavol 10m
vxassist -g datadg shrinkto datavol 30m
vxassist -g datadg shrinkby datavol 10m

Resizing a Volume Only: vxassist

growto      Increases volume to specified length
growby      Increases volume by specified amount
shrinkto    Reduces volume to specified length
shrinkby    Reduces volume by specified amount

Resizing a File System Only: fsadm

You may need to resize a file system to accommodate a change in use, for
example, when there is an increased need for space in the file system. You may
also need to resize a file system as part of a general reorganization of disk
usage, for example, when a large file system is subdivided into several smaller
file systems. You can resize a VxFS file system while the file system remains
mounted by using the fsadm command:
fsadm [-b newsize] [-r rawdev] mount_point
Using fsadm to resize a file system does not automatically resize the underlying
volume. When you expand a file system, the underlying device must be large
enough to contain the new larger file system.
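The two-step grow described above (volume first, then file system) can be sketched as follows. The mount point /data, the disk group names, and the sector count are hypothetical, and the `run` helper only echoes each command:

```shell
# run() echoes rather than executes; datadg, datavol, /data, and the
# new size (4194304 sectors = 2 GB at 512-byte sectors) are hypothetical.
run() { echo "+ $*"; }

# 1. Grow the raw volume so the device is large enough:
run vxassist -g datadg growto datavol 2g

# 2. Grow the mounted VxFS file system to match:
run fsadm -b 4194304 /data
```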


Resizing a Dynamic LUN

If you resize a LUN in the hardware, you should resize the
VxVM disk corresponding to that LUN.
Disk headers and other VxVM structures are updated to
reflect the new size.
Intended for devices that are part of an imported disk group.

VEA:
Select the disk that you want to expand.
Select Actions->Resize Disk.

CLI:
vxdisk [-f] -g diskgroup resize dm_name

Example:
vxdisk -g datadg resize datadg01

Resizing a Dynamic LUN

When you resize a LUN in the hardware, you should resize the VxVM disk
corresponding to that LUN. You can use vxdisk resize to update disk headers
and other VxVM structures to match a new LUN size. This command does not
resize the underlying LUN itself.

[Diagram: Computer A and Computer B each have a private SCSI bus with
their own bootdg; disk groups acctdg and engdg reside on a shared SCSI
bus, along with additional disks not in any disk group.]

Moving Data Between Systems


Example: Disk Groups and High Availability
The example in the diagram represents a high availability environment.
In the example, Computer A and Computer B each have their own bootdg on
their own private SCSI bus. The two hosts are also on a shared SCSI bus. On the
shared bus, each host has a disk group, and each disk group has a set of VxVM
disks and volumes. There are additional disks on the shared SCSI bus that have not
been added to a disk group.
If Computer A fails, then Computer B, which is on the same SCSI bus as disk
group acctdg, can take ownership or control of the disk group and all of its
components.
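The takeover described above can be sketched from Computer B's side. The volume name acctvol and the mount point are hypothetical, the mount syntax shown is the Solaris form, and the `run` helper only echoes each command:

```shell
# run() echoes rather than executes; acctvol and /acct are hypothetical.
run() { echo "+ $*"; }

# Computer A crashed, so its host locks are stale; -C clears them:
run vxdg -C import acctdg
run vxvol -g acctdg startall
run mount -F vxfs /dev/vx/dsk/acctdg/acctvol /acct
```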


Deporting a Disk Group

What is a deported disk group?
The disk group and its volumes are unavailable.
The disks cannot be removed.
The disk group cannot be accessed until it is imported.

Before deporting a disk group:
Unmount file systems.
Stop volumes.

When you deport a disk group, you can specify:
A new host
A new disk group name

Deporting a Disk Group

A deported disk group is a disk group over which management control has been
surrendered. The objects within the disk group cannot be accessed, its volumes are
unavailable, and the disk group configuration cannot be changed. (You cannot
access volumes in a deported disk group because the directory containing the
device nodes for the volumes is deleted upon deport.) To resume management of
the disk group, it must be imported.
A disk group cannot be deported if any volumes in that disk group are in use.
Before you deport a disk group, you must unmount file systems and stop any
volumes in the disk group.
Deporting and Specifying a New Host
When you deport a disk group using VEA or CLI commands, you have the option
to specify a new host to which the disk group is imported at reboot. If you know
the name of the host to which the disk group will be imported, then you should
specify the new host during the operation. If you do not specify the new host, then
the disks could accidentally be added to another disk group, resulting in data loss.
You cannot specify a new host using the vxdiskadm utility.
Deporting and Renaming
When you deport a disk group using VEA or CLI commands, you also have the
option to rename the disk group when you deport it. You cannot rename a disk
group when deporting using the vxdiskadm utility.
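The full deport sequence, including the rename and new-host options, can be sketched as follows. The mount point, host name, and disk group names are hypothetical, and the `run` helper only echoes each command:

```shell
# run() echoes rather than executes; /data, datadg, acctdg, and host2
# are hypothetical names.
run() { echo "+ $*"; }

# Quiesce the disk group, then deport it, renaming it to acctdg and
# designating host2 as the host that will import it:
run umount /data
run vxvol -g datadg stopall
run vxdg -n acctdg -h host2 deport datadg
```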

VEA: Select Actions->Deport Disk Group. Specify the disk group to be
deported, and optionally a new name and a new host.
vxdiskadm: "Remove access to (deport) a disk group"
CLI:
vxdg [-n new_name] [-h hostname] deport diskgroup

Deporting a Disk Group:

Disks that were in the disk group now have a state of Deported. If the disk group
was deported to another host, the disk state is Foreign.
Note: If you offline the disks, you must manually online the disks before you
import the disk group. To online a disk, use vxdiskadm option "Enable (online) a
disk device."
Before deporting a disk group, unmount all file systems used within the disk group
that is to be deported, and stop all volumes in the disk group:
umount mount_point
vxvol -g diskgroup stopall

Importing a Disk Group

Importing a disk group reenables access to the disk group.
When you import a disk group, you can:
Specify a new disk group name.
Clear host locks.
Import as temporary.
Force an import.
All volumes are stopped by default after importing a disk group and must be
started before data can be accessed.

Importing and Renaming
A deported disk group cannot be imported if another disk group with the same
name has been created since the disk group was deported. You can import and
rename a disk group at the same time.

Importing and Clearing Host Locks
When a disk group is created, the system writes a lock on all disks in the disk
group. The lock ensures that dual-ported disks (disks that can be accessed
simultaneously by two systems) are not used by both systems at the same time. If a
system crashes, the locks stored on the disks remain, and if you try to import a disk
group containing those disks, the import fails.

Importing As Temporary
A temporary import does not persist across reboots. A temporary import can be
useful, for example, if you need to perform administrative operations on the
temporarily imported disk group. VEA does not support temporary import.

Forcing an Import
A disk group import fails if the VxVM configuration daemon cannot find all of the
disks in the disk group. If the import fails because a disk has failed, you can force
the disk group to be imported. Forcing an import should always be performed with
caution.
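The import variants described above can be sketched side by side. The disk group names are hypothetical, and the `run` helper only echoes each command so the options can be compared without executing anything:

```shell
# run() echoes rather than executes; datadg and tempdg are hypothetical.
run() { echo "+ $*"; }

run vxdg import datadg                 # plain import
run vxdg -C import datadg              # clear stale host locks first
run vxdg -t -n tempdg import datadg    # temporary import under a new name
run vxdg -f import datadg              # force import despite a failed disk
run vxvol -g datadg startall           # volumes must be started afterward
```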


Importing a Disk Group

VEA: Select Actions->Import Disk Group.
Options include:
Clearing host IDs at import
Forcing an import
Starting all volumes
vxdiskadm: "Enable access to (import) a disk group"
CLI:
vxdg [-ftC] [-n new_name] import diskgroup
vxvol -g diskgroup startall

Importing a Disk Group:

By default, when you import a disk group by using VEA, all volumes in the disk
group are started automatically. By default, the vxdiskadm import option starts
all volumes in the disk group. When you import a disk group from the command
line, you must manually start all volumes.
A disk group must be deported from its previous system before it can be imported
to the new system. During the import operation, the system checks for host import
locks. If any locks are found, you are prompted to clear the locks.
To temporarily rename an imported disk group, you use the -t option. This option
imports the disk group temporarily and does not set the autoimport flag, which
means that the import cannot survive a reboot.
To display all disk groups, including deported disk groups:
vxdisk -o alldgs list
DEVICE       TYPE          DISK        GROUP       STATUS
c1t2d0s2     auto:cdsdisk  datadg01    datadg      online
c1t2d1s2     auto:cdsdisk  -           (acctdg)    online

VEA:
Select the disk that you want to rename.
Select Actions->Rename Disk.
Specify the original disk name and the new name.

vxedit rename:
vxedit -g diskgroup rename old_name new_name

Example:
vxedit -g datadg rename datadg01 datadg03

Notes:
The new disk name must be unique within the disk group.
Renaming a disk does not automatically rename subdisks on that disk.

Renaming Disks and Disk Groups

Changing the Disk Media Name
VxVM creates a unique disk media name for a disk when you add a disk to a disk
group. Sometimes you may need to change a disk name to reflect changes of
ownership or use of the disk. Renaming a disk does not change the physical disk
device name. The new disk name must be unique within the disk group.
Before You Rename a Disk
Before you rename a disk, you should carefully consider the change. VxVM
names subdisks based on the disks on which they are located. A disk named
datadg01 contains subdisks that are named datadg01-01, datadg01-02,
and so on. Renaming a disk does not automatically rename its subdisks. Volumes
are not affected when subdisks are named differently from the disks.

[Diagram: Host A deports the disk group and reimports it under a new name.]
Rename on deport:
vxdg -n new_name deport old_name
vxdg import new_name
Rename on import:
vxdg deport old_name
vxdg -n new_name import old_name
In VEA, select Actions->Rename Disk Group.

Renaming a Disk Group


You cannot import or deport a disk group when the target system already has a disk
group of the same name. To avoid name collision or to provide a more appropriate
name for a disk group, you can rename a disk group.
To rename a disk group when moving it from one system to another, you
specify the new name during the deport or during the import operations.
To rename a disk group without moving the disk group, you must still deport
and reimport the disk group on the same system.
The VEA interface has a Rename Disk Group menu option. On the surface, this
option appears to be simply renaming the disk group. However, the option works
by deporting and reimporting the disk group with a new name.
Using the CLI, for example, to rename the disk group datadg to mktdg:
vxdg -n mktdg deport datadg
vxdg import mktdg
vxvol -g mktdg startall
or
vxdg deport datadg
vxdg -n mktdg import datadg
vxvol -g mktdg startall
From the command line, you must restart all volumes in the disk group:
vxvol -g new_name startall

Managing Old Disk Group Versions

All disk groups have a version number based on the Storage
Foundation release.
Each disk group version supports a set of features. You
must upgrade old disk group versions in order to use new
features.

SF Release    Disk Group Version    Supported Disk Group Versions
3.2, 3.5      90                    20-90
4.0           110                   20-110
4.1           120                   20-120
5.0           140                   20-140

To upgrade the disk group version:
In VEA, select the disk group to be upgraded, then select
Actions->Upgrade Disk Group Version.
In CLI, type:
vxdg [-T version] upgrade diskgroup

Managing Old Disk Group Versions

Upgrading a Disk Group
All disk groups have an associated version number. Each VxVM release supports a
specific set of disk group versions and can import and perform tasks on disk
groups with those versions. Some new features and tasks only work on disk groups
with the current disk group version, so you must upgrade existing disk groups in
order to perform those tasks.
Once you upgrade a disk group, the disk group becomes incompatible with earlier
releases of VxVM that do not support the new version. Upgrading the disk group
version is an online operation. You cannot downgrade a disk group version.

Displaying the Disk Group Version
In the VEA Disk Group Properties window, if the Current version property is Yes,
then the disk group version is current.
In CLI, type:
vxdg list newdg
Group:   newdg
dgid:    971216408.1133.cassius
version: 140
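The check-then-upgrade flow described above can be sketched as follows. The disk group name is hypothetical, and the `run` helper only echoes each command:

```shell
# run() echoes rather than executes; datadg is a hypothetical name.
run() { echo "+ $*"; }

run vxdg list datadg          # the "version:" line shows the current version
run vxdg upgrade datadg       # upgrade to the highest version this release supports
run vxdg -T 140 upgrade datadg   # or request an explicit version
```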


CDS Disk Groups

CDS disk groups are used for seamless transfer of data
between different platforms. For example, for moving
copies of data to a backup server that is on a different OS.
CDS disk groups are created by default as of VxVM 4.x.
Disk groups created before version 4.x are non-CDS.
CDS attribute: cds=on
DG version: version=110 (or higher)
A CDS disk group cannot have non-CDS disks in it.
However, a CDS disk can be added to a non-CDS disk
group as long as the disk group version supports it.

Requirements for CDS Disk Groups

The CDS attribute indicates that the disk group can be shared across platforms.
CDS disk groups have fields indicating which platform-type created the disk group
and which platform-type last imported the disk group, in addition to device quotas.

Converting a Non-CDS Disk Group to a CDS Disk Group

The disk group must be in good condition.
Disk groups can be converted while online or offline.
Use the CDS conversion utility vxcdsconvert to convert a
VxVM non-CDS disk group to a CDS disk group:
vxcdsconvert [-A] [-d defaultsfile] -g diskgroup [-o novolstop] alignment|alldisks|disk name|group [attribute]
For example, to convert the disk group olddg to a CDS
disk group while its volumes are still online, type:
vxcdsconvert -g olddg -o novolstop group

Converting a Non-CDS Disk Group to a CDS Disk Group

Requirements for Converting a Non-CDS Disk Group to a CDS Disk Group
The disk group must be in good condition:
No dissociated or disabled objects
No sparse plexes
No volumes requiring recovery or having pending snapshot operations
No objects in an error state
Disk groups can be converted online or offline:
Performing the conversion online, while use of the disk group continues, may
greatly increase the amount of time required for conversion.
Performing the conversion offline requires minimal online time.
What Happens When a Disk Group Is Converted?
The following are some other factors to consider when converting a disk group:
A non-CDS disk group is upgraded (using the vxdg upgrade command).
If the non-CDS disk group has one or more disks that are not CDS disks, these
disks are converted to CDS disks.
If the non-CDS disk group does not have a CDS-compatible disk group
alignment, the objects go through relayout so that they are CDS-compatible.
Applications using disks that require format conversion are terminated for the
duration of the disk conversion process (unless novolstop is used).
Using novolstop may require objects to be evacuated and then unrelocated.


Lesson Summary

Key Points
This lesson described how to add a mirror to and
remove a mirror from an existing volume, change
the volume read policy, and resize an existing
volume. You also learned how to rename disks
and disk groups, upgrade disk groups, and
convert non-CDS disk groups to CDS.

Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Release Notes

Lab 5: Making Basic Configuration Changes

This lab provides practice in making basic
configuration changes.
In this lab, you add mirrors and logs to
existing volumes, and change the volume
read policy. You also resize volumes, rename
disk groups, and move data between systems.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 5: Making Basic Configuration Changes."
Appendix B provides complete lab instructions and solutions: "Lab 5 Solutions: Making Basic Configuration Changes."


Lesson 6
Administering File Systems

Lesson Introduction

Lesson 1: Virtual Objects
Lesson 2: Installation and Interfaces
Lesson 3: Creating a Volume and File System
Lesson 4: Selecting Volume Layouts
Lesson 5: Making Basic Configuration Changes
Lesson 6: Administering File Systems
Lesson 7: Resolving Hardware Problems

Lesson Topics and Objectives

After completing this lesson, you will be able to:
Topic 1: Comparing the Allocation Policies of VxFS and Traditional File
Systems: Describe the benefits of VxFS extent-based allocation over
traditional block-based allocation.
Topic 2: Using VERITAS File System Commands: Apply the appropriate
VxFS commands from the command line.
Topic 3: Controlling File System Fragmentation: Defragment a VxFS file
system.
Topic 4: Logging in VxFS: Perform logging in VxFS by using the intent log
and the file change log.

Traditional Block-Based Allocation

Block-based allocation:
Allocates space to the next rotationally adjacent block
Allocates blocks at random from a free block map
Becomes less effective as the file system fills
Requires extra disk I/O to write metadata

Comparing the Allocation Policies of VxFS and Traditional File Systems

Both VxFS and traditional UNIX file systems, such as UFS, use index tables to
store information and location information about blocks used for files. However,
VxFS allocation is extent-based, while other file systems are block-based.
Block-based allocation: File systems that use block-based allocation assign
disk space to a file one block at a time.
Extent-based allocation: File systems that use extent-based allocation assign
disk space in groups of contiguous blocks, called extents.

Example: UFS Block-Based Allocation
UFS allocates space for files one block at a time. When allocating space to a file,
UFS uses the next rotationally adjacent block until the file is stored.
UFS can perform at a level similar to an extent-based file system on sequential I/O
by using a technique called block clustering. In UFS, the maxcontig file system
tunable parameter can be used to cluster reads and writes together into groups of
multiple blocks. Through block clustering, writes are delayed so that several small
writes are processed as one large write. Sequential read requests can be processed
as one large read through read-ahead techniques.
Block-based allocation requires extra disk I/O to write file system block structure
information, or metadata. Metadata is always written synchronously to disk, which
can significantly slow overall file system performance. Over time, block-based
allocation produces a fragmented file system with random file access.


VxFS Extent-Based Allocation

Extent: A set of contiguous blocks

An address-length pair consists of:
- Starting block
- Length of extent

- Extent size is based on the size of I/O write requests.
- When a file expands, another extent is allocated.
- Additional extents are progressively larger, reducing the total number of extents used by a file.

VxFS Extent-Based Allocation

VERITAS File System selects a contiguous range of file system blocks, called an extent, for inclusion in a file. The number of blocks in an extent varies and is based on either the I/O pattern of the application or explicit requests by the user or programmer. Extent-based allocation enables larger I/O operations to be passed to the underlying drivers.
VxFS attempts to allocate each file in one extent of blocks. If this is not possible, VxFS attempts to allocate all extents for a file close to each other.
Each file is associated with an index block, called an inode. In an inode, an extent is represented as an address-length pair, which identifies the starting block address and the length of the extent in logical blocks. This enables the file system to directly access any block of the file.
VxFS automatically selects an extent size by using a default allocation policy that is based on the size of I/O write requests. The default allocation policy attempts to balance two goals:
- Optimum I/O performance through large allocations
- Minimal file system fragmentation through allocation from space available in the file system that best fits the data

The first extent allocated is large enough for the first write to the file. Typically, the first extent is the smallest power of 2 that is larger than the size of the first write, with a minimum extent allocation of 8K. Additional extents are progressively larger, doubling the size of the file with each new extent. This method reduces the total number of extents used by a single file.


Using VxFS Commands

- VxFS can be used as the basis for any file system except for file systems used to boot the system.
- Specify directories in the PATH environment variable to access VxFS-specific commands.
- VxFS uses standard file system management syntax:
  command [fs_type] [-o VxFS_options] [generic_options] special|mount_point
- Use the file system switchout to access VxFS-specific versions of standard commands.
- Without the file system switchout, the file system type is taken from the default specified in the default file system file. To use VxFS as your default, change this file to contain vxfs.

Using VERITAS File System Commands

You can generally use VERITAS File System (VxFS) as an alternative to other disk-based, OS-specific file systems, except for the file systems used to boot the system. File systems used to boot the system are mounted read-only in the boot process, before the VxFS driver is loaded.
VxFS can be used in place of:
- UNIX File System (UFS) on Solaris, except for root, /usr, /var, and /opt
- Hierarchical File System (HFS) on HP-UX, except for /stand
- Journaled File System (JFS) and Enhanced Journaled File System (JFS2) on AIX, except for root and /usr
- Extended File System Version 2 (EXT2) and Version 3 (EXT3) on Linux, except for root, /boot, /etc, /lib, /var, and /usr
Location of VxFS Commands

Platform   Location of VxFS Commands
Solaris    /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs
HP-UX      /opt/VRTS/bin, /sbin/fs, /usr/lbin/fs
AIX        /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs
Linux      /sbin, /usr/lib/fs/vxfs, /opt/VRTS/bin

Specify these directories in the PATH environment variable.
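As a concrete sketch, the Solaris directories from the table can be appended to PATH in a shell profile (the directory list shown is the Solaris one; substitute your platform's paths from the table):

```shell
# Append the Solaris VxFS command directories to the search path so that
# VxFS-specific commands are found without typing full paths.
VXFS_DIRS="/opt/VRTSvxfs/sbin:/usr/lib/fs/vxfs:/etc/fs/vxfs"
PATH="$PATH:$VXFS_DIRS"
export PATH
```

Placing these lines in /etc/profile or a user's .profile makes the change persistent across logins.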


General File System Command Syntax

To access VxFS-specific versions, or wrappers, of standard commands, you use the virtual file system switchout mechanism followed by the file system type, vxfs. The switchout mechanism directs the system to search the appropriate directories for VxFS-specific versions of commands.

Platform   File System Switchout
Solaris    -F vxfs
HP-UX      -F vxfs
AIX        -v vxfs
Linux      -t vxfs

Using VxFS Commands by Default

If you do not use the switchout mechanism, then the file system type is taken from the default specified in the OS-specific default file system file. If you want VERITAS File System to be your default file system type, then you change the default file system file to contain vxfs.

Platform   Default File System File
Solaris    /etc/default/fs
HP-UX      /etc/default/fs
AIX        /etc/vfs
Linux      /etc/default/fs
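For example, on Solaris the default file system file holds a single LOCAL= line (normally LOCAL=ufs). The sketch below switches it to vxfs, but operates on a demo copy rather than the live /etc/default/fs:

```shell
# Demo copy of the Solaris default file system file; on a real system the
# file is /etc/default/fs and editing it requires root privileges.
demo_file=./default_fs_demo
echo 'LOCAL=ufs' > "$demo_file"           # typical starting state
sed 's/^LOCAL=.*/LOCAL=vxfs/' "$demo_file" > "$demo_file.new" &&
    mv "$demo_file.new" "$demo_file"      # make vxfs the default type
cat "$demo_file"                          # prints: LOCAL=vxfs
```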


VxFS-Specific mkfs Options

mkfs [fs_type] [-o specific_options] special [size]

-o N
  Provides information only; does not create the file system

-o bsize=n
  Sets the logical block size
  Default: 1024 bytes (1K) for most file system sizes
  Cannot be changed after creation; resizing the file system does not change the block size. In most cases, the default is best.

-o largefiles|nolargefiles
  Supports files >= 2 gigabytes (or >= 8 million files)
  Default: largefiles

-o version=n
  Specifies the disk layout version
  Valid values are 4, 5, 6, and 7; default: Version 7

-o logsize=n
  Sets the size of the logging area
  The default depends on file system size and is sufficient for most workloads.
  Log size can be changed after creation by using fsadm.
Using mkfs Command Options

You can set a variety of file system properties when you create a VERITAS file system by adding VxFS-specific options to the mkfs command.
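As a sketch, several of these options can be combined on one command line. The volume below is hypothetical and mkfs requires a real VxFS installation, so the command is printed rather than executed; remove the leading echo to run it for real:

```shell
# Build (and print) a mkfs command that creates a VxFS file system with
# large-file support, a 16384-block intent log, and the Version 7 layout
# on the hypothetical volume datavol in disk group datadg.
vxfs_mkfs_cmd() {
    echo mkfs -F vxfs -o largefiles,logsize=16384,version=7 \
        /dev/vx/rdsk/datadg/datavol
}
vxfs_mkfs_cmd
```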


Other VxFS Commands

Mount options:
- mount -r                  Mounts as read only
- mount -v                  Displays mounted file systems
- mount -p                  Displays in file system table format (not on Linux)
- mount -a                  Mounts all in the file system table

Unmount options:
- umount /mydata            Unmounts a file system
- umount -a                 Unmounts all mounted file systems
- umount -o force /mydata   Forces an unmount

Display file system type:
  fstyp -v /dev/vx/dsk/datadg/datavol

Display free space:
  df -F vxfs /mydata

Identifying File System Type

If you do not know the file system type of a particular file system, you can determine the file system type by using the fstyp command. You can use the fstyp command to describe either a mounted or unmounted file system.
In VEA, right-click a file system in the object tree, and select Properties. The file system type is displayed in the File System Properties window.

Identifying Free Space

To report the number of free disk blocks and inodes for a VxFS file system, you use the df command. The df command displays the number of free blocks and free inodes in a file system or directory by examining the counts kept in the superblocks. Extents smaller than 8K may not be usable for all types of allocation, so the df command does not count free blocks in extents below 8K when reporting the total number of free blocks.
In VEA, right-click a file system, and select Properties to display free space and usage information.


Fragmentation

Degree of fragmentation depends on:
- File system usage
- File system activity patterns

Fragmentation types:
- Directory
- Extent

[Figure: block layout after initial allocation compared with the layout after defragmentation]

Controlling File System Fragmentation

In a VERITAS file system, when free resources are initially allocated to files, they are aligned in the most efficient order possible to provide optimal performance. On an active file system, the original order is lost over time as files are created, removed, and resized. As space is allocated and deallocated from files, the available free space becomes broken up into fragments. This means that space has to be assigned to files in smaller and smaller extents. This process is known as fragmentation. Fragmentation leads to degraded performance and availability.
VxFS provides online reporting and optimization utilities to enable you to monitor and defragment a mounted file system. These utilities are accessible through the file system administration command, fsadm.

Types of Fragmentation

VxFS addresses two types of fragmentation:
- Directory fragmentation: As files are created and removed, gaps are left in directory inodes. This is known as directory fragmentation. Directory fragmentation causes directory lookups to become slower.
- Extent fragmentation: As files are created and removed, the free extent map for an allocation unit changes from having one large free area to having many smaller free areas. Extent fragmentation occurs when files cannot be allocated in contiguous chunks and more extents must be referenced to access a file. In a case of extreme fragmentation, a file system may have free space, none of which can be allocated.


Monitoring Fragmentation

To monitor directory fragmentation:

  fsadm -D /home

The report columns are: Dirs Searched, Total Blocks, Immed Dirs, Immeds to Add, Dirs to Reduce, and Blocks to Reduce. A high total in the Dirs to Reduce column indicates fragmentation.

To monitor extent fragmentation:

  fsadm -E /home

  % Free blocks in extents smaller than 64 blks: 8.35
  % Free blocks in extents smaller than 8 blks:  4.16
  % blks allocated to extents 64 blks or larger: 45.81

The output displays percentages of free and allocated blocks per extent size.

To monitor fragmentation in VEA, select File System->Properties->Statistics.

Running Fragmentation Reports

You can monitor fragmentation in a VERITAS file system by running reports that describe fragmentation levels. You use the fsadm command to run reports on both directory and extent fragmentation. The df command, which reports on file system free space, also provides information useful in monitoring fragmentation.

Interpreting Fragmentation Reports

In general, for optimum performance, the percentage of free space in a file system should not fall below 10 percent. A file system with 10 percent or more free space has less fragmentation and better extent allocation.
A badly fragmented file system will have one or more of the following characteristics:
- Greater than 5 percent of free space in extents of less than 8 blocks in length
- More than 50 percent of free space in extents of less than 64 blocks in length
- Less than 5 percent of the total file system size available as free extents in lengths of 64 or more blocks


Defragmenting a File System

fsadm [-d] [-D] [-e] [-E] [-t time] [-p passes] [-s] mount_point

During extent reorganization:
- Small files are made contiguous.
- Large files are built from large extents.
- Small, recent files are moved near the inodes.
- Large, old files are moved to the end of the allocation unit.
- Free space is clustered in the center of the allocation unit.

Example: fsadm -e -E -s /mnt1

During directory reorganization:
- Valid entries are moved to the front.
- Free space is clustered in the center of the allocation unit.
- Directories are packed into the inode area.
- Directories are placed before other files.
- Entries are sorted by access time.

Example: fsadm -d -D /mnt1

In VEA, highlight a file system, and select Actions->Defrag File System.

VxFS Defragmentation

You can use the online administration utility fsadm to defragment, or reorganize, file system directories and extents. The fsadm utility defragments a file system mounted for read/write access by:
- Removing unused space from directories
- Making all small files contiguous
- Consolidating free blocks for file system use
Only a privileged user can reorganize a file system.

Defragmenting Extents

During extent reorganization, entries are sorted by the time of last access.

Other fsadm Defragmentation Options

If you specify both -d and -e, directory reorganization is always completed before extent reorganization.
If you use the -D and -E with the -d and -e options, fragmentation reports are produced both before and after the reorganization.
You can use the -t and -p options to control the amount of work performed by fsadm, either in a specified time or by a number of passes. By default, fsadm runs five passes. If both -t and -p are specified, fsadm exits if either of the terminating conditions is reached.


Scheduling Defragmentation

- The frequency of defragmentation depends on usage, activity patterns, and the importance of performance.
- Run defragmentation on demand or as a cron job:
  - Daily or weekly for frequently used file systems
  - Monthly for infrequently used file systems
- Adjust defragmentation intervals based on reports.
- To defragment using VEA, highlight a file system and select Actions->Defrag File System.

Scheduling Defragmentation

The best way to ensure that fragmentation does not become a problem is to defragment the file system on a regular basis. The frequency of defragmentation depends on file system usage, activity patterns, and the importance of file system performance. In general, follow these guidelines:
- Schedule defragmentation during a time when the file system is relatively idle.
- For frequently used file systems, you should schedule defragmentation daily or weekly.
- For infrequently used file systems, you should schedule defragmentation at least monthly.
- Full file systems tend to fragment and are difficult to defragment. You should consider expanding the file system.
To determine the defragmentation schedule that is best for your system, select what you think is an appropriate interval for running extent reorganization and run the fragmentation reports both before and after the reorganization. If the degree of fragmentation is approaching the bad fragmentation figures, then the interval between fsadm runs should be reduced. If the degree of fragmentation is low, then the interval between fsadm runs can be increased.
You should schedule directory reorganization for file systems when the extent reorganization is scheduled. The fsadm utility can run on demand and can be scheduled regularly as a cron job.
The defragmentation process can take some time. You receive an alert when the process is complete.
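One possible shape for such a cron job is sketched below. The fsadm path and the 1800-second -t limit are illustrative assumptions; check your platform's fsadm location and the time units documented for your release:

```shell
# Hypothetical crontab entry: defragment /home every Sunday at 02:00,
# running directory (-d) and extent (-e) reorganization with before and
# after reports (-D -E), limited to roughly 30 minutes of work.
0 2 * * 0 /opt/VRTS/bin/fsadm -F vxfs -d -D -e -E -t 1800 /home
```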


Testing Performance Using vxbench

vxbench_platform -w workload [options] filename

Example: Sequential write

  vxbench_platform -w write -i iosize=8,iocount=131072 /mnt/testfile01

  total: 0.560 sec 14623.53 KB/s cpu: 0.10 sys 0.01 user

The output displays elapsed time in seconds, throughput in KB/second, and CPU time for the system and the user in seconds.

Example: Random write

  vxbench_platform -w rand_write -i iosize=8,iocount=131072,maxfilesize=1048576 \
      /mnt/testfile01

Note: Separate suboptions using commas with no spaces.

Benchmarking Using vxbench

What Is Benchmarking?

Benchmarking is a testing technique that enables you to measure performance based on a set of standards, or benchmarks. You can use benchmarking techniques to try to predict the performance of a new file system configuration or to analyze the performance of an existing file system.

What Is vxbench?

VERITAS engineering developed a benchmarking tool called vxbench that enables you to create different combinations of I/O workloads.
The vxbench program is installed as part of the VRTSspt software installation and exists under the /opt/VRTSspt/FS/VxBench directory.

Notes on Testing Performance

- The vxbench program applies a workload to a file system and measures performance based on how long file system operations take. If anything else is using the file system at the same time, then the vxbench performance reports are affected.
- For sequential workloads: iosize x iocount = size of the file.
- The iosize and maxfilesize parameters are defined in units of 1K; therefore, iosize=8 defines a size of 8K.
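For the sequential-write example on the slide, the file size works out as follows:

```shell
# iosize is in 1K units, so iosize=8 means 8K per write; with
# iocount=131072 writes, the test file is 8K * 131072 = 1 GB.
iosize_kb=8
iocount=131072
file_mb=$(( iosize_kb * iocount / 1024 ))
echo "test file size: ${file_mb} MB"    # prints: test file size: 1024 MB
```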


The vxbench_platform Command

In the syntax, you specify the command followed by a type of workload. Valid workloads are:
- read: Performs a sequential read of the test files
- write: Performs a sequential write of the test files
- rand_read: Performs a random read of the test files
- rand_write: Performs a random write of the test files
- rand_mixed: Performs a mix of random reads and writes
- mmap_read: Uses mmap to read the test files
- mmap_write: Uses mmap to overwrite the test files

After specifying the type of workload, you can add specific options that characterize the test that you want to perform.
Finally, you specify the name of the file on which to run the test. If you specify multiple filenames, vxbench_platform runs tests in parallel to each file, which simulates multiple simultaneous users. If you use the option that specifies multiple threads, then each simulated user runs multiple threads. The total number of I/O threads is the number of users multiplied by the number of threads.
Command Options

By adding options to the vxbench_platform command, you can simulate a wide variety of I/O environments. The following table describes some of these options and their uses. You can display a complete list of vxbench_platform command options by typing vxbench_platform -h.

Option   Use
-h       Prints a detailed help message
-p       Uses processes for users and uses threads for multithreaded I/O (This is the default option.)
-P       Uses processes for users and for multithreaded I/O
-t       Uses threads for users and for multithreaded I/O
-m       Locks I/O buffers in memory
-s       For multiuser tests, only prints summary results
-v       For multithreaded tests, prints per-thread results
-k       Prints throughput in kilobytes/second (This is the default option.)
-M       Prints throughput in megabytes/second
-i [suboptions]   Specifies suboptions describing the test you want to perform

The vxbench program is included in the VRTSspt package.


Logging in VxFS

- The intent log records pending file system changes before metadata is changed.
- If the system crashes, the intent log is replayed by VxFS fsck.

[Figure: structural files and the intent log within the file system]

Logging in VxFS

Role of the Intent Log

A file system may be left in an inconsistent state after a system failure. Recovery of structural consistency requires examination of file system metadata structures. VERITAS File System provides fast file system recovery after a system failure by using a tracking feature called intent logging, or journaling. Intent logging is the process by which intended changes to file system metadata are written to a log before changes are made to the file system structure. Once the intent log has been written, the other updates to the file system can be written in any order. In the event of a system failure, the VxFS fsck utility replays the intent log to nullify or complete file system operations that were active when the system failed.
Traditionally, the length of time taken for recovery using fsck was proportional to the size of the file system. For large disk configurations, running fsck is a time-consuming process that checks, verifies, and corrects the entire file system.
The VxFS version of the fsck utility performs an intent log replay to recover a file system without completing a full structural check of the entire file system. The time required for log replay is proportional to the log size, not the file system size. Therefore, the file system can be recovered and mounted seconds after a system failure. Intent log recovery is not readily apparent to users or administrators, and the intent log can be replayed multiple times with no adverse effects.
Note: Replaying the intent log may not completely recover the damaged file system structure if the disk suffers a hardware failure. Such situations may require a complete system check using the VxFS fsck utility.

Maintaining VxFS Consistency

To check file system consistency by using the intent log for the VxFS file system on the volume datavol:
  fsck [fs_type] /dev/vx/rdsk/datadg/datavol

To perform a full check without using the intent log:
  fsck [fs_type] -o full,nolog /dev/vx/rdsk/datadg/datavol

To check two file systems in parallel using the intent log:
  fsck [fs_type] -o p /dev/rdsk/c1t2d0s4 /dev/rdsk/c1t0d0s5

To perform a file system check using the VEA GUI, highlight an unmounted file system, and select Actions->Check File System.

Maintaining File System Consistency

You use the VxFS-specific version of the fsck command to check the consistency of and repair a VxFS file system. The fsck utility replays the intent log by default, instead of performing a full structural file system check, which is usually sufficient to set the file system state to CLEAN. You can also use the fsck utility to perform a full structural recovery in the unlikely event that the log is unusable.
The syntax for the fsck command is:

  fsck [fs_type] [-o full,nolog] [generic_options] [-y|-Y] [-n|-N] special

For a complete list of generic options, see the fsck(1M) manual page. Some of the generic options include:

Option   Description
-m       Checks, but does not repair, a file system before mounting
-n|-N    Assumes a response of no to all prompts by fsck (This option does not replay the intent log and performs a full fsck.)
-V       Echoes the expanded command line but does not execute the command
-y|-Y    Assumes a response of yes to all prompts by fsck (If the file system requires a full fsck after the log replay, then a full fsck is performed.)

The -o p option can only be run with log fsck, not with full fsck.
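As an illustration of composing these flags, the sketch below builds a full structural check command. The volume is hypothetical and fsck needs a real VxFS device, so the command is printed rather than executed (on a live Solaris system the generic -V flag gives a similar dry-run):

```shell
# Print a full structural check that skips the intent log and answers yes
# to all prompts; drop the leading 'echo' to run it against a real volume.
vxfs_full_fsck() {
    echo fsck -F vxfs -o full,nolog -y /dev/vx/rdsk/datadg/datavol
}
vxfs_full_fsck
```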


Resizing the Intent Log

Intent log size can be changed using fsadm:

  fsadm [-F vxfs] -o logsize=size[,logvol=vol_name] mount_point

- logsize specifies a new log size; logvol places the log on a separate device.
- Use the fsadm -L mount_point command to get detailed information on the current intent log.
- Larger log sizes may improve performance for intensive synchronous writes, but may increase:
  - Recovery time
  - Memory requirements
  - Log maintenance time
- In VEA, highlight a file system, and select Actions->Set Intent Log Options. (Not on HP-UX)

Resizing the Intent Log

The VxFS intent log is allocated when the file system is first created. The size of the intent log is based on the size of the file system: the larger the file system, the larger the intent log.
- Default log size: Based on file system size; in the range of 256K to 64 MB
- Default maximum log size: 64 MB (Version 6 and 7 layouts); 16 MB (Version 4 and 5 layouts)
With the Version 6 disk layout, you can dynamically increase or decrease the intent log size using the log option of the fsadm command. The allocation can be directed to a specified intent logging device, as long as the device exists and belongs to the same volume set as the file system.
Increasing the size of the intent log can improve system performance because it reduces the number of times the log wraps around. However, increasing the intent log size can lead to greater times required for a log replay if there is a system failure. A large log provides better performance on metadata-intensive workloads. Memory requirements for log maintenance increase as the log size grows. The log size should not be more than 50 percent of the physical memory size of the system.
A small log uses less space on the disk and leaves more room for file data. For example, setting a log size smaller than the default log size may be appropriate for a small floppy device. On small systems, you should ensure that the log size is not greater than half the available swap space.
Note: The logvol option to place the intent log on a separate volume can only be used with multi-volume file systems (file systems on volume sets).
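The 50-percent-of-memory guideline can be checked with simple arithmetic. The memory size below is a hypothetical example; on a real Solaris host you would query it with a tool such as prtconf:

```shell
# Cap a requested intent log size at half of physical memory, per the
# guideline above (values are illustrative).
phys_mem_mb=512                     # assume a 512 MB system
requested_log_mb=64                 # the Version 6/7 default maximum
max_log_mb=$(( phys_mem_mb / 2 ))
if [ "$requested_log_mb" -gt "$max_log_mb" ]; then
    requested_log_mb=$max_log_mb
fi
echo "intent log size: ${requested_log_mb} MB"   # prints: intent log size: 64 MB
```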


Logging mount Options

mount -F vxfs [-o specific_options]

-o log        All structural changes logged
-o delaylog   Default; some logging delayed; improves performance
-o tmplog     Most logging delayed; great performance improvement, but changes could be lost

The options trade integrity for performance: log gives the most integrity, and tmplog gives the most performance.

Controlling Logging Behavior

VERITAS File System provides VxFS-specific logging options that you can use when mounting a file system to alter default logging behavior. By default, when you mount a VERITAS file system, the -o delaylog option is used with the mount command. With this option, some system calls return before the intent log is written. This logging delay improves the performance of the system, and this mode approximates traditional UNIX guarantees for correctness in case of system failures. You can specify other mount options to change logging behavior to further improve performance at the expense of reliability.

Selecting mount Options for Logging

You can add VxFS-specific mount options to the standard mount command using -o in the syntax:

  mount [-F vxfs] [generic_options] [-o specific_options] special mount_point

Logging mount options include:
- -o log
- -o delaylog
- -o tmplog
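A sketch of mounting with each logging mode follows. The device and mount point are the hypothetical ones used elsewhere in this lesson, so the commands are printed rather than executed; remove the echo to run them for real:

```shell
# Show one mount command per VxFS logging mode.
for mode in log delaylog tmplog; do
    echo mount -F vxfs -o "$mode" /dev/vx/dsk/datadg/datavol /mydata
done
```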


Logging and Performance

To select the best logging mode for your environment:
- Understand the different logging options.
- Test sample loads and compare performance results.
- Consider the type of operations performed in addition to the workload.

Performance of I/O to devices can improve if writes are performed in a particular size, or in a multiple of that size. To specify an I/O size to be used for logging, use the mount option:

  -o logiosize=size

Place the intent log on a separate volume and disk.

Logging and VxFS Performance

In environments where data reliability and integrity are of the highest importance, logging is essential. However, logging does incur performance overhead. If maximum data reliability is less important than maximum performance, then you can experiment with logging mount options. When selecting mount options for logging to try to improve performance, follow the guidelines summarized above.


File Change Log

- Tracks changes to files and directories in a file system for use by backup utilities, webcrawlers, search engines, and replication programs.
- In contrast to the intent log, the FCL is not used for recovery.
- Location: mount_point/lost+found/changelog

To activate/deactivate an FCL for a mounted file system (default is off):
  fcladm on|off mount_point

To remove an FCL (the FCL must be off first):
  fcladm rm mount_point

To obtain the current FCL state for a mounted file system:
  fcladm state mount_point

To print the file change log:
  fcladm print mount_point

To translate the log entries in inodes to full paths:
  vxlsino inode_number mount_point

File Change Log

The VxFS File Change Log (FCL) is another type of log that tracks changes to files and directories in a file system. Applications that can make use of the FCL are those that are typically required to scan an entire file system to discover changes since the last scan, such as backup utilities, webcrawlers, search engines, and replication programs. The File Change Log records file system changes such as creates, links, unlinks, renaming, data appended, data overwritten, data truncated, extended attribute modifications, holes punched, and other file property updates.
Note: The FCL records only that data has changed, not the actual data. It is the responsibility of the application to examine the files that have changed data to determine which data has changed.
The FCL stores changes in a sparse file in the file system namespace. The FCL log file is always located in mount_point/lost+found/changelog.

Comparing the Intent Log and the File Change Log

The intent log is used to speed recovery of the file system after a crash. The FCL has no such role. Instead, the FCL is used to improve the performance of applications. For example, your IT department mandates that all systems undergo a virus scan once a week. The virus scan takes some time, and your system takes a performance hit during the scan. To improve this situation, an FCL could be used with the virus scanner. The virus scanner, if using an FCL, could read the log, find all files on your system that are either new or that have been modified, and scan only those files.
The FCL is used with NetBackup to greatly improve the speed of incremental backups.
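The FCL lifecycle described above can be sketched end to end. The mount point is the hypothetical /mydata from earlier examples, and fcladm needs a real VxFS mount, so each command is printed rather than executed:

```shell
# Walk the file change log through its lifecycle on /mydata; remove the
# 'echo' inside run() to execute on a system with VxFS installed.
run() { echo "+ $*"; }
run fcladm on /mydata       # activate the file change log
run fcladm state /mydata    # report the current state
run fcladm off /mydata      # deactivate (required before removal)
run fcladm rm /mydata       # remove the log file
```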


Lesson Summary

Key Points
This lesson describes how to administer file systems using VERITAS File System (VxFS). You learned how to defragment a file system and use the logging capabilities in VxFS.

Reference Materials
- VERITAS File System Administrator's Guide
- VERITAS Volume Manager Administrator's Guide

Lab 6: Administering File Systems

In this lab, you practice file system administration, including defragmentation and administering the file change log.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions. "Lab 6: Administering File Systems"
Appendix B provides complete lab instructions and solutions. "Lab 6 Solutions: Administering File Systems"

Copyright ~ 2006 Symanter. Corporation. All rights reserved

6-21


Lesson 7: Resolving Hardware Problems

Lesson Introduction

- Lesson 1: Virtual Objects
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems


Lesson Topics and Objectives

After completing this lesson, you will be able to:
- Topic 1: How Does VxVM Interpret Failures in Hardware: interpret failures in hardware.
- Topic 2: Recovering Disabled Disk Groups: recover disabled disk groups.
- Topic 3: Resolving Disk Failures: resolve disk failures.
- Topic 4: Managing Hot Relocation at the Host Level: manage hot relocation at the host level.


Potential Failures in a Storage Environment

(The slide diagram shows disk arrays and JBOD storage connected to hosts through a SAN.)

Temporary failures:
- Power cut
- Fibre connection failure
- Complete SAN failure
- SAN switch failure
- HBA card/port failure

Can be permanent or temporary:
- LUN/disk failure
- Complete disk array failure
- Site failure

How Does VxVM Interpret Failures in Hardware

VxVM interprets failures in hardware in a variety of ways, depending on the type
of failure.


I/O Error Handling

If the LUN/disk cannot be accessed at all, dynamic multipathing (DMP)
disables the path. If there is only one path, the DMP node is disabled.

Identifying I/O Failure

Disk Failure
Data availability and reliability are ensured through most failures if you are using
VxVM redundancy features, such as mirroring or RAID-5. If the volume layout is
not redundant, loss of a drive may result in loss of data and may require recovery
from backup. For I/O failure on a nonredundant volume, VxVM reports the error,
but it does not take any further action.

Disk Failure Handling
When a drive becomes unavailable during an I/O operation or experiences
uncorrectable I/O errors, the operating system detects SCSI failures and reports
them to VxVM. The method that VxVM uses to process the SCSI failure depends
on whether the failure occurs on a nonredundant or a redundant volume.
on whether the failure occurs on a nonrcdundant or a redundant volume.
FAILING vs. FAILED Disks

Volume Manager differentiates between FAILING and FAILED drives:
- FAILING: If there are uncorrectable I/O failures on the public region of the
  drive, but VxVM can still access the private region of the drive, the disk is
  marked as FAILING.
- FAILED: If VxVM cannot access the private region or the public region, the
  disk is marked as FAILED.

The condition flags and object states are described in detail in the Maintenance
course.
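This distinction can be checked from the command line. A hedged sketch, in which the disk group datadg and the disk media name datadg02 are illustrative:

```shell
# The STATUS column of the summary listing shows the condition,
# e.g. "online failing"; the per-disk listing shows the flags field.
vxdisk list
vxdisk -g datadg list datadg02 | grep flags

# Once the underlying hardware problem is fixed, the failing flag
# can be cleared manually:
vxedit -g datadg set failing=off datadg02
```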


Identifying Disabled Disk Groups

VxVM disk and disk group records before the failure:

# vxdisk list
DEVICE       TYPE            DISK        GROUP     STATUS
disk0_1      auto:cdsdisk    datadg01    datadg    online
disk0_2      auto:cdsdisk    datadg02    datadg    online
disk0_3      auto:none       -           -         online invalid

# vxdg list
NAME         STATE           ID
datadg       enabled,cds     1150193039.58.train1

After the failure:

# vxdisk list
DEVICE       TYPE            DISK        GROUP     STATUS
disk0_1      auto:cdsdisk    datadg01    datadg    online dgdisabled
disk0_2      auto:cdsdisk    datadg02    datadg    online dgdisabled
disk0_3      auto            -           -         error

# vxdg list
NAME         STATE           ID
datadg       disabled        1150193039.58.train1

When disk groups are disabled, the status changes to dgdisabled.


Identifying Failed Disks

# vxdisk list
DEVICE       TYPE            DISK        GROUP     STATUS
disk0_0      sliced          rootdisk    sysdg     online
disk0_1      auto:cdsdisk    datadg01    datadg    online
disk0_2      auto            -           -         error
disk0_3      auto            -           -         error
disk0_4      auto            -           -         error
-            -               datadg02    datadg    failed was:disk0_2

Identifying Failure: Disk Records

When VxVM detaches the disk, it breaks the mapping between the VxVM
disk's disk media record (datadg02) and the disk drive (disk0_2).
However, information in the disk media record, such as the disk media name, the
disk group, and the volumes, plexes, and subdisks on the VxVM disk, is
maintained in the configuration database in the active private regions of the disk
group.
The output of vxdisk list displays the failed drive as online until the
VxVM configuration daemon is forced to reread all the drives in the system and to
reset its tables.
To force the VxVM configuration daemon to reread all the drives in the system:

vxdctl enable

After you run this command, the drive status changes to error for the failed
drive, and the disk media record changes to failed. The disk is immediately
marked as error state when the public region is not accessible.


Permanent versus Temporary Failures

Temporary Failure
- Data on the LUN/disk is still there, only temporarily unavailable.
- When the hardware problem is resolved, in most cases recovery can make use
  of the pre-existing data.

Permanent Failure
- The data on the LUN/disk is completely destroyed.
- If the volumes were not redundant, data needs to be restored from backup.
- However, the VxVM objects and the disk group configuration information can
  be restored.

Disk Failure Types

The basic types of disk failure are permanent and temporary.
- Permanent disk failures are failures in which the data on the drive can no
  longer be accessed for any reason (that is, uncorrectable). In this case, the data
  on the disk is lost.
- Temporary disk failures are disk devices that have failures that are repaired
  some time later. This type of failure includes a drive that is powered off and
  back on, or a drive that has a loose SCSI connection that is fixed later. In these
  cases, the data is still on the disk, but it may not be synchronized with the other
  disks being actively used in a volume.


Recovering Disabled Disk Groups

Device Recovery
As soon as the hardware problem is resolved, the OS recognizes the disk array and
the disks. DMP automatically detects the change, adds the disk array to the
configuration, and enables the DMP paths. This may take up to 300 seconds. To
make it faster, you can execute the vxdctl enable command immediately
after resolving the hardware problem.
Relevant messages are logged to the system log. Solaris example:

June 13 12:06:25 train1 vxdmp: [ID 803759 kern.notice] NOTICE: VxVM vxdmp V-5-0-34 added disk array D60JODDA, datype = HDS9500-ALUA
June 13 12:06:25 train1 vxdmp: [ID 736771 kern.notice] NOTICE: VxVM vxdmp V-5-0-148 enabled path 32/0xa0 belonging to the dmpnode 253/0x10
June 13 12:06:25 train1 vxdmp: [ID 899070 kern.notice] NOTICE: VxVM vxdmp V-5-0-147 enabled dmpnode 253/0x10


Recovering From Temporary Disk Group Failures

The disks still have their private regions. Therefore, there is no need to recover
the disk group configuration data.
Recover the disk group as follows:
1. Unmount any disabled file systems in the disk group.
2. Deport the disk group.
3. Make sure that the DMP paths are enabled using: vxdisk -o alldgs list
4. Import the disk group.
5. Start the volumes in the disk group using: vxvol -g diskgroup startall
   Note that mirrored volumes may go through a synchronization process in the
   background if they were open at the time of the failure.
6. Carry out file system checks.
7. Mount the file systems.
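The steps above can be collected into one command sequence. This is a sketch with illustrative names (disk group datadg, volume datavol, mount point /data); Solaris-style fsck and mount syntax is assumed:

```shell
umount /data                                   # 1. unmount disabled file systems
vxdg deport datadg                             # 2. deport the disk group
vxdisk -o alldgs list                          # 3. confirm the DMP paths are enabled
vxdg import datadg                             # 4. import the disk group
vxvol -g datadg startall                       # 5. start the volumes
fsck -F vxfs /dev/vx/rdsk/datadg/datavol       # 6. file system check
mount -F vxfs /dev/vx/dsk/datadg/datavol /data # 7. remount
```

On Linux the file system type is selected with -t vxfs instead of -F vxfs.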


Recovering From Permanent Disk Group Failures

DMP recovery is again done automatically, as in temporary failures. However,
this time the disks do not have any private region that holds the disk group
configuration data.
After the DMP paths are enabled, recover the disk group as follows:
1. Unmount any disabled file systems in the disk group.
2. Deport the disk group. At this point all disk group information is lost except
   for the configuration backups.
3. Restore the disk group configuration data.
   Note that mirrored volumes will go through a synchronization process in the
   background.
4. Re-create the file systems if necessary.
5. Restore data from a backup.


Disk Group Configuration Backup and Restore

Back up:    vxconfigbackup diskgroup
Precommit:  vxconfigrestore -p diskgroup
Commit:     vxconfigrestore -c diskgroup

Protecting the VxVM Configuration

The disk group configuration backup and restoration feature enables you to back
up and restore all configuration data for disk groups, and for volumes that are
configured within the disk groups. The vxconfigbackupd daemon monitors
changes to the VxVM configuration and automatically records any configuration
changes that occur.
The vxconfigbackup utility is provided for backing up a VxVM
configuration for a disk group.
The vxconfigrestore utility is provided for restoring the configuration. The
restoration process has two stages: precommit and commit. In the precommit
stage, you can examine the configuration of the disk group that would be restored
from the backup. The actual disk group configuration is not permanently restored
until you choose to commit the changes.
By default, VxVM configuration data is automatically backed up to the files:

/etc/vx/cbr/bk/diskgroup.dgid/dgid.dginfo
/etc/vx/cbr/bk/diskgroup.dgid/dgid.diskinfo
/etc/vx/cbr/bk/diskgroup.dgid/dgid.binconfig
/etc/vx/cbr/bk/diskgroup.dgid/dgid.cfgrec

Configuration data from a backup enables you to reinstall private region headers of
VxVM disks in a disk group, re-create a corrupted disk group configuration, or
re-create a disk group and the VxVM objects within it.
This process is handled automatically by the vxconfigbackupd daemon.
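A hedged example of the backup and two-stage restore cycle for an illustrative disk group datadg:

```shell
vxconfigbackup datadg            # record the current configuration

# ...after a permanent failure, with replacement disks in place:
vxconfigrestore -p datadg        # precommit: stage the configuration
vxprint -g datadg -ht            # inspect what would be restored
vxconfigrestore -c datadg        # commit the restoration
# Assumption: vxconfigrestore -d datadg abandons a precommit;
# verify against the vxconfigrestore(1M) manual page.
```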


Disk Failure: Volume States After the Failure

datadg02 is the failed disk.

# vxprint -g datadg -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg datadg       default      default  64000    954250803.2005.train06

dm datadg01     disk0_1      auto:cdsdisk 1519 4152640  -
dm datadg02     -            -        -        -        NODEVICE

v  vol01        -            ENABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED  ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01 0        205200   0         disk0_1  ENA
pl vol01-02     vol01        DISABLED NODEVICE 205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02 0        205200   0         -        NDEV

v  vol02        -            DISABLED ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED NODEVICE 205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02 205200   205200   0         -        NDEV

Resolving Disk Failures

Volume States After the Failure
As soon as VxVM detects the disk failure, it detaches the disk media record from
the disk access record, and the corresponding plex states change to NODEVICE as
shown in the output above. At this point, VxVM does not differentiate between a
permanent failure and a temporary failure.


Disk Replacement Tasks

1. Physical Replacement: Replace the corrupt disk with a new disk.
2. Logical Replacement: Replace the disk in VxVM.
3. Recovery: Start disabled volumes. Resynchronize redundant volumes.

Replacing a failed or corrupted disk involves both physically replacing the disk
and then logically replacing the disk and recovering volumes in VxVM:
- Disk replacement: When a disk fails, you replace the corrupt disk with a new
  disk. The replacement disk cannot already be in a disk group. If you want to
  use a disk that exists in another disk group, then you must remove the disk
  from the disk group before you can use it as the replacement disk.
- Volume recovery: When a disk fails and is removed for replacement, the plex
  on the failed disk is disabled until the disk is replaced. Volume recovery
  involves starting disabled volumes, resynchronizing mirrors, and
  resynchronizing RAID-5 parity.
  After successful recovery, the volume is available for use again. Redundant
  (mirrored or RAID-5) volumes can be recovered by VxVM. Nonredundant
  (unmirrored) volumes must be restored from backup.


Physically Replacing a Disk

1. Connect the new disk.
2. Ensure that the operating system recognizes the disk.
3. Get VxVM to recognize the disk: vxdctl enable
4. Verify that VxVM recognizes the disk: vxdisk -o alldgs list

Note: In VEA, use Actions->Rescan to run disk setup commands appropriate for
the OS and ensure that VxVM recognizes newly attached hardware.

Adding a New Disk

1. Connect the new disk.
2. Get the operating system to recognize the disk:

   OS-Specific Commands to Recognize a Disk
   Platform   Commands
   Solaris    devfsadm
              prtvtoc /dev/dsk/device_name
   HP-UX      ioscan -fC disk
              insf -e
   AIX        cfgmgr
              lsdev -C -l device_name
   Linux      blockdev --rereadpt /dev/xxx

3. Get VxVM to recognize that a failed disk is now working again:
   vxdctl enable
4. Verify that VxVM recognizes the disk:
   vxdisk list

After the operating system and VxVM recognize the new disk, you can then use
the disk as a replacement disk.
Note: In VEA, use the Actions->Rescan option to run disk setup commands
appropriate for the operating system. This option ensures that VxVM recognizes
newly attached hardware.


Logically Replacing a Disk

VEA: Select the disk to be replaced. Select Actions->Replace Disk.
vxdiskadm: "Replace a failed or removed disk"
CLI: vxdg -k -g diskgroup adddisk disk_name=device_name

The -k option forces VxVM to take the disk media name of the failed disk and
assign it to the new disk. Use with caution.
Example: vxdg -k -g datadg adddisk datadg01=c1t1d0
Note: You may need to initialize the disk prior to running the vxdg
adddisk command: vxdisksetup -i device_name

Replacing a Disk: VEA

Select: the disk to be replaced
Navigation path: Actions->Replace Disk
Input: Select the disk to be used as the new (replacement) disk.
VxVM replaces the disk and attempts to recover volumes.

Replacing a Failed Disk: vxdiskadm

To replace a disk that has already failed or that has already been removed, you
select the "Replace a failed or removed disk" option. This process creates a public
and private region on the new disk and populates the private region with the disk
media name of the failed disk.

Replacing a Disk: CLI

The -k switch forces VxVM to take the disk media name of the failed disk and
assign it to the new disk. For example, if the failed disk datadg01 in the datadg
disk group was removed, and you want to add the new device c1t1d0 as the
replacement disk:

vxdg -k -g datadg adddisk datadg01=c1t1d0

Note: If the disk failure was temporary, the disk still has the private region that
would enable VxVM to recognize it. In this case you can use the vxreattach
command instead of the vxdg -k adddisk command to reattach the failed
disk.
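The CLI path can be sketched end to end; the device name c1t1d0 is illustrative:

```shell
# Replacing failed disk media "datadg01" with new device c1t1d0.
vxdisksetup -i c1t1d0                      # initialize the new disk
vxdg -k -g datadg adddisk datadg01=c1t1d0  # reuse the old disk media name
vxrecover -b -g datadg                     # resynchronize affected volumes

# If the failure was only temporary and the private region survived:
vxreattach -r                              # reattach and recover instead
```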

Recovering a Volume

VEA: Select the volume to be recovered. Select Actions->Recover Volume.
CLI:
vxreattach [-bcr] [device_tag]
  Reattaches disks to a disk group if a disk had a transient failure,
  such as when a drive is turned off and then turned back on.
  -r attempts to recover stale plexes using vxrecover.
vxrecover [-bnpsvV] [-g diskgroup] [volume_name | disk_name]
Example: vxrecover -b -g datadg datavol

The vxreattach Command

The vxreattach utility reattaches disks to a disk group and retains the same
media name. This command attempts to find the name of the drive in the private
region and to match it to a disk media record that is missing a disk access record.
This operation may be necessary if a disk has a transient failure, for example, if a
drive is turned off and then back on, or if the Volume Manager starts with some
disk drivers unloaded and unloadable.

The vxrecover Command

To perform volume recovery operations from the command line, you use the
vxrecover command. The vxrecover program performs plex attach, RAID-5
subdisk recovery, and resynchronize operations for specified volumes
(volume_name), or for volumes residing on specified disks (disk_name). You
can run vxrecover any time to resynchronize mirrors.
For example, after replacing the failed disk datadg01 in the datadg disk group
and adding the new disk c1t1d0s2 in its place, you can attempt to recover the
volume datavol:

vxrecover -bs -g datadg datavol

To recover, in the background, any detached subdisks or plexes that resulted from
replacement of the disk datadg01 in the datadg disk group:

vxrecover -b -g datadg datadg01

Resolving Disk Failures: Summary

Steps 1 through 3 are common to permanent and temporary disk failures:
1. Fix the hardware problem (replace disks, re-cable, change the HBA, and so on).
2. Ensure that the OS recognizes the device.
3. Force VxVM to scan for added devices: vxdctl enable

Permanent disk failure:
4-a. Initialize a new drive: vxdisksetup -i device_name
4-b. Attach the disk media name to the new drive:
     vxdg -g diskgroup -k adddisk disk_name=device_name

Temporary disk failure:
4. Reattach the disk media name to the disk access name: vxreattach

Steps 5 and 6 are again common:
5. Recover the redundant volumes: vxrecover
6. Start any non-redundant volumes: vxvol -g diskgroup start volume

Finally:
7. (Temporary) Check data for consistency:
   fsck -F vxfs /dev/vx/rdsk/diskgroup/volume
7. (Permanent) Restore non-redundant volume data from backup.

Disk failures can be resolved by following this process.
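The permanent and temporary cases can be read as one contrasted walkthrough. Names are illustrative (disk group datadg, disk media datadg01, device c1t1d0, volume datavol), and Solaris-style fsck syntax is assumed:

```shell
# After fixing the hardware and letting the OS see the device:
vxdctl enable                              # 3. force VxVM to rescan

# Permanent failure: a new, blank disk
vxdisksetup -i c1t1d0                      # 4-a. initialize the drive
vxdg -g datadg -k adddisk datadg01=c1t1d0  # 4-b. attach the media name

# Temporary failure: the same disk returned with its private region
vxreattach                                 # 4. reattach the media name

# Both cases
vxrecover -b -g datadg                     # 5. recover redundant volumes
vxvol -g datadg start datavol              # 6. start non-redundant volumes
fsck -F vxfs /dev/vx/rdsk/datadg/datavol   # 7. temporary: consistency check
# 7. permanent: restore non-redundant volume data from backup instead
```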


Disk Failure: Volume States After Attaching the Disk

# vxprint -g datadg -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg datadg       default      default  64000    954250803.2005.train06

dm datadg01     disk0_1      auto:cdsdisk 1519 4152640  -
dm datadg02     disk0_2      auto:cdsdisk 1519 4152640  -

v  vol01        -            ENABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED  ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01 0        205200   0         disk0_1  ENA
pl vol01-02     vol01        DISABLED IOFAIL   205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02 0        205200   0         disk0_2  ENA

v  vol02        -            DISABLED ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED RECOVER  205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02 205200   205200   0         disk0_2  ENA

Volume States After Attaching the Disk Media

After reattaching the disk, volume and plex states are as displayed above.
Notice the different states of vol01 and vol02. The vol01 volume can still
receive I/O and contains a plex in the IOFAIL state. This indicates that there was a
hardware failure underneath the plex while the plex was online.
Also notice that the only plex of vol02 has a state of RECOVER. This state
means that VxVM believes that the data in this plex needs to be recovered. In a
temporary disk failure, where the disk may have been turned off during an I/O
stream, the data on that disk may still be valid. Therefore, do not always interpret
the RECOVER state as meaning bad data on the disk.


Disk Failure: Volume States During Recovery

# vxprint -g datadg -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg datadg       default      default  64000    954250803.2005.train06

dm datadg01     disk0_1      auto:cdsdisk 1519 4152640  -
dm datadg02     disk0_2      auto:cdsdisk 1519 4152640  -

v  vol01        -            ENABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED  ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01 0        205200   0         disk0_1  ENA
pl vol01-02     vol01        ENABLED  STALE    205200   CONCAT    -        WO
sd datadg02-01  vol01-02     datadg02 0        205200   0         disk0_2  ENA

v  vol02        -            DISABLED ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED RECOVER  205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02 205200   205200   0         disk0_2  ENA

Volume States After Recovering Redundant Volumes

When you start the recovery on redundant volumes, the plex that is not
synchronized with the mirrored volume has a state of ENABLED and STALE.
During the period of synchronization, the stale plex is write-only (WO). After the
synchronization is complete, the plex state changes to ENABLED and ACTIVE,
and it becomes read-write (RW).


Intermittent Disk Failures

VxVM can mark a disk as failing if the disk is experiencing I/O failures but
is still accessible.

# vxdisk list
DEVICE       TYPE            DISK        GROUP     STATUS
disk0_1      auto:cdsdisk    datadg01    datadg    online failing
disk0_2      auto:cdsdisk    datadg02    datadg    online

Disks marked as failing are not used for any new volume space.
To resolve intermittent disk failure problems:
- If any volumes on the failing disk are not redundant, attempt to mirror those
  volumes:
  - If you can mirror the volumes, continue with the procedure for redundant
    volumes.
  - If you cannot mirror the volume, prepare for backup and restore.
- If the volume is mirrored:
  - Prevent read I/O from accessing the failing disk by changing the volume
    read policy.
  - Remove the failing disk.
  - Replace the disk.
  - Set the volume read policy back to the original policy.

Intermittent Disk Failure

Intermittent disk failures are failures that occur off and on and involve problems
that cannot be consistently reproduced. Therefore, these types of failures are the
most difficult for the operating system to handle and can cause the system to slow
down considerably while the operating system attempts to determine the nature of
the problem. If you encounter intermittent failures, you should move data off of the
disk and remove the disk from the system to avoid an unexpected failure later.
The method that you use to resolve intermittent disk failure depends on whether
the associated volumes are redundant or nonredundant.
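For the mirrored case, the read-policy steps can be sketched with the vxvol rdpol keywords. Names are illustrative (mirrored volume vol01 with healthy plex vol01-01, failing disk media datadg02), and the replacement device c2t3d4 is hypothetical:

```shell
vxvol -g datadg rdpol prefer vol01 vol01-01  # read only from the good plex
vxdg -g datadg -k rmdisk datadg02            # remove the failing disk
vxdisksetup -i c2t3d4                        # initialize the replacement
vxdg -g datadg -k adddisk datadg02=c2t3d4    # bring it in under the old name
vxrecover -b -g datadg vol01                 # resynchronize the mirror
vxvol -g datadg rdpol select vol01           # restore the default read policy
```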


Forced Removal

To forcibly remove a disk and not evacuate the data:
1. Use the vxdiskadm option, "Remove a disk for replacement." VxVM handles
   the drive as if it has already failed.
2. Use the vxdiskadm option, "Replace a failed or removed disk."

Using the command line:
vxdg -k -g diskgroup rmdisk disk_name
vxdisksetup -i new_device_name
vxdg -k -g diskgroup adddisk disk_name=new_device_name

If volumes are performing writes and each write is taking a long time to succeed
because of the intermittent failures, then the system may slow down significantly
and fall behind in its work. If this scenario occurs, you may need to forcibly
remove the disk and not evacuate the data:
1. Use the vxdiskadm option, "Remove a disk for replacement." With this
   option, VxVM treats the drive as though it has already failed.
   The problem with using this command is that all volumes that have only two
   mirrors (or that have a RAID-5 layout for redundancy) and that are using this
   drive are no longer redundant until you replace the drive. During this period, if
   a bad block occurs on the remaining disk, you cannot easily recover and may
   have to restore from backup. You must also restore all nonredundant volumes
   using the drive from backup.
2. After you remove the drive, you must replace the drive in the same way as
   when a drive completely fails. To replace a drive, you can use the vxdiskadm
   option, "Replace a failed or removed disk."

Note: The state of the disk is set to REMOVED when you use the vxdiskadm
option "Remove a disk for replacement." In terms of fixing the drive, the
REMOVED state is the same as NODEVICE. You must use the vxdiskadm option,
"Replace a failed or removed disk," to replace the drive.


What Is Hot Relocation?

Hot Relocation: The system automatically reacts to I/O failures on redundant
VxVM objects and restores redundancy to those objects by relocating affected
subdisks. Subdisks are relocated to disks designated as spare disks or to free
space in the disk group.

Managing Hot Relocation at the Host Level

Hot relocation is a feature of VxVM that enables a system to automatically react to
I/O failures on redundant (mirrored or RAID-5) VxVM objects and restore
redundancy and access to those objects. VxVM detects I/O failures on objects and
relocates the affected subdisks. The subdisks are relocated to disks designated as
spare disks or to free space within the disk group. VxVM then reconstructs the
objects that existed before the failure and makes them redundant and accessible
again.
Note: VxVM hot relocation is applicable when working with both physical disks
and hardware arrays. For example, even with hardware arrays, if you mirror a
volume across LUN arrays and one array becomes unusable, it is better to
reconstruct a new mirror using the remaining array than to do nothing.

Partial Disk Failure

When a partial disk failure occurs (that is, a failure affecting only some subdisks
on a disk), redundant data on the failed portion of the disk is relocated. Existing
volumes on the unaffected portions of the disk remain accessible. With partial disk
failure, the disk is not removed from VxVM control and is labeled as FAILING,
rather than as FAILED. Before removing a FAILING disk for replacement, you
must evacuate any remaining volumes on the disk.
Note: Hot relocation is only performed for redundant (mirrored or RAID-5)
subdisks on a failed disk. Nonredundant subdisks on a failed disk are not relocated,
but the system administrator is notified of the failure.

Hot-Relocation Process

1. vxrelocd detects the disk failure.
2. The administrator is notified by e-mail.
3. Subdisks are relocated to a spare.
4. Volume recovery is attempted.

How Does Hot Relocation Work?

The hot-relocation feature is enabled by default. No system administrator action is
needed to start hot relocation when a failure occurs.
The vxrelocd daemon starts during system startup and monitors VxVM for
failures involving disks, plexes, or RAID-5 subdisks. When a failure occurs,
vxrelocd triggers a hot-relocation attempt and notifies the system administrator,
through e-mail, of failures and any relocation and recovery actions.
The vxrelocd daemon is started from the S95vxvm-recover file (on Solaris),
the /etc/rc.d/rc2.d/S02vxvm-recover file (on Linux), or
/sbin/rc2.d/S096vxvm-recover (on HP-UX).
The argument to vxrelocd is the list of people to e-mail notice of a relocation
(the default is root). To disable vxrelocd, you can place a "#" in front of the
corresponding line in the start-up file.

A successful hot-relocation process involves:
- Failure detection: Detecting the failure of a disk, plex, or RAID-5 subdisk
- Notification: Notifying the system administrator and other designated users
  and identifying the affected Volume Manager objects
- Relocation: Determining which subdisks can be relocated, finding space for
  those subdisks, and relocating the subdisks. (The system administrator is
  notified of the success or failure of these actions. Hot relocation does not
  guarantee the same layout of data or the same performance after relocation.)
- Recovery: Initiating recovery procedures, if necessary, to restore the volumes
  and data. (Again, the system administrator is notified of the recovery attempt.)
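A sketch of adjusting the notification recipient, assuming the Solaris start-up script location given above:

```shell
# Find the line that starts the daemon in the recovery script.
grep vxrelocd /etc/rc2.d/S95vxvm-recover
#   vxrelocd root &
# Edit the recipient list on that line to change who is notified, e.g.:
#   vxrelocd root storageadmin &
# Commenting the line out with a leading "#" disables vxrelocd
# at the next boot.
```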


How Is Space Selected?


Hot relocation attempts to move all subdisks from a
failing drive to a single spare destination disk.
If no disks have been designated as spares, VxVM
uses any available free space in the disk group in
which the failure occurs.
If there is not enough spare disk space, a
combination of spare disk space and free space is
used.
Free space that you exclude from hot relocation is
not used.

How Is Space Selected for Relocation?

When relocating subdisks, VxVM attempts to select a destination disk with the
fewest differences from the failed disk. VxVM:
1  Attempts to relocate to the same controller, target, and device as the failed
   drive
2  Attempts to relocate to the same controller and target, but to a different
   device
3  Attempts to relocate to the same controller, but to any target and any device
4  Attempts to relocate to a different controller
5  Potentially scatters the subdisks to different disks

A spare disk must be initialized and placed in a disk group as a spare before it can
be used for replacement purposes.


Hot relocation attempts to move all subdisks from a failing drive to a single
spare destination disk, if possible.

If no disks have been designated as spares, VxVM automatically uses any
available free space in the disk group not currently on a disk used by the
volume. If there is not enough spare disk space, a combination of spare disk
space and free space is used. Free space that you exclude from hot relocation
is not used.

In all cases, hot relocation attempts to relocate subdisks to a spare in the same disk
group, which is physically closest to the failing or failed disk.

When hot relocation occurs, the failed subdisk is removed from the configuration
database. The disk space used by the failed subdisk is not recycled as free space.


Managing Spare Disks


VEA:
  Actions->Set Disk Usage
vxdiskadm:
  "Mark a disk as a spare for a disk group"
  "Turn off the spare flag on a disk"
  "Exclude a disk from hot-relocation use"
  "Make a disk available for hot-relocation use"
CLI:
  To designate a disk as a spare:
    vxedit -g diskgroup set spare=on|off diskname
  To exclude/include a disk for hot relocation:
    vxedit -g diskgroup set nohotuse=on|off diskname
  To force hot relocation to only use spare disks:
    Add spare=only to /etc/default/vxassist

Managing Spare Disks


When you add a disk to a disk group, you can specify that the disk be added to the
pool of spare disks available to the hot-relocation feature of VxVM. Any disk in
the same disk group can use the spare disk. Try to provide at least one
hot-relocation spare disk per disk group. While designated as a spare, a disk is not
used in creating volumes unless you specifically name the disk on the command line.
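As a concrete sketch of the CLI commands above, assuming a disk group named datadg with disks datadg01 and datadg02 (hypothetical names):

```shell
# Designate datadg02 as a hot-relocation spare:
vxedit -g datadg set spare=on datadg02

# Exclude the free space on datadg01 from hot-relocation use:
vxedit -g datadg set nohotuse=on datadg01

# Verify the flags; spare disks are marked in the vxdisk listing:
vxdisk -g datadg list
```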


Unrelocating a Disk

VEA:
  Select the disk to be unrelocated.
  Select Actions->Undo Hot Relocation.
vxdiskadm:
  "Unrelocate subdisks back to a disk"
CLI:
  vxunreloc [-f] [-n disk_name] [-g diskgroup] [-t tasktag] orig_disk_name

  orig_disk_name   Original disk before relocation
  -n disk_name     Unrelocates to a disk other than the original
  -f               Forces unrelocation if exact offsets are not possible

Unrelocating a Disk
The vxunreloc Utility
The hot-relocation feature detects I/O failures in a subdisk, relocates the subdisk,
and recovers the plex associated with the subdisk.
VxVM also provides a utility that unrelocates a disk; that is, it moves relocated
subdisks back to their original disk. After hot relocation moves subdisks from a
failed disk to other disks, you can return the relocated subdisks to their original
disk locations after the original disk is repaired or replaced.
Unrelocation is performed using the vxunreloc utility, which restores the system
to the same configuration that existed before a disk failure caused subdisks to be
relocated.
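For example, assuming subdisks were hot-relocated away from disk datadg01 in the datadg disk group (hypothetical names), the utility could be invoked as follows:

```shell
# Move the relocated subdisks back to the repaired or replaced disk:
vxunreloc -g datadg datadg01

# If the original offsets are no longer available, force unrelocation:
vxunreloc -g datadg -f datadg01

# To unrelocate to a different disk than the original:
vxunreloc -g datadg -n datadg03 datadg01
```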


Lesson Summary
Key Points
This lesson described how to interpret failures in
hardware, recover disabled disk groups, resolve
disk failures, and manage hot relocation at the host
level.
Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Release Notes

Lab 7
Lab 7: Resolving Hardware Problems
In this lab, you practice recovering from a
variety of hardware failure scenarios, resulting
in disabled disk groups and failed disks.
First you recover a temporarily disabled disk
group, and then you use a set of interactive
lab scripts to investigate and practice
recovery techniques.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 7: Resolving Hardware
Problems."
Appendix B provides complete lab instructions and solutions: "Lab 7 Solutions:
Resolving Hardware Problems."


Appendix A
Lab Exercises


Lab 1

Lab 1: Introducing the Lab Environment


In this lab, you are introduced to the lab
environment, system, and disks that you will use
throughout this course.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 1: Introducing the Lab Environment

In this lab, you are introduced to the lab environment, the systems, and disks that
you will use throughout this course. You will also record some prerequisite
information that will prepare you for the installation of VERITAS Storage
Foundation and the labs that follow throughout this course.
The Lab Solutions for this lab are located on the following page:
"Lab 1 Solutions: Introducing the Lab Environment"


Lab Environment Introduction


The instructor will describe the classroom environment, review the configuration
and layout of the systems, and assign disks for you to use. The content of this
activity depends on the type of classroom, hardware, and the operating system(s)
deployed.

Lab Prerequisites
Record the following information to be provided by your instructor:

Object                              Sample Value                    Your Value
root password                       veritas
Host name                           train1
Domain name                         classroom1.int
Fully qualified hostname (FQHN)     train1.classroom1.int
Host name of the system sharing
disks with my system
(my partner system)                 train2
My Boot Disk:                       Solaris: c0t0d0
                                    HP-UX: c1t15d0
                                    AIX: hdisk0
                                    Linux: hda
2nd Internal Disk:                  Solaris: c0t2d0
                                    HP-UX: c3t15d0
                                    AIX: hdisk1
                                    Linux: hdb
My Data Disks:                      Solaris: c1t#d0 - c1t#d5
                                    HP-UX: c4t0d0 - c4t0d5
                                    AIX: hdisk21 - hdisk26
                                    Linux: sda - sdf
Location of Storage Foundation
5.0 Software:                       /student/software/sf/sf50
Location of Lab Scripts:            /student/labs/sf/sf50
Location of the fp program:         /student/labs/sf/sf50/bin
Location of VERITAS Storage
Foundation license keys:            /student/software/license/sf50-entr-lic.txt


Instructor Classroom Setup

Perform the following steps to enable zoning configurations for the Storage
Foundation 5-day course (not required for High Availability Fundamentals):
1  Use the course_setup script: Select Classroom. (Setup scripts are all included
   in Classroom SAN configuration Version 2.)
   Select Function To Perform:
   1 - Select Zoning by Zone Name
   2 - Select Zoning and Hostgroup Configuration by Course Name
   3 - Select/Check Hostgroup Configuration
2  Select option 3 - Select/Check Hostgroup Configuration.
   Select HostGroup Configuration to be Configured:
   1 - Standard Mode: 2 or 4 node sharing, 1 path, No DMP
   2 - DMP Mode: 2 node sharing, switchable between 1 and 2 path access
   3 - Check active HDS Hostgroup Configuration
3  Select option 2 - DMP Mode. Wait and do not respond to prompts.
4  Exit to the first-level menu.
5  Select option 1 - Select Zoning by Zone Name.
   Select Zoning Configuration Required:
   1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available
       (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs)
   2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available
       (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs)
6  Select option 1 - Mode 1 (single path to 12 LUNs).
7  Select option 4 - Solaris as the OS.
8  Exit out of the course_setup script.
9  Reboot each system using reboot -- -r.
Lab 2

Lab 2: Installation and Interfaces


In this lab, you install VERITAS Storage Foundation 5.0 on your lab system. You
also explore the Storage Foundation user interfaces, including the VERITAS
Enterprise Administrator interface, the vxdiskadm menu interface, and the
command-line interface.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 2: Installation and Interfaces

In this exercise, you install VERITAS Storage Foundation 5.0 on your lab system.
You also explore the VxVM user interfaces, including the VERITAS Enterprise
Administrator interface, the vxdiskadm menu interface, and the command-line
interface.
The Lab Solutions for this lab are located on the following page:
"Lab 2 Solutions: Installation and Interfaces"

Prerequisite Setup
To perform this lab, you need a lab system with the appropriate operating system
and patch sets pre-installed. At this point there should be no Storage Foundation
software installed on the lab system. The lab steps assume that the system has
access to the Storage Foundation 5.0 software and that you have a Storage
Foundation 5.0 Enterprise demo license key that can be used during installation.


Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                              Sample Value                    Your Value
root password                       veritas
Host name                           train1
Domain name                         classroom1.int
Fully qualified hostname (FQHN)     train1.classroom1.int
My Boot Disk:                       Solaris: c0t0d0
                                    HP-UX: c1t15d0
                                    AIX: hdisk0
                                    Linux: hda
Location of Storage Foundation
5.0 Software:                       /student/software/sf/sf50
Location of VERITAS Storage
Foundation license keys:            /student/software/license/sf50-entr-lic.txt
Location of Lab Scripts:            /student/labs/sf/sf50


Preinstallation
1  Determine if there are any VRTS or SYMC packages currently installed on
   your system.
2  Before installing Storage Foundation, save the following important system
   files into backup files named with a ".preVM" extension. Also, save your
   boot disk information to a file for later use (do not store the file in /tmp). You
   may need the boot disk information when you bring the boot disk under VxVM
   control in a later lab.
3  Are any VERITAS license keys installed on your system? Check for installed
   licenses.
4  To test if DNS is configured in your environment, check if nslookup
   resolves the hostname to a fully qualified hostname by typing nslookup
   hostname. If there is no DNS or if the hostname cannot be resolved to a fully
   qualified hostname, carry out the following steps:
   a  Ensure that the fully qualified hostname is listed in the /etc/hosts
      file. For example:
      cat /etc/hosts
      192.168.xxx.yyy train#.domain train#
      where domain is the domain name used in the classroom, such as
      classroom1.int.
      If the fully qualified hostname is not in the /etc/hosts file, add it as an
      alias to hostname.
   b  Change to the directory containing lab scripts and execute the prepare_ns
      script. This script ensures that your lab system only uses local files for
      name resolution.
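The checks in steps 1 and 3 can be performed with commands such as the following; this is a sketch, and the package tool depends on your platform:

```shell
# Step 1: look for existing VRTS/SYMC packages.
pkginfo | egrep 'VRTS|SYMC'      # Solaris
rpm -qa | egrep '^VRTS|^SYMC'    # Linux

# Step 3: report any installed VERITAS license keys
# (vxlicrep is only present if the licensing package is installed):
vxlicrep
```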


Installing VERITAS Storage Foundation


1  Navigate to the directory containing the Veritas Storage Foundation software.
   Ask your instructor for the location of the installer script. Using the
   installer script, run a precheck to determine if your system meets all
   preinstallation requirements. If any requirements (other than the license
   software not being installed) are not met, follow the instructions to take any
   required actions before you continue. Note that you can look into the log file
   created to see the details of the checks the script performs.

2  Navigate to the directory containing the Veritas Storage Foundation software.
   Install and perform initial configuration of Storage Foundation (VxVM and
   VxFS) using the following steps:
   a  Start the installer script.
   b  Select 1 for the Install/Upgrade a Product option.
   c  Select the Veritas Storage Foundation software to install. On the HP-UX
      platform, confirm that you wish to continue the installation of this version.
   d  Enter the name of your system when prompted.
   e  Obtain a license key from your instructor and record it here. Type the
      license key when prompted.
      License Key:
   f  Enter n when you are asked if you want to enter another license key.
   g  Select to install All Veritas Storage Foundation packages when prompted.
      Press Return to scroll through the list of packages.
   h  Accept the default of y to configure SF.
      HP-UX: On the HP-UX platform, the installer script starts the software
      installation without asking any configuration questions. When the
      software installation is complete, it prompts you to reboot your system.
      Continue with the configuration using ./installer -configure
      after the system is rebooted.
   i  Do not set up enclosure-based naming for Volume Manager.
   j  Do not set up a default disk group.
   k  If an error message is displayed that the fully-qualified host name could not
      be queried, press return to continue.
   l  Obtain the domain name from your instructor and type the fully qualified
      host name of your system when prompted. For example:
      train5.classroom1.int
   m  Do not enable Storage Foundation Management Server Management. The
      system will be a standalone host.
   n  Select y to start Storage Foundation processes. Wait while the installation
      proceeds and processes are started. When the installation script completes,
      you will be asked to reboot your system. Perform the next lab step (lab
      step 3) to modify the root profile before rebooting your system.

   This step is only for the North American Mobile Academy lab environment. If
   you are working in a different lab environment, skip this step.
   If you are working in a North American Mobile Academy lab environment
   with iSCSI disk devices, change to the directory containing the lab scripts
   and execute the iscsi_setup lab script. This script disables DMP
   support for iSCSI disks so that they can be recognized correctly by Volume
   Manager.
   Only if you are working in a North American Mobile Academy lab
   environment:
   cd /location_of_lab_scripts
   ./iscsi_setup

3  Check in /.profile to ensure that the following paths are present.
   Note: Your lab systems may already be configured with these environment
   variable settings. However, in a real-life environment you would need to carry
   out this step yourself.

4  Reboot your system.


Setting Up VERITAS Enterprise Administrator


1  Is the VEA server running? If not, start it.
2  Start the VEA graphical user interface.
   Note: On some systems, you may need to configure the system to use the
   appropriate display. For example, if the display is pc1:0, before you run
   VEA, type:
   DISPLAY=pc1:0
   export DISPLAY
   It is also important that the display itself is configured to accept connections
   from your client. If you receive permission errors when you try to start VEA,
   in a terminal window on the display system, type:
   xhost system or xhost +
   where system is the hostname of the client on which you are running the
   vea command.
3  In the Select Profile window, click the Manage Profiles button and configure
   VEA to always start with the Default profile.
4  Click the "Connect to a Host or Domain" link and connect to your system as
   root. Your instructor provides you with the password.
5  On the left pane (object tree) view, drill down the system and observe the
   various categories of VxVM objects.
6  Select the Assistant perspective on the quick access bar and view tasks for
   systemname/StorageAgent.
7  Using the System perspective, find out what disks are available to the OS.
8  Execute the Disk Scan command and observe the messages on the console
   view. Click on a message to see the details.
9  What commands were executed by the Disk Scan task?
10 Exit the VEA graphical interface.
11 Create a root-equivalent administrative account named admin1 for use of
   VEA.
12 Test the new account. After you have tested the new account, exit VEA.

Exploring vxdiskadm
1  From the command line, invoke the text-based VxVM menu interface.
2  Display information about the menu or about specific commands.
3  What disks are available to the OS?
4  Exit the vxdiskadm interface.

Optional Lab: Accessing CLI Commands


Note: This exercise introduces several commonly used VxVM commands. These
commands and associated concepts are explained in detail throughout this course.
If you have used Volume Manager before, you may already be familiar with these
commands. If you are new to Volume Manager, this exercise aims to show you the
amount of information you can get from the manual pages. Note that you do not
need to read all of the manual pages for this exercise.
1  From the command line, invoke the VxVM manual pages and read about the
   vxassist command. What vxassist command parameter creates a
   VxVM volume?
2  From the command line, invoke the VxVM manual pages and read about the
   vxdisk command. What disks are available to VxVM?
3  From the command line, invoke the VxVM manual pages and read about the
   vxdg command. How do you list locally imported disk groups?
4  From the command line, invoke the VxVM manual pages and read about the
   vxprint command.

Optional Lab: More Installation Exploration

1  When does the VxVM license expire?
2  What is the version and revision number of the installed version of VxVM?
3  Which daemons are running after the system boots under VxVM control?
Lab 3

Lab 3: Creating a Volume and File System


In this lab, you create new disk groups, simple
volumes, and file systems, mount and
unmount the file systems, and observe the
volume and disk properties.
The first exercise uses the VEA interface. The
second exercise uses the command-line
interface.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 3: Creating a Volume and File System

In this lab, you create new disk groups, simple volumes, and file systems, mount
and unmount the file systems, and observe the volume and disk properties. The
first exercise uses the VEA interface. The second exercise uses the command-line
interface.
The Lab Solutions for this lab are located on the following page:
"Lab 3 Solutions: Creating a Volume and File System"

If you use object names other than the ones provided, substitute the names
accordingly in the commands.
Caution: In this lab, do not include the boot disk in any of the tasks.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition to this, you also need four empty and unused
external disks to be used during the labs.
Note: Although you should not have to perform disk labeling, here are some tips
that may help if your disks are not properly formatted:
On Solaris, use the format command to place a label on any disks that are not
properly labeled for use under Solaris. Ask the instructor for details.
On Linux, if you have problems initializing a disk, you may need to run this
command: fdisk /dev/disk.
Use options o and w to write a new DOS partition table. (The disk may have
previously been used with Solaris.)

Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                              Sample Value                    Your Value
root password                       veritas
Host name                           train1
My Data Disks:                      Solaris: c1t#d0 - c1t#d5
                                    HP-UX: c4t0d0 - c4t0d5
                                    AIX: hdisk21 - hdisk26
                                    Linux: sda - sdf
Prefix to be used with
object names                        name

Creating a Volume and File System: VEA


1  Run and log on to the VEA interface as the root user.
2  View all the disk devices on the system. What is the status of the disks assigned
   to you for the labs?
3  Select an uninitialized disk and initialize it using the VEA. Observe the change
   in the Status column. What is the status of the disk now?
4  Create a new disk group using the disk you initialized in the previous step.
   Name the new disk group namedg1. Observe the change in the disk status.
   Note: If you are sharing a disk array, make sure that the prefix you are using
   for the disk group names is unique.
5  Using VEA, create a new volume of size 1g in namedg1. Name the new
   volume namevol1. Create a file system on it and make sure that the file
   system is mounted at boot time to the /name1 directory.
6  Check if the file system is mounted and verify that there is an entry for this file
   system in the file system table.
7  View the properties of the disk in the namedg1 disk group and note the
   Capacity and the Unallocated space fields.
8  Try to create a second volume, namevol2, in namedg1 and specify a
   size slightly larger than the unallocated space on the existing disk in the disk
   group, for example 4g in the standard Symantec classroom systems. Do not
   create a file system on the volume. What happens?
9  Add a disk to the namedg1 disk group.
10 Create the same volume, namevol2, in the namedg1 disk group using the
   same size as in step 8. Do not create a file system.
11 Observe the volumes by selecting the Volumes object in the object tree. Can
   you tell which volume has a mounted file system?
12 Create a VxFS file system on namevol2 and mount it to the /name2
   directory. Ensure that the file system is not mounted at boot time. Check if the
   /name2 file system is currently mounted and verify that it has not been added
   to the file system table.
13 Observe the commands that were executed by VEA during this section of the
   lab.

Creating a Volume and File System: CLI


1  View all the disk devices on the system. What is the status of the disks assigned
   to you for the labs?
2  Select an uninitialized disk and initialize it using the CLI. Observe the change
   in the Status column. What is the status of the disk now?
3  Create a new disk group using the disk you initialized in the previous step.
   Name the new disk group namedg2. Observe the change in the disk status.
   Note: If you are sharing a disk array, make sure that the prefix you are using
   for the disk group names is unique.
4  Using the vxassist command, create a new volume of size 1g in namedg2.
   Name the new volume namevol3.
5  Create a Veritas file system on the namevol3 volume, and mount the file
   system to the /name3 directory. Make sure that the file system is mounted at
   boot time.
6  Unmount the /name3 file system, verify the unmount, and remount using the
   mount -a command to mount all file systems in the file system table.
7  Identify the amount of free space in the namedg2 disk group. Try to create a
   volume in this disk group named namevol4 with a size slightly larger than
   the available free space, for example 5g on standard Symantec classroom
   systems. What happens?
   Note: The disk sizes in Symantec Virtual Academy lab environments are
   slightly less than 2g. Ensure that you use the correct value suitable to your
   environment instead of the 5g example used here.
8  Initialize a new disk and add it to the namedg2 disk group. Observe the
   change in free space.
9  Create the same volume, namevol4, in the namedg2 disk group using the
   same size as in step 7.
10 Display volume information for the namedg2 disk group using the
   vxprint -g namedg2 -htr command. Can you identify which disks are
   used for which volumes?
11 List the disk groups on your system using the vxdg list command.
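One possible command sequence for steps 2-5, shown for Solaris (the device name c1t1d0 is a placeholder for one of your assigned data disks):

```shell
vxdisksetup -i c1t1d0                       # initialize the disk for VxVM
vxdg init namedg2 namedg201=c1t1d0          # create the disk group
vxassist -g namedg2 make namevol3 1g        # create a 1-GB volume

mkfs -F vxfs /dev/vx/rdsk/namedg2/namevol3  # create a VxFS file system
mkdir -p /name3
mount -F vxfs /dev/vx/dsk/namedg2/namevol3 /name3
# To mount at boot time, also add an entry for the file system to /etc/vfstab.
```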

Removing Volumes, Disks and Disk Groups: eLl


1  Unmount the /name3 file system and remove it from the file system table.
2  Remove the namevol4 volume in the namedg2 disk group. Observe the disk
   group configuration information using the vxprint -g namedg2 -htr
   command.
3  Remove the second disk (namedg202) from the namedg2 disk group.
   Observe the change in its status.
4  Destroy the namedg2 disk group.
5  Observe the status of the disk devices on the system.
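The removal steps can be sketched as follows (Solaris syntax; object names are those used in the lab):

```shell
umount /name3                               # step 1; also remove the /etc/vfstab entry
vxassist -g namedg2 remove volume namevol4  # step 2
vxprint -g namedg2 -htr                     # confirm the volume is gone
vxdg -g namedg2 rmdisk namedg202            # step 3: remove the second disk
vxdg destroy namedg2                        # step 4
vxdisk -o alldgs list                       # step 5: disks initialized, in no group
```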

Removing Volumes, Disks and Disk Groups: VEA


1  Unmount both /name1 and /name2 file systems using VEA. Accept to
   remove the file systems from the file system table if prompted. Check if the file
   systems are unmounted and verify that any corresponding entries have been
   removed from the file system table.
   a  Select the File Systems node in the object tree and select the /name1 file
      system.
   b  Select Actions->Unmount File System.
   c  Confirm the unmount and select Yes when prompted to remove it
      from the file system table.
   d  Select the /name2 file system. Select Actions->Unmount File System.
      Confirm the unmount.
   Both file systems should disappear from the file system list in VEA. You
   can use the command line to verify the changes as follows:
   Solaris:       mount    cat /etc/vfstab
   HP-UX, Linux:  mount    cat /etc/fstab
   The /name1 and /name2 file systems should not be among the mounted
   file systems, and the file system table should not contain any entries
   corresponding to these file systems.
2  Remove the namevol2 volume in the namedg1 disk group.
   a  Select the Volumes node in the object tree and select the namevol2
      volume.
   b  Select Actions->Delete Volume. Confirm when prompted.
3  Select the Disk Groups node in the object tree and observe the disks in the
   namedg1 disk group. Can you identify which disk is empty?
   The %Used column should show 0% for the unused disk, which is the
   second disk in the disk group (namedg102).
4  Remove the disk you identified as empty from the namedg1 disk group.
   Select the empty disk and select Actions->Remove Disk From Disk
   Group.
5  Observe all the disks on the system. What is the status of the disk you removed
   from the disk group?
   Select the Disks node in the object tree and observe the disks in the right
   pane view. The disk removed in step 4 should be in Free state.
6  Destroy the namedg1 disk group.
   a  Select the Disk Groups node in the object tree and the namedg1 disk
      group in the right pane view.
   b  Select Actions->Destroy Disk Group. Confirm when prompted.
7  Observe all the disks on the system. What is the status of the disks?
   Select the Disks node in the object tree and observe the disks in the right
   pane view. If you have followed all the lab steps, you should have 4 disks in
   Free state; they are already initialized but not in a disk group.

Lab 4

Lab 4: Selecting Volume Layouts


In this lab, you create simple concatenated
volumes, striped volumes, and mirrored
volumes.
You also practice creating a layered volume and
using ordered allocation while creating volumes.

For Lab Exercises, see Appendix A.


For Lab Solutions, see Appendix B.

Lab 4: Selecting Volume Layouts

In this lab, you create simple concatenated volumes, striped volumes, and mirrored
volumes. You also practice creating a layered volume and using ordered
allocation while creating volumes.
The Lab Solutions for this lab are located on the following page:
"Lab 4 Solutions: Selecting Volume Layouts"

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition to this, you also need four empty and unused
external disks to be used during the labs.

Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                              Sample Value                    Your Value
root password                       veritas
Host name                           train1
My Data Disks:                      Solaris: c1t#d0 - c1t#d5
                                    HP-UX: c4t0d0 - c4t0d5
                                    AIX: hdisk21 - hdisk26
                                    Linux: sda - sdf
Prefix to be used with
object names                        name

Creating Volumes with Different Layouts: CLI


1  Add four initialized disks to a disk group called namedg. Verify your action
   using vxdisk -o alldgs list.
   Note: If you are sharing a disk array, make sure that the prefix you are using
   for the disk group name is unique.
2  Create a 50-MB concatenated volume in the namedg disk group called
   namevol1 with one drive.
3  Display the volume layout. What names have been assigned to the plex and
   subdisks?
4  Remove the volume.
5  Create a 50-MB striped volume on two disks in namedg and specify which
   two disks to use in creating the volume. Name the volume namevol2.
   What names have been assigned to the plex and subdisks?
6  Create a 20-MB, two-column striped volume with a mirror in namedg. Set the
   stripe unit size to 256K. Name the volume namevol3.
7  Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
   size to 128K. Select at least one disk that you should not use. Name the volume
   namevol4.
   Was the volume created?
8  Create a 20-MB, 3-column striped volume with a mirror. Specify three disks to
   be used during volume creation. Name the volume namevol4.
   Was the volume created?
9  Create the same volume specified in the previous step, but without the mirror.
   What names have been assigned to the plex and subdisks?
10 Remove the volumes created in this exercise.
11 Remove the disk group that was used in this exercise.
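The volume layouts in this exercise map to vxassist attributes roughly as follows; this is a sketch, and the disk names namedg01-namedg02 are placeholders for your assigned disks:

```shell
vxassist -g namedg make namevol1 50m namedg01              # concatenated, one drive
vxassist -g namedg make namevol2 50m layout=stripe \
    ncol=2 namedg01 namedg02                               # striped across two disks
vxassist -g namedg make namevol3 20m layout=stripe,mirror \
    ncol=2 stripeunit=256k                                 # mirrored two-column stripe
vxassist -g namedg remove volume namevol3                  # remove a volume
vxdg destroy namedg                                        # remove the disk group
```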


Creating Volumes with Different Layouts: VEA

1  If you had exited out of VEA, start it and connect back to your system.
2  Add four initialized disks to a disk group called namedg. Verify your action in
   the main window.
3  Create a 50-MB concatenated volume in the namedg disk group called
   namevol1 with one drive.
4  Display the volume layout. Notice the naming convention of the plex and
   subdisk.
5  Remove the volume.
6  Create a 50-MB striped volume on two disks in namedg, and specify which
   two disks to use in creating the volume. Name the volume namevol2.
   View the volume.
7  Create a 20-MB, two-column striped volume with a mirror in namedg. Set the
   stripe unit size to 256K. Name the volume namevol3.
   View the volume. Notice that you now have a second plex.
8  Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
   size to 128K. Select at least one disk you should not use. Name the volume
   namevol4.
   Was the volume created?
9  Create a 20-MB, three-column striped volume with a mirror. Specify three
   disks to be used during volume creation. Name the volume namevol4.
   Was the volume created?
10 Create the same volume specified in step 9, but without the mirror.
   Note: If you did not cancel out of the previous step, then just uncheck the
   mirrored option and continue the wizard.
   Was the volume created?
11 Delete all volumes in the namedg disk group.


12 View the commands executed by VEA during this section of the lab.

Creating Layered Volumes

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.
1  First, ensure that any volumes created in the previous labs are removed from
   the namedg disk group.
2  Create a 100-MB Striped Mirrored volume with no logging. Name the volume
   namevol1.
3  Create a Concatenated Mirrored volume with no logging called namevol2.
   The size of the volume should be greater than the size of the largest disk in the
   disk group; for example, if your largest disk is 4 GB, then create a 6-GB
   volume.
   Note: If you are working in the Virtual Academy (VA) lab environment, your
   largest disk will have a size of 2 GB. In this environment, you can use a 3-GB
   volume size.
4  If you are using VEA, view the commands executed by VEA to create the
   namevol2 volume during this section of the lab.
5  View the volumes and compare the layouts.
6  Remove all of the volumes in the namedg disk group.
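For the CLI path, the layered layouts above correspond to the vxassist layout names stripe-mirror and concat-mirror. A minimal sketch, assuming a 3-GB size for the second volume (the size you actually need depends on your largest disk):

    # 100-MB striped-mirrored (layered) volume with no logging
    vxassist -g namedg make namevol1 100m layout=stripe-mirror,nolog

    # Concatenated-mirrored volume larger than any single disk in the group
    vxassist -g namedg make namevol2 3g layout=concat-mirror,nolog

    # Compare the layouts
    vxprint -g namedg -htr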


Using Ordered Allocation While Creating Volumes

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.
1  Create a 20-MB, two-column striped volume with a mirror in the namedg disk
   group. Name the volume namevol1.
2  Display the volume layout. How are the disks allocated in the volume? Which
   disk devices are used?
3  Remove the volume you just made, and re-create it by specifying the four
   disks in an order different from the original layout. Use the command line to
   create the volume in this step.
4  Display the volume layout. How are the disks allocated this time?
5  Remove all of the volumes in the namedg disk group.
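From the command line, step 3 uses the vxassist -o ordered option, which makes Volume Manager consume the listed disks strictly in the order given (columns first, then mirrors). The disk media names below are placeholders for your lab values:

    # Re-create the striped mirror, forcing a specific disk order
    vxassist -g namedg -o ordered make namevol1 20m \
        layout=stripe,mirror ncol=2 namedg03 namedg04 namedg01 namedg02
    vxprint -g namedg -htr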


Optional Lab: Creating Volumes with User Defaults: CLI

This optional guided practice illustrates how to use the files:
/etc/default/vxassist
/etc/default/alt_vxassist
to create volumes with defaults specified by the user. Note that some of the default
values may not apply to VEA because VEA uses explicit values for number of
columns, stripe unit size, and number of mirrors while creating striped and
mirrored volumes.
1  Create two files in /etc/default:
   a  Using the vi editor, create a file called vxassist that includes the
      following:
      # when mirroring create three mirrors
      nmirror=3
   b  Using the vi editor, create a file called alt_vxassist that includes the
      following:
      # use 256K as the default stripe unit size for
      # regular volumes
      stripeunit=256k
2  Use these files when creating the following volumes:
   a  Create a 100-MB volume called namevol1 using layout=mirror.
   b  Create a 100-MB, two-column stripe volume called namevol2 using
      -d alt_vxassist so that Volume Manager uses the default file.
3  View the layout of these volumes using VEA or by using
   vxprint -g namedg -htr. What do you notice?
4  Remove any vxassist default files that you created in this optional lab
   section. The presence of these files can impact subsequent labs where default
   behavior is assumed.
5  Remove all of the volumes in the namedg disk group.
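A hedged sketch of step 2; per vxassist, -d names an alternate defaults file, and the paths shown are the ones this exercise creates:

    # 2a: /etc/default/vxassist is read automatically, so this volume
    #     picks up nmirror=3
    vxassist -g namedg make namevol1 100m layout=mirror

    # 2b: -d points vxassist at the alternate defaults file, so this volume
    #     picks up stripeunit=256k
    vxassist -d /etc/default/alt_vxassist -g namedg make namevol2 100m \
        layout=stripe ncol=2

    vxprint -g namedg -htr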


Lab 5
Lab 5: Making Basic Configuration Changes

This lab provides practice in making basic configuration changes.
In this lab, you add mirrors and logs to existing volumes, and change the volume
read policy. You also resize volumes, rename disk groups, and move data between
systems.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 5: Making Basic Configuration Changes
This lab provides practice in making basic configuration changes. In this lab, you
add mirrors and logs to existing volumes, and change the volume read policy. You
also resize volumes, rename disk groups, and move data between systems.
The Lab Solutions for the following lab are located on the following page:
"Lab 5 Solutions: Making Basic Configuration Changes," page B-47.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition to this, you also need four external disks to
be used during the labs.
At the beginning of this lab, you should have a disk group called namedg that has
four external disks and no volumes in it.

Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                         Sample Value                Your Value
root password                  veritas
Host name                      train1
Host name of the system        train2
sharing disks with my system
(my partner system)
My Data Disks:                 Solaris: c1t#d0 - c1t#d5
                               HP-UX: c4t0d0 - c4t0d5
                               AIX: hdisk21 - hdisk26
                               Linux: sda - sdf
2nd Internal Disk:             Solaris: c0t2d0
                               HP-UX: c3t15d0
                               AIX: hdisk1
                               Linux: hdb
Location of Lab Scripts        /student/labs/sf/sf50
(if any):
Prefix to be used with         name
object names

Administering Mirrored Volumes

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.
1  Ensure that you have a disk group called namedg with four disks in it. If not,
   create the disk group using four disks.
   Note: If you have completed the previous lab steps, you should already have
   the namedg disk group with four disks and no volumes.
2  Create a 50-MB, two-column striped volume called namevol1 in namedg.
3  Display the volume layout. How are the disks allocated in the volume? Note
   the disk devices used for the volume.
4  Add a mirror to namevol1, and display the volume layout. What is the layout
   of the second plex? Which disks are used for the second plex?
5  Add a dirty region log to namevol1 and specify the disk to use for the DRL.
   Display the volume layout.
6  Add a second dirty region log to namevol1 and specify another disk to use
   for the DRL. Display the volume layout.
7  Remove the first dirty region log that you added to the volume. Display the
   volume layout. Can you control which log was removed?
8  Find out what the current volume read policy for namevol1 is. Change the
   volume read policy to round robin, and display the volume layout.
9  Remove the original mirror (namevol1-01) from namevol1, and display
   the volume layout.
10 Remove namevol1.
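The operations in this exercise map to the following CLI sketch; the disk media name namedg03 and the plex name namevol1-01 are illustrative and should be replaced with the names vxprint shows on your system:

    # Add a mirror, then a dirty region log on a chosen disk
    vxassist -g namedg mirror namevol1
    vxassist -g namedg addlog namevol1 logtype=drl namedg03

    # Change the volume read policy to round robin
    vxvol -g namedg rdpol round namevol1

    # Remove one DRL log, then dissociate and remove the original plex
    vxassist -g namedg remove log namevol1
    vxplex -g namedg -o rm dis namevol1-01
    vxprint -g namedg -htr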

Resizing a Volume
You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
1  If you have not already done so, remove the volumes created in the previous
   lab in namedg.
2  Create a 20-MB concatenated mirrored volume called namevol1 in namedg.
3  Create a Veritas file system on the volume and mount it to /name1. Make
   sure that the file system is not added to the file system table.
4  View the layout of the volume and display the size of the file system.
5  Add data to the volume by creating a file in the file system and verify that the
   file has been added.
6  Expand the file system and volume to 100 MB. Observe the volume layout to
   see the change in size. Display the file system size.
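When the volume carries a mounted VxFS file system, vxresize grows the volume and the file system together, which is what step 6 asks for. A minimal sketch:

    # Grow namevol1 and its mounted VxFS file system to 100 MB in one step
    vxresize -F vxfs -g namedg namevol1 100m
    df -k /name1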

Resizing a File System Only: CLI

Note: This exercise should be performed using the command line interface
because the VEA does not allow you to create a file system smaller in size than
the underlying volume. You also cannot change the size of the volume and the
file system separately using the GUI.
1  Create a 50-MB concatenated volume named namevol2 in the namedg disk
   group.
2  Create a Veritas file system on the volume by using the mkfs command.
   Specify the file system size as 40 MB.
3  Create a mount point /name2 on which to mount the file system, if it does
   not already exist.
4  Mount the newly created file system on the mount point /name2.
5  Verify disk space using the df command (or the bdf command on HP-UX).
   Observe that the total size of the file system is smaller than the size of the
   volume.
6  Expand the file system to the full size of the underlying volume using the
   fsadm -b newsize option.
7  Verify disk space using the df command (or the bdf command on HP-UX).
8  Make a file on the file system mounted at /name2, so that the free space is
   less than 50 percent of the total file system size.
9  Shrink the file system to 50 percent of its current size. What happens?
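A hedged sketch of the mkfs and fsadm steps. The sizes here are given in 512-byte sectors, which is the usual default unit for these commands (40 MB = 81920 sectors, 50 MB = 102400 sectors), and the Solaris-style -F option becomes -t on Linux:

    # 40-MB VxFS file system on the 50-MB volume
    mkfs -F vxfs /dev/vx/rdsk/namedg/namevol2 81920
    mkdir -p /name2
    mount -F vxfs /dev/vx/dsk/namedg/namevol2 /name2
    df -k /name2

    # Grow the file system (only) to the full size of the volume
    fsadm -F vxfs -b 102400 /name2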

Renaming a Disk Group

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
1  Try to rename the namedg disk group to namedg1 while the /name1 and
   /name2 file systems are still mounted. Can you do it?
2  Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories
   and their subdirectories. What do you see?
3  Unmount all the mounted file systems in the namedg disk group.
4  Rename the namedg disk group to namedg1. Do not forget to start the
   volumes in the disk group after the renaming if you are using the command
   line interface.
5  Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories
   and their subdirectories. What has changed?
6  Observe the disk media names. Is there any change?
7  Mount the /name1 and /name2 file systems, and observe their contents.
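At the command line, the rename can be done by deporting the group under a new name and then re-importing it; the volumes must be restarted afterward, as step 4 notes:

    umount /name1
    umount /name2
    vxdg -n namedg1 deport namedg
    vxdg import namedg1
    vxvol -g namedg1 startall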

Moving Data Between Systems

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: If you are sharing a disk array, each participant should make sure that the
prefix used for object names is unique.
1  Copy new data to the /name1 and /name2 file systems. For example, copy
   the /etc/hosts file to /name1 and the /etc/group file to /name2.
2  View all the disk devices on the system.
3  Unmount all file systems in the namedg1 disk group and deport the disk
   group. Do not give it a new owner. View all the disk devices on the system.
4  Identify the name of the system that is sharing access to the same disks as
   your system. If you are not sure, check with your instructor. Note the name of
   the partner system here.
   Partner system hostname:
5  Using the command line interface, perform the following steps on your
   partner system:
   Note: If you are working on a standalone system, skip step a in the following
   and use your own system as the partner system.
   a  Remote login to the partner system.
   b  Import the namedg1 disk group on the partner system, start the volumes
      in the imported disk group, and view all the disk devices on the system.
   c  While still logged in to the partner system, mount the /name1 and
      /name2 file systems. Note that you will need to create the mount
      directories on the partner system before mounting the file systems.
      Observe the data in the file systems.
   d  Unmount the file systems on your partner system.
   e  On your partner system, deport namedg1 and assign your own machine
      name, for example, train5, as the New host.
   f  Exit from the partner system.
6  On your own system import the disk group and change its name back to
   namedg. View all the disk devices on the system.
7  Deport the disk group namedg by assigning the ownership to another host.
   View all the disk devices on the system. Why would you do this?
8  From the command line display detailed information about one of the disks in
   the disk group using the vxdisk list device_tag command. Note the
   hostid field in the output.
9  Import namedg. Were you successful?


10 Now import namedg and overwrite the disk group lock. What did you have to
   do to import it and why?
11 From the command line display detailed information about the same disk in
   the disk group as you did in step 8 using the vxdisk list device_tag
   command. Note the change in the hostid field in the output.
12 Remove all of the volumes in the namedg disk group.
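The deport/import operations in this exercise can be sketched as follows; the hostname and device tag are placeholders for your own lab values:

    # Deport, assigning ownership to the partner host
    vxdg -h train2 deport namedg1

    # On the partner system: import and start the volumes
    vxdg import namedg1
    vxvol -g namedg1 startall

    # Back on your own system: rename on import, and inspect the lock
    vxdg -n namedg import namedg1
    vxdisk list c1t1d0          # note the hostid field

    # Import over a stale lock left by another host (use with care)
    vxdg -C import namedg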


Preparation for Defragmenting a Veritas File System Lab

A lab exercise in the next lesson requires that you run a script that sets up files
with different size extents. Because the script can take a long time to run, you may
want to begin running the script now, so that the necessary environment is created
by the next lab time.
1  Identify the device tag for the second internal disk on your lab system. If you
   do not have a second internal disk or if you cannot use the second internal
   disk, use one of the external disks allocated to you.
   Second internal disk (or the external disk used in this lab):
2  Initialize the second internal disk (or the external disk used in this lab) using a
   non-CDS disk format.
3  Create a non-CDS disk group called testdg using the disk you initialized in
   step 2.
4  In the testdg disk group create a 1-GB concatenated volume called
   testvol, initializing the volume space with zeros using the init=zero
   option to vxassist.
5  Create a VxFS file system on testvol and mount it on /fs_test.
6  Ask your instructor for the location of the extents.sh script. Run the
   extents.sh script.
   Note: This script can take about 15 minutes to run.
7  Verify that the VRTSspt software is already installed on your system. If not,
   ask your instructor for the location of the software and install it.
   Note: Before Storage Foundation 5.0, the VRTSspt software was provided as
   a separate support utility that needed to be installed by the user. With 5.0, this
   software is installed as part of the product installation.
8  Ensure that the directory where the vxbench command is located is included
   in your PATH definition.

Lab 6
Lab 6: Administering File Systems

In this lab, you practice file system administration, including defragmentation
and administering the file change log.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 6: Administering File Systems
In this lab, you practice file system administration, including defragmentation and
administering the file change log.
The Lab Solutions for this lab are located on the following page:
"Lab 6 Solutions: Administering File Systems," page B-67.

Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition to this, you also need four external disks and
the second internal disk to be used during the labs. If you do not have a second
internal disk or if you cannot use the second internal disk, you need five external
disks to complete the labs.
At the beginning of this lab, you should have a disk group called namedg that has
four external disks and no volumes in it. The second internal disk should be
empty and unused.
Note: If you are working in a North American Mobile Academy lab environment,
you cannot use the second internal disk during the labs. If that is the case, select
one of the external disks to complete the lab steps.


Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                         Sample Value                Your Value
My Data Disks:                 Solaris: c1t#d0 - c1t#d5
                               HP-UX: c4t0d0 - c4t0d5
                               AIX: hdisk21 - hdisk26
                               Linux: sda - sdf
2nd Internal Disk:             Solaris: c0t2d0
                               HP-UX: c3t15d0
                               AIX: hdisk1
                               Linux: hdb
Location of Lab Scripts        /student/labs/sf/sf50
(if any):
Prefix to be used with         name
object names

Preparation for Defragmenting a Veritas File System Lab

Note: If you have already performed these steps at the end of the last lab, then
you can skip this section and proceed with the Defragmenting a Veritas File
System section.
1  Identify the device tag for the second internal disk on your lab system. If you
   do not have a second internal disk or if you cannot use the second internal
   disk, use one of the external disks allocated to you.
   Second internal disk (or the external disk used in this lab):
2  Initialize the second internal disk (or the external disk used in this lab) using a
   non-CDS disk format.
3  Create a non-CDS disk group called testdg using the disk you initialized in
   step 2.
4  In the testdg disk group create a 1-GB concatenated volume called
   testvol, initializing the volume space with zeros using the init=zero
   option to vxassist.
5  Create a VxFS file system on testvol and mount it on /fs_test.
6  Ask your instructor for the location of the extents.sh script. Run the
   extents.sh script.
   Note: This script can take about 15 minutes to run.
7  Verify that the VRTSspt software is already installed on your system. If not,
   ask your instructor for the location of the software and install it.
   Note: Before Storage Foundation 5.0, the VRTSspt software was provided as
   a separate support utility that needed to be installed by the user. With 5.0, this
   software is installed as part of the product installation.
8  Ensure that the directory where the vxbench command is located is included
   in your PATH definition.


Defragmenting a Veritas File System

The purpose of this section is to examine the structure of a fragmented and an
unfragmented file system and compare the file system's throughput in each case.
The general steps in this exercise are:
-  Make and mount a file system
-  Examine the structure of the new file system for extents allocated in the file
   system
-  Then examine a fragmented file system and report the degree of
   fragmentation
-  Use a support utility called vxbench to measure throughput to specific files
   within the fragmented file system
-  Defragment the file system, reporting the degree of fragmentation
-  Repeat executing the vxbench utility using identical parameters to measure
   throughput to the same files within a relatively unfragmented file system
-  Compare the total throughput before and after the defragmentation process

1  In the namedg disk group create a 1-GB concatenated volume called
   namevol1.
2  Create a VxFS file system on namevol1 and mount it on /name1.
3  Run a fragmentation report on /name1 to analyze directory and extent
   fragmentation. Is a newly created, empty file system considered fragmented?
   In the report, what percentages indicate a file system's fragmentation?
4  What is a fragmented file system?
5  If you were shown the following directory fragmentation report about a file
   system, what would you conclude?

          Directory Fragmentation Report
          Dirs      Total    Immed    Immeds   Dirs to   Blocks to
          Searched  Blocks   Dirs     to Add   Reduce    Reduce
   total  199185    85482    115118   5407     5473      5655

6  Unmount /name1 and remove namevol1 in the namedg disk group.
   Note: The following steps will use the /fs_test file system to analyze the
   impact of fragmentation on the file system performance. Verify that the
   extents.sh script has completed before you continue with the rest of this
   lab.
7  Run a fragmentation report on /fs_test to analyze directory and extent
   fragmentation. Is /fs_test fragmented? Why or why not? What should be
   done?
8  Use the ls -e command to display the extent attributes of the files in the
   /fs_test file system. Note that on the Solaris platform you need to use the
   ls command provided by the VxFS file system software to be able to use the
   -e option.
9  Measure the sequential read throughput to a particular file, for example, an
   8-MB file on an 8K extent (for example, /fs_test/test48), in a
   fragmented file system using the vxbench utility and record the results. Use
   an 8K sequential I/O size.
   Notes:
   -  You need to use the vxbench utility that is appropriate for the platform
      you are working on, for example vxbench_9 on Solaris 9. To identify the
      appropriate vxbench command, use the ls -l /opt/VRTSspt/FS/
      VxBench command. If this path is not in your PATH environment
      variable, use the full path of the command while running the
      corresponding vxbench utility.
   -  Remount the file system before running each I/O test.
10 Repeat the same test for an 8-MB file on an 8-MB extent (for example, using
   the /fs_test/test58 file). Note that the file system must be remounted
   between the tests. Can you explain why?
11 Defragment /fs_test and gather summary statistics after each pass through
   the file system. After the defragmentation completes, determine if /fs_test
   is fragmented. Why or why not?
   Note: The defragmentation can take about 5 minutes to complete.
12 Measure the throughput of the unfragmented file system using the vxbench
   utility on the same files as you did in steps 9 and 10. Is there any change in
   throughput?
   Notes:
   -  You need to use the vxbench utility that is appropriate for the platform
      you are working on, for example vxbench_9 on Solaris 9. To identify the
      appropriate vxbench command, use the ls -l /opt/VRTSspt/FS/
      VxBench command. If this path is not in your PATH environment
      variable, use the full path of the command while running the
      corresponding vxbench utility.
   -  The file system must be remounted before each test to clear the read
      buffers.
   -  If you have used external shared disks on a disk array used by other
      systems for this lab, the performance results may be impacted by the disk
      array cache and may not provide a valid comparison between a
      fragmented and defragmented file system.
13 What is the difference between an unfragmented and a fragmented file
   system?
14 Is any one environment more prone to needing defragmentation than another?
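The reporting and defragmentation steps above use fsadm; a minimal sketch (the -F vxfs option is Solaris/HP-UX style):

    # Report directory (-D) and extent (-E) fragmentation
    fsadm -F vxfs -D -E /fs_test

    # Defragment directories (-d) and extents (-e), with reports and
    # summary statistics (-s) for each pass
    fsadm -F vxfs -d -e -D -E -s /fs_test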

Reading the File Change Log (FCL)

1  In the namedg disk group create a new 10-MB volume called namevol1.
   Create a VxFS file system on namevol1 and mount it on /fcl_test.
2  Turn the FCL on for /fcl_test, and ensure that it is on.
3  Go to the directory that contains the FCL.
4  Display the superblock for /fcl_test.
5  How do you know that there have been no changes in the file system yet?
6  Add some files to /fcl_test.
7  Display the superblock for /fcl_test.
8  How do you know that changes have been made to the file system?
9  Print the contents of the FCL. Then remove one of the files you just added.
10 Which files are identified by the inode numbers that are listed in the Create
   type?
11 Unmount the /fcl_test file system and remove namevol1.
12 The next two lab sections are optional labs on analyzing and defragmenting
   fragmented file systems. If you are not planning to carry out the optional labs,
   unmount the /fs_test file system and destroy the testdg disk group;
   otherwise, skip this step.
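The FCL steps can be sketched with fcladm; the print form shown here (an offset followed by the mount point) is an assumption about this release's syntax, so check fcladm(1M) on your platform:

    # Turn the FCL on and verify its state
    fcladm on /fcl_test
    fcladm state /fcl_test

    # The FCL file lives under the mount point
    cd /fcl_test/lost+found

    # Dump FCL records starting at offset 0 (superblock first) -- assumed syntax
    fcladm print 0 /fcl_test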


Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time.
These exercises provide additional practice in defragmenting a file system and
monitoring fragmentation.

Optional Lab: Defragmenting a Veritas File System

This section uses the /fs_test file system to analyze the impact of
fragmentation on the performance of a variety of I/O types on files using small
and large extent sizes.
1  Recreate the fragmented /fs_test file system using the following steps:
   a  Unmount the /fs_test file system.
   b  Recreate a vxfs file system in the testvol volume in testdg.
   c  Mount the file system to /fs_test.
   d  Ask your instructor for the location of the extents.sh script. Run the
      extents.sh script.
      Note: This script can take about 15 minutes to run.
2  Run a series of performance tests for a variety of I/O types using the
   vxbench utility to compare the performance of the files with the 8K extent
   size (/fs_test/test48) and the 8000K extent size
   (/fs_test/test58) by performing the following steps.
   Complete the following table when doing the performance tests.


Test Type                        Time (seconds)              Throughput (KB/second)
                                 Before Defrag  After Defrag  Before Defrag  After Defrag
Sequential reads, 8K extent          2.709          .526         2953.22      15202.10
Sequential reads, 8000K extent        .547          .549        14634.57      14576.20
Random reads, 8K extent              8.268         6.267          967.54       1276.53
Random reads, 8000K extent           6.541         6.468         1223.02       1236.91

Note: Results can vary depending on the nature of the data and the model of
array used. No performance guarantees are implied by this lab.
3  Ensure that the directory where the vxbench utility is located is included in
   your PATH definition.
   Note: You must unmount and remount the file system /fs_test before each
   step to clear and initialize the buffer cache.
   Sequential I/O Test
   To test the 8K extent size:
   Random I/O Test
   To test the 8K extent size:
4  Defragment the /fs_test file system. The defragmentation process takes
   some time.
5  Repeat the vxbench performance tests and complete the table with these
   performance results.
6  Compare the results of the defragmented file system with the fragmented file
   system.
7  When finished comparing the results in the previous step, unmount the
   /fs_test file system and destroy the testdg disk group.

Optional Lab: Additional Defragmenting Practice

In this exercise, you monitor and defragment a file system by using the fsadm
command.
1  Create a new 2-GB striped volume called namevol1 in the namedg disk
   group. Create a VxFS file system on namevol1 and mount it on /fs_test.
2  Repeatedly copy a small existing file system to /fs_test using a new target
   directory name each time until the target file system is approximately 85
   percent full. For example, on the Solaris platform:
   for i in 1 2 3
   > do
   > cp -r /opt /fs_test/opt$i
   > done
   Note: Monitor the file system size using df -k on the Solaris platform and
   bdf on the HP-UX platform, and CTRL-C out of the for loop when the file
   system becomes approximately 85 percent full.
3  Delete all files in the /fs_test file system over 10 MB in size.
4  Check the level of fragmentation in the /fs_test file system.
5  Repeat steps 2 and 3 using values 4 5 for i in the loop. Fragmentation of
   both free space and directories will result.
6  Repeat step 2 using values 6 7 for i. Then delete all files that are smaller
   than 64K to release a reasonable amount of space.
7  Defragment the file system and display the results. Run fragmentation reports
   both before and after the defragmentation and display summary statistics after
   each pass. Compare the fsadm report from step 4 with the final report from
   the last pass in this step.
8  Unmount the /fs_test file system and remove the namevol1 volume used
   in this lab.


Lab 7
Lab 7: Resolving Hardware Problems

In this lab, you practice recovering from a variety of hardware failure scenarios,
resulting in disabled disk groups and failed disks.
First you recover a temporarily disabled disk group, and then you use a set of
interactive lab scripts to investigate and practice recovery techniques.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 7: Resolving Hardware Problems

In this lab, you practice recovering from a variety of hardware failure scenarios,
resulting in disabled disk groups and failed disks. First you recover a temporarily
disabled disk group and then you use a set of interactive lab scripts to investigate
and practice recovery techniques. Each interactive lab script:
-  Sets up the required volumes
-  Simulates and describes a failure scenario
-  Prompts you to fix the problem
Finally, a set of optional labs are provided to enable you to investigate disk
failures further and to understand the behavior of spare disks and hot relocation.
The Lab Solutions for this lab are located on the following page:
"Lab 7 Solutions: Resolving Hardware Problems."

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition to this, you also need four external disks to
be used during the labs.
At the beginning of this lab, you should have a disk group called namedg that has
four external disks and no volumes in it.


Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                         Sample Value                Your Value
My Data Disks:                 Solaris: c1t#d0 - c1t#d5
                               HP-UX: c4t0d0 - c4t0d5
                               AIX: hdisk21 - hdisk26
                               Linux: sda - sdf
Location of Lab Scripts:       /student/labs/sf/sf50
Prefix to be used with         name
object names


Recovering a Temporarily Disabled Disk Group


1  Remove all disks except for one (namedg01) from the namedg disk group.
2  Create a 1g volume called namevol1 in the namedg disk group.
3  Create a file system on namevol1 and mount it to /name1.
4  Copy the contents of the /etc/default directory to /name1 and display the contents of the file system.
5  Ask your instructor for the location of the faildg_temp script, and note the location here:
   Script location: _______________________
6  Start writing to a file in the /name1 file system in the background using the following command:
   dd if=/dev/zero of=/name1/testfile bs=1024 count=500000 &
7  In one terminal, change to the directory containing the script and, before the I/O completes, execute the faildg_temp namedg command.
   Notes:
   The faildg_temp script disables the single path to the disk in the disk group to simulate a hardware failure. This is just a simulation and not a real failure; therefore, the operating system will still be able to see the disk after the failure. The script waits until you are ready with your analysis of the failure to re-enable the path to the disk in the disk group.
   If the I/O you started in step 6 completes before you can simulate the failure, you can start it again to observe the I/O failure.
8  Wait for the I/O to fail, and in another terminal observe the error displayed in the system log.
9  Use the vxdisk -o alldgs list and vxdg list commands to determine the status of the disk group and the disk.
10 What happened to the file system?
11 When you are done with analyzing the impact of the failure, change to the terminal where the faildg_temp script is waiting and enter "e" to correct the temporary failure.


Note: In a real failure scenario, after the hardware recovery, you would need to first verify that the operating system can see the disks and then verify that Volume Manager has detected the change in status. If not, you can force VxVM to scan the disks by executing the vxdctl enable command. This will not be necessary for this lab.

12 Assuming that the failure was due to a temporary fiber disconnection and that the data is still intact, recover the disk group and start the volume. Verify the disk and disk group status using the vxdisk -o alldgs list and vxdg list commands.
13 Remount the file system and verify that the contents are still there. Note that you will need to perform a file system check before you mount the file system.
14 Unmount the file system and remove namevol1. At the end of this section you should be left with a namedg disk group with a single disk and three initialized disks that are free to be used in a new disk group.
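The background write in step 6 uses dd against the mounted volume. As a minimal sketch of the same pattern, safe to run anywhere, the commands below write to a throwaway file instead of the lab's /name1 file system and use a much smaller count (the lab uses count=500000, roughly 500 MB):

```shell
# Sketch of the lab's background dd write, pointed at a throwaway file
# (the real lab target is /name1/testfile on the VxVM volume).
out="${TMPDIR:-/tmp}/testfile"

dd if=/dev/zero of="$out" bs=1024 count=100 2>/dev/null &

# Wait for the background job, then confirm how much was written.
wait
wc -c < "$out"    # 100 x 1024 = 102400 bytes
```

The trailing & is what lets the lab script interrupt the write mid-stream: the shell returns immediately while dd keeps issuing I/O to the volume.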

Preparation for Disk Failure Labs


Overview
The following sections use an interactive script to simulate a variety of disk failure scenarios. Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, in addition to the VxVM recovery tools and concepts described in the lesson, to determine which steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup
Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:
1  If your system is set to use enclosure-based naming, then you must turn off enclosure-based naming before running the lab scripts.
2  Create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03.
   Note: If you do not have enough disks, you can destroy disk groups created in other labs (for example, namedg) in order to create the testdg disk group.
3  Before running the automated lab scripts, set the DG environment variable in your root profile to the name of the test disk group that you are using. Rerun your profile by logging out and logging back on, or manually running it.
4  Ask your instructor for the location of the lab scripts.
Note: This lab can only be performed on Solaris, HP-UX, and Linux.
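Persisting the DG variable in the root profile can be sketched as below. The profile path here is a stand-in temporary file so the sketch is safe to run on any machine; in the lab you would append to root's actual profile (a Bourne-style shell is assumed):

```shell
# Stand-in for root's profile file (assumption: Bourne-style shell).
profile="${TMPDIR:-/tmp}/demo_profile"

# Persist the variable so every new login shell exports it.
echo 'DG="testdg"; export DG' >> "$profile"

# "Rerun your profile" without logging out: source it in the current shell.
. "$profile"

echo "DG is set to: $DG"
```

Sourcing with `.` is what makes the variable take effect in the current shell; simply executing the profile would set DG only in a child process.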

Recovering from Temporary Disk Failure


In this lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the redundant and nonredundant volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
DG="testdg"
export DG
1  From the directory that contains the lab scripts, run the script run_disks, and select option 1, "Turned off drive (temporary failure)":
   This script sets up two volumes:
   test1 with a mirrored layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3  Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes.
4  After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.


Recovering from Permanent Disk Failure


In this lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue:
DG="testdg"
export DG
1  From the directory that contains the lab scripts, run the script run_disks, and select option 2, "Power failed drive (permanent failure)":
   This script sets up two volumes:
   test1 with a mirrored layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. The disk is detached by VxVM.
3  In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes.
4  After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
5  When you have completed this exercise, if the disk device that was originally used during disk failure simulation is in online invalid state, reinitialize the disk to prepare for later labs.


Recovering from Intermittent Disk Failure (1)


In this lab exercise, intermittent disk failures are simulated, but the system is still OK. Your goal is to move data from the failing drive and remove the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If it is not set, set it before you continue:
DG="testdg"
export DG
1  From the directory that contains the lab scripts, run the script run_disks, and select option 3, "Intermittent Failures (system still ok)":
   This script sets up two volumes:
   test1 with a mirrored layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. You are informed that the disk drive used by both volumes is experiencing intermittent failures that must be addressed.
3  In a second terminal window, move the data on the failing disk to another disk, and remove the failing disk.
4  After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.
5  When you have completed this exercise, add the disk you removed from the disk group back to the testdg disk group so that you can use it in later labs.


Optional Lab Exercises


The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional recovery scenarios, as well as practice in replacing physical drives and working with spare disks. A final activity explores how to use the Support website, which is an excellent troubleshooting resource.
Optional Lab: Recovering from Intermittent Disk Failure (2)

In this optional lab exercise, intermittent disk failures are simulated, and the system has slowed down significantly, so that it is not possible to evacuate data from the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue:
DG="testdg"
export DG
1  From the directory that contains the lab scripts, run the script run_disks, and select option 4, "Intermittent Failures (system too slow)":
   This script sets up two volumes:
   test1 with a mirrored layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. You are informed that:
   The disk drive used by both volumes is experiencing intermittent failures that need to be addressed immediately.
   The system has slowed down significantly, so it is not possible to evacuate the disk before removing it.
3  In a second terminal window, perform the necessary actions to resolve the problem.
4  After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.


Optional Lab: Recovering from Temporary Disk Failure - Layered Volume

In this optional lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue:
DG="testdg"
export DG
1  From the directory that contains the lab scripts, run the script run_disks, and select option 5, "Turned off drive with layered volume":
   This script sets up two volumes:
   test1 with a concat-mirror layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3  Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes.
4  After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.


Optional Lab: Recovering from Permanent Disk Failure - Layered Volume

In this optional lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue:
DG="testdg"
export DG
1  From the directory that contains the lab scripts, run the script run_disks, and select option 6, "Power failed drive with layered volume":
   This script sets up two volumes:
   test1 with a concat-mirror layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. The disk is detached by VxVM.
3  In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes.

The rest of this lab exercise includes optional lab instructions where you perform a variety of basic recovery operations.


Optional Lab: Removing a Disk from VxVM Control


1  Destroy the testdg disk group and add the three disks back to the namedg disk group. At this point you should have one disk group called namedg with four empty disks in it. There should be no volumes in the namedg disk group. If you had destroyed the namedg disk group in previous lab sections, re-create it.
2  In the namedg disk group, create a 100-MB mirrored volume named namevol1. Create a Veritas file system on namevol1 and mount it to the /name1 directory.
3  Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume.
4  Remove one of the disks that is being used by the volume for replacement.
5  Confirm that the disk was removed.
6  From the command line, check that the state of one of the plexes is DISABLED and REMOVED.
7  If you are not already logged in VEA, start VEA and connect to your local system. Check the status of the disk that has been removed.
8  Replace the disk back into the namedg disk group.
9  Check the status of the disks. What is the status of the replaced disk?
10 Display volume information. What is the state of the plexes of namevol1?
11 In VEA, what is the status of the replaced disk? What is the status of the volume?
12 From the command line, recover the volume. During and after recovery, check the status of the plex in another command window and in VEA.


Optional Lab: Replacing Physical Drives (Without Hot Relocation)


Note: If you have skipped the previous optional lab section called Removing a Disk from VxVM Control, you may need to destroy testdg and add the three disks back to the namedg disk group before you start this section. If you had destroyed the namedg disk group in previous lab sections, re-create it.
1  Ensure that the namedg disk group has a mirrored volume called namevol1 with a Veritas file system mounted on /name1. If not, create a 100-MB mirrored volume called namevol1 in the namedg disk group, add a VxFS file system to the volume, and mount the file system at the mount point /name1.
2  If the vxrelocd daemon is running, stop it using ps and kill, in order to stop hot relocation from taking place. Verify that the vxrelocd processes are killed before you continue.
   Notes:
   If you have executed the run_disks script in the previous lab sections, the vxrelocd daemon may already be killed.
   There are two vxrelocd processes on the Solaris platform. You must kill both of them at the same time.
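The ps-and-kill pattern used to stop vxrelocd can be sketched with an ordinary background process standing in for the daemon (vxrelocd exists only on a Storage Foundation host); the final check mirrors the lab's requirement to verify the processes are gone before continuing:

```shell
# A throwaway sleep process stands in for the vxrelocd daemon.
sleep 300 &
pid=$!

# Locate the process, as you would with: ps -ef | grep vxrelocd
ps -p "$pid" > /dev/null && echo "daemon running"

# Kill it, then verify it is really gone before continuing.
kill "$pid"
wait "$pid" 2>/dev/null
ps -p "$pid" > /dev/null || echo "daemon stopped"
```

On Solaris, where two vxrelocd processes exist, the same check would be repeated for each PID, killing both in one kill invocation.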

3  Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by namevol1; for example, on Linux use sdb, on Solaris and HP-UX use c1t8d0.
4  When the error occurs, view the status of the disks from the command line.
5  View the status of the volume from the command line.
6  In VEA, what is the status of the disks and volume?
   Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent:
   /opt/VRTSobc/pa133/bin/vxpalctrl -a StorageAgent -c restart
7  Rescan for all attached disks:
8  Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux use sdb:
   Note: This step is only necessary when you replace the failed disk with a brand new one. If it were a temporary failure, this step would not be necessary.
9  Bring the disk back under VxVM control:
10 Check the status of the disks and the volume.
11 From the command line, recover the volume.
12 Check the status of the disks and the volume to ensure that the disk and volume are fully recovered.
13 Unmount the /name1 file system and remove the namevol1 volume.

Optional Lab: Exploring Spare Disk Behavior


Note: If you have not already done so, destroy testdg and add the three disks back to the namedg disk group before you start this section.
1  You should have four disks (namedg01 through namedg04) in the disk group namedg. Set all disks to have the spare flag on.
2  Create a 100-MB mirrored volume called sparevol. Is the volume successfully created? Why or why not?
3  Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks.
4  Remove the sparevol volume.
5  Verify that the relocation daemon (vxrelocd) is running. If not, start it as follows:
6  Remove the spare flags from three of the four disks.
7  Create a 100-MB concatenated mirrored volume called spare2vol.
8  Save the output of vxprint -g namedg -thr to a file.

9  Display the properties of the spare2vol volume. In the table, record the device and disk media name of the disks used in this volume. You are going to simulate disk failure on one of the disks. Decide which disk you are going to fail.
10 Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by spare2vol; for example, on Linux use sdb, on Solaris and HP-UX use c1t8d0.
11 Run vxprint -g namedg -rth and compare the output to the vxprint output that you saved earlier. What has occurred?
   Note: You may need to wait a minute or two for the hot relocation to complete.
12 In VEA, view the disks. Notice that the disk is in the disconnected state.
   Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent:
   /opt/VRTSobc/pa133/bin/vxpalctrl -a StorageAgent -c restart
13 Run vxdisk -o alldgs list. What do you notice?
14 Rescan for all attached disks.
15 In VEA, view the status of the disks and the volume.
16 Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux use sdb:
17 Bring the disk back under VxVM control and into the disk group.
18 In VEA, undo hot relocation for the disk.
19 Wait until the volume is fully recovered before continuing. Check to ensure that the disk and the volume are fully recovered.
20 Remove the spare2vol volume.


Optional Lab: Using the Support Web Site


1  Access the latest information on VERITAS Storage Foundation.
   Note: If you are working in the Virtual Academy lab environment, you may not be able to access the Veritas Technical Support web site, because the DNS configuration was changed during software installation by the prepare_ns script. To restore the original DNS configuration, change to the directory containing the lab scripts, execute the restore_ns script, and try to access the web site again.
2  What is the VERITAS Support mission statement?
   Hint: It is in the Support Handbook (page 3).
3  How many on-site support visits are included in an Extended Support contract? How about with a Business Critical Support?
   Hint: In the Support Handbook, see the table on page 4 and the explanation on page 5.
4  Which AIX platform is supported for Storage Foundation 5.0?
5  Access a recent Hardware Compatibility List for Storage Foundation. Which Brocade switches are supported by VERITAS Storage Foundation and High Availability Solutions 5.0 on Solaris?
6  Where would you locate the Patch with Maintenance Pack 1 for VERITAS Storage Solutions and Cluster File Solutions 4.0 for Solaris?
7  Perform this step only if you are working in the Virtual Academy lab environment. If you have executed the restore_ns script to restore the name resolution configuration at the beginning of this lab section in step 1, change to the directory containing the lab scripts and execute the prepare_ns script before you continue.
   If necessary:
   cd /script_location
   ./prepare_ns



Appendix B
Lab Solutions



Lab 1
Lab 1: Introducing the Lab Environment
In this lab, you are introduced to the lab
environment, system, and disks that you will use
throughout this course.

For Lab Exercises, see Appendix A.

For Lab Solutions, see Appendix B.

Lab 1 Solutions: Introducing the Lab Environment


In this lab, you are introduced to the lab environment, the systems, and disks that you will use throughout this course. You will also record some prerequisite information that will prepare you for the installation of VERITAS Storage Foundation and the labs that follow throughout this course.
The Lab Exercises for this lab are located on the following page:


Lab Environment Introduction


The instructor will describe the classroom environment, review the configuration and layout of the systems, and assign disks for you to use. The content of this activity depends on the type of classroom, hardware, and the operating system(s) deployed.

Lab Prerequisites
Record the following information to be provided by your instructor:

Object                              Sample Value                    Your Value
root password                       veritas
Host name                           train1
Domain name                         classroom1.int
Fully qualified hostname (FQHN)     train1.classroom1.int
Host name of the system sharing
disks with my system
(my partner system)                 train2
My Root Disk:                       Solaris: c0t0d0
                                    HP-UX: c1t15d0
                                    AIX: hdisk0
                                    Linux: hda
My External Disk:                   Solaris: c0t2d0
                                    HP-UX: c3t15d0
                                    AIX: hdisk1
                                    Linux: hdb
My Data Disks:                      Solaris: c1t#d0 - c1t#d5
                                    HP-UX: c4t0d0 - c4t0d5
                                    AIX: hdisk21 - hdisk26
                                    Linux: sda - sdf
Location of Storage Foundation
5.0 Software:                       /student/software/sf/sf50
Location of Lab Scripts:            /student/labs/sf/sf50
Location of the fp program:         /student/labs/sf/sf50/bin
Location of VERITAS Storage
Foundation license keys:            /student/software/license/sf50_entr_lic.txt


Instructor Classroom Setup


Perform the following steps to enable zoning configurations for the Storage Foundation 5-day course (not required for High Availability Fundamentals). Setup scripts are all included in Classroom SAN configuration Version 2.
1  Use the course_setup script. Select Classroom.
   Select Function To Perform:
   1 - Select Zoning by Zone Name
   2 - Select Zoning and Hostgroup Configuration by Course Name
   3 - Select/Check Hostgroup Configuration
2  Select option 3 - Select/Check HDS Hostgroup Configuration.
   Select HostGroup Configuration to be Configured:
   1 - Standard Mode: 2 or 4 node sharing, No DMP
   2 - DMP Mode: 2 node sharing, switchable between 1 path and 2 path access
   Select option 2 - DMP Mode. Wait and do not respond to prompts.
   Select option 3 - Check active Hostgroup Configuration.
   Exit to first level menu.
3  Select option 1 - Select Zoning by Zone Name.
   Zoning Configuration Required:
   1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs)
   2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs)
   Select option 1 - Mode 1 (single path to 12 LUNs).
   Select option 4 - Solaris as the OS.
4  Exit out of the course_setup script.
5  Reboot each system using reboot.



Lab 2
Lab 2: Installation and Interfaces
In this lab, you install VERITAS Storage
Foundation 5.0 on your lab system. You also
explore the Storage Foundation user interfaces,
including the VERITAS Enterprise Administrator
interface, the vxdiskadm menu interface, and the
command-line interface.

For Lab Exercises, see Appendix A.

For Lab Solutions, see Appendix B.

Lab 2 Solutions: Installation and Interfaces

In this exercise, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the VxVM user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command line interface.
The Lab Exercises for this lab are located on the following page:

Prerequisite Setup
To perform this lab, you need a lab system with the appropriate operating system and patch sets pre-installed. At this point there should be no Storage Foundation software installed on the lab system. The lab steps assume that the system has access to the Storage Foundation 5.0 software and that you have a Storage Foundation 5.0 Enterprise demo license key that can be used during installation.


Classroom Lab Values


In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                              Sample Value                    Your Value
root password                       veritas
Host name                           train1
Domain name                         classroom1.int
Fully qualified hostname (FQHN)     train1.classroom1.int
My Boot Disk:                       Solaris: c0t0d0
                                    HP-UX: c1t15d0
                                    AIX: hdisk0
                                    Linux: hda
Location of Storage Foundation
5.0 Software:                       /student/software/sf/sf50
Location of VERITAS Storage
Foundation license keys:            /student/software/license/sf50_entr_lic.txt
Location of Lab Scripts:            /student/labs/sf/sf50


Preinstallation
Determine if there are any VRTS or SYMC packages currently installed on your system.

Solaris:
pkginfo | grep -i VRTS
pkginfo | grep -i SYMC

HP-UX:
swlist -l product | grep VRTS
swlist -l product | grep SYMC
Note: If you have chosen to install the VxVM bundle that comes with the 11iv2 operating system software, you will see version 3.5 of VERITAS Volume Manager software, including the VEA packages.

AIX:
lslpp -l 'VRTS*'
lslpp -l 'SYMC*'

Linux:
rpm -qa | grep VRTS
rpm -qa | grep SYMC
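All four platform commands follow the same shape: list installed packages, then filter for the VRTS and SYMC prefixes. A runnable sketch of that filter, with a here-listing standing in for the real package inventory (the package names below are illustrative, not a claim about what any given system reports):

```shell
# Stand-in inventory; in the lab this comes from pkginfo, swlist,
# lslpp, or rpm -qa depending on the platform.
printf '%s\n' VRTSvxvm VRTSvxfs SYMClma coreutils bash |
    grep -E 'VRTS|SYMC'
```

grep -E matches either prefix in one pass, which is why the platform tables above run two separate greps only for readability of the output.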

Before installing Storage Foundation, save the following important system files into backup files named with a ".preVM" extension. Also, save your boot disk information to a file for later use (do not store the file in /tmp). You may need the boot disk information when you bring the boot disk under VxVM control in a later lab.

Solaris:
cp /etc/system /etc/system.preVM
cp /etc/vfstab /etc/vfstab.preVM
prtvtoc /dev/rdsk/boot_disk_device_name > /etc/bootdisk.preVM

HP-UX:
cp /stand/system /stand/system.preVM
cp /etc/fstab /etc/fstab.preVM

AIX:
cp /etc/filesystems /etc/filesystems.preVM
cp /etc/vfs /etc/vfs.preVM

Linux:
cp /etc/grub.conf /etc/grub.conf.preVM
cp /etc/modules.conf /etc/modules.conf.preVM
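Each platform row above is the same backup idiom: copy the file alongside itself with a .preVM suffix. A minimal sketch against a throwaway file (the lab targets real system files such as /etc/vfstab or /etc/fstab, which require root to touch):

```shell
# Throwaway stand-in for a system file like /etc/vfstab.
src="${TMPDIR:-/tmp}/vfstab.demo"
echo "/dev/dsk/c0t0d0s0  /  ufs  1  no  -" > "$src"

# Same idea as: cp /etc/vfstab /etc/vfstab.preVM
cp "$src" "$src.preVM"

# Confirm the backup matches the original before changing anything.
cmp -s "$src" "$src.preVM" && echo "backup verified"
```

The cmp check is a cheap safeguard worth keeping in the real procedure: it confirms the backup is byte-identical before the installer begins rewriting the originals.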

Are any VERITAS license keys installed on your system? Check for installed licenses.
vxlicrep
Note: The vxlicrep utility may not be available on your system at this point.

To test if DNS is configured in your environment, check if nslookup resolves the hostname to a fully qualified hostname by typing nslookup hostname. If there is no DNS or if the host name cannot be resolved to a fully qualified hostname, carry out the following steps:
a  Ensure that the fully qualified hostname is listed in the /etc/hosts file. For example:
cat /etc/hosts
192.168.xxx.yyy   train#.domain   train#
where domain is the domain name used in the classroom, such as classroom1.int.
If the fully qualified hostname is not in the /etc/hosts file, add it as an alias to hostname.
b  Change to the directory containing the lab scripts and execute the prepare_ns script. This script ensures that your lab system only uses local files for name resolution.
cd /location_of_lab_scripts
./prepare_ns
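The /etc/hosts check in step a can be scripted with grep. The following is a sketch (has_fqdn is a hypothetical helper), run against a scratch hosts-format file so it is safe to try anywhere; against a live system you would pass /etc/hosts.

```shell
#!/bin/sh
# Sketch: check whether a fully qualified hostname appears in a
# hosts-format file. has_fqdn is a hypothetical helper name.
has_fqdn() {
    # $1 = hosts file, $2 = fully qualified hostname
    grep -qw "$2" "$1"
}

# Demonstration with classroom-style sample data:
hosts=$(mktemp)
echo "192.168.1.5 train5.classroom1.int train5" > "$hosts"
has_fqdn "$hosts" train5.classroom1.int && echo "FQDN present"
```

If the check fails, add the fully qualified name as an alias on the host's line, as the lab instructs.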


Installing VERITAS Storage Foundation
1  Navigate to the directory containing the Veritas Storage Foundation software. Ask your instructor for the location of the installer script. Using the installer script, run a precheck to determine if your system meets all preinstallation requirements. If any requirements (other than the license software not being installed) are not met, follow the instructions to take any required actions before you continue. Note that you can look into the log file created to see the details of the checks the script performs.
cd /software_location
./installer -precheck system
where system is the hostname of your lab system.
Select the option number for Veritas Storage Foundation when prompted.
2  Navigate to the directory containing the Veritas Storage Foundation software. Install and perform initial configuration of Storage Foundation (VxVM and VxFS) using the following steps:
a  Start the installer script.
cd /software_location
./installer
b  Select the Install/Upgrade a Product option.
- Select the Veritas Storage Foundation software to install.
- On the HP-UX platform, confirm that you wish to continue the installation of this version.
- Enter the name of your system when prompted.
- Obtain a license key from your instructor and record it here. Type the license key when prompted.
License Key: _______________________
- Enter n when you are asked if you want to enter another license key.
- Select to install All Veritas Storage Foundation packages when prompted. Press Return to scroll through the list of packages.

- Accept the default of y to configure SF.
HP-UX: On the HP-UX platform, the installer script starts the software installation without asking any configuration questions. When the software installation is complete, it prompts you to reboot your system. Continue with the configuration using ./installer -configure after the system is rebooted.
shutdown -ry now
After reboot:
cd /software_location
./installer -configure system
- Do not set up enclosure-based naming for Volume Manager.
- Do not set up a default disk group.
- If an error message is displayed that the fully-qualified host name could not be queried, press Return to continue.
- Obtain the domain name from your instructor and type the fully qualified host name of your system when prompted. For example:
train5.classroom1.int
- Do not enable Storage Foundation Management Server Management. The system will be a standalone host.
- Select y to start Storage Foundation processes.
- Wait while the installation proceeds and processes are started.
When the installation script completes, you will be asked to reboot your system. Perform the next lab step (lab step 3) to modify the root profile before rebooting your system.

This step is only for the North American Mobile Academy lab environment. If you are working in a different lab environment, skip this step.
If you are working in a North American Mobile Academy lab environment with iSCSI disk devices, change to the directory containing the lab scripts and execute the iscsi_setup lab script. This script disables DMP support for iSCSI disks so that they can be recognized correctly by Volume Manager.
Only if you are working in a North American Mobile Academy lab environment:
cd /location_of_lab_scripts
./iscsi_setup

3  Check /.profile to ensure that the following paths are present.
Note: Your lab systems may already be configured with these environment variable settings. However, in a real-life environment you would need to carry out this step yourself.

Solaris, AIX
PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSob/bin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvxfs/sbin
MANPATH=$MANPATH:/opt/VRTS/man
export PATH MANPATH

Linux
PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSob/bin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvxfs/sbin
MANPATH=$MANPATH:/opt/VRTS/man
MANSECT=$MANSECT:1m
export PATH MANPATH MANSECT

HP-UX
PATH=/usr/lib/vxvm/bin:/opt/VRTSob/bin:/opt/VRTS/bin:/usr/sbin:/usr/lbin/fs/vxfs4.1:$PATH
export PATH
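Because /.profile may be sourced more than once, you may want the PATH additions to be idempotent. This is a sketch of my own, not part of the course material (append_path is a hypothetical helper); it appends a directory only if it is not already on the PATH.

```shell
#!/bin/sh
# Sketch: append a directory to PATH only if it is not already present,
# so a profile sourced twice does not grow the PATH each time.
# append_path is a hypothetical helper name.
append_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                # already present: do nothing
        *) PATH="$PATH:$1" ;;
    esac
}

append_path /opt/VRTS/bin
append_path /opt/VRTS/bin           # second call is a no-op
export PATH
echo "$PATH"
```

The same pattern works for MANPATH on the platforms listed above.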

Reboot your system.
Solaris
shutdown -y -i6 -g0
HP-UX
No need to reboot your system because it has already been rebooted after the package installation.

Setting Up VERITAS Enterprise Administrator
1  Is the VEA server running? If not, start it.
Solaris, HP-UX, AIX
vxsvc -m     (to confirm that the server is running)
vxsvc        (if the server is not already running)
Linux
vxsvcctrl status     (to confirm that the server is running)
vxsvcctrl start      (if the server is not already running)
2  Start the VEA graphical user interface.
vea &
Note: On some systems, you may need to configure the system to use the appropriate display. For example, if the display is pc1:0, before you run VEA, type:
DISPLAY=pc1:0
export DISPLAY
It is also important that the display itself is configured to accept connections from your client. If you receive permission errors when you try to start VEA, in a terminal window on the display system, type:
xhost +system    or    xhost +
where system is the hostname of the client on which you are running the vea command.
3  In the Select Profile window, click the Manage Profiles button and configure VEA to always start with the Default profile.
Set the "Start VEA using profile" option to Default and click Close, then click OK to continue.
4  Click the "Connect to a Host or Domain" link and connect to your system as root. Your instructor provides you with the password.
Hostname: (For example, train13)
Username: root
Password: (Your instructor provides the password.)
5  On the left pane (object tree) view, drill down the system and observe the various categories of VxVM objects.
6  Select the Assistant perspective on the quick access bar and view tasks for systemname/StorageAgent.
7  Using the System perspective, find out what disks are available to the OS.
In the System perspective object tree, expand your host and the StorageAgent, then select the Disks node. Examine the Device column in the grid.
8  Execute the Disk Scan command and observe the messages on the console view. Click on a message to see the details.
In the VEA System perspective object tree, select your host. Select Actions->Rescan.
9  What commands were executed by the Disk Scan task?
Navigate to the Log perspective. Select the Task Log tab in the right pane and double-click the "Scan for new disks" task.
10  Exit the VEA graphical interface.
In the VEA main window, select File->Exit. Confirm when prompted.

11  Create a root-equivalent administrative account named admin1 for use of VEA.
Solaris, Linux
1. Create a new administrative account named admin1:
useradd admin1
passwd admin1
2. Type a password for admin1.
3. Modify the /etc/group file to add the vrtsadm group and specify the root and admin1 users, by using the vi editor:
vi /etc/group
4. In the file, move to the location where you want to insert the vrtsadm entry, change to insert mode by typing i, then add the line:
vrtsadm::99:root,admin1
5. When you are finished editing, press [Esc] to leave insert mode.
6. Then, save the file and quit:
:wq
HP-UX
1. Create a new administrative account named admin1 by using SAM or command line utilities:
useradd admin1
passwd admin1
2. Type a password for admin1.
3. Add the vrtsadm group and specify the root and admin1 users as members. Use SAM or modify the /etc/group file by using the vi editor:
vi /etc/group
4. In the file, move to the location where you want to insert the vrtsadm entry, change to insert mode by typing i, then add the line:
vrtsadm::99:root,admin1
5. When you are finished editing, press [Esc] to leave insert mode.
6. Then, save the file and quit:
:wq!
AIX
mkgroup -A vrtsadm
useradd -m -G vrtsadm admin1
passwd admin1
(Type the password.)
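The vi edit above appends one fixed line to /etc/group. A non-interactive equivalent is sketched below; it is not part of the course material, assumes (as the lab does) that GID 99 is unused, and is demonstrated against a scratch copy of the group file rather than the live /etc/group.

```shell
#!/bin/sh
# Sketch: add the vrtsadm entry to a group file only if it is not
# already there. add_vrtsadm is a hypothetical helper name; GID 99 is
# the lab's assumption.
add_vrtsadm() {
    # $1 = path to a group file
    grep -q '^vrtsadm:' "$1" || echo 'vrtsadm::99:root,admin1' >> "$1"
}

# Demonstration against a scratch file, not the live /etc/group:
grp=$(mktemp)
echo 'root::0:root' > "$grp"
add_vrtsadm "$grp"
add_vrtsadm "$grp"      # second call does not duplicate the entry
cat "$grp"
```

The grep guard makes the edit safe to repeat, which the manual vi procedure is not.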


12 Test the new account. After you have tested the new account, exit YEA.

Launch VEA:
vea

Select "Connect tu a Host or Domain", and specify the host name:


Hustname: (For example, train13)

8-16

VERITAS

Storage Foundation

5.0 for UNIX: Fundamentals

c
d

Select the "Connect using a different user account" option and click
Connect.
Enter the username and password for the new user:

User: adminl
Password: (Type the password that you created for adminl.)
After confirming the account, select File->Exit.

Exploring vxdiskadm
1  From the command line, invoke the text-based VxVM menu interface.
vxdiskadm
2  Display information about the menu or about specific commands.
Type ? at any of the prompts within the interface.
3  What disks are available to the OS?
Type list at the main menu, and then type all.
4  Exit the vxdiskadm interface.
Type q at the prompts until you exit vxdiskadm.

Optional Lab: Accessing CLI Commands
Note: This exercise introduces several commonly used VxVM commands. These commands and associated concepts are explained in detail throughout this course. If you have used Volume Manager before, you may already be familiar with these commands. If you are new to Volume Manager, this exercise aims to show you the amount of information you can get from the manual pages. Note that you do not need to read all of the manual pages for this exercise.
1  From the command line, invoke the VxVM manual pages and read about the vxassist command.
man vxassist
2  What vxassist command parameter creates a VxVM volume?
The make parameter is used in creating a volume.
3  From the command line, invoke the VxVM manual pages and read about the vxdisk command.
man vxdisk
4  What disks are available to VxVM?
vxdisk -o alldgs list
All the available disks are displayed in the list.
5  From the command line, invoke the VxVM manual pages and read about the vxdg command.
man vxdg
6  How do you list locally imported disk groups?
vxdg list
7  From the command line, invoke the VxVM manual pages and read about the vxprint command.
man vxprint

Optional Lab: More Installation Exploration
1  When does the VxVM license expire?
vxlicrep | more
2  What is the version and revision number of the installed version of VxVM?
Solaris
pkginfo -l VRTSvxvm
In the output, look at the Version field.
HP-UX
swlist | grep -i vxvm
The version is in the second column of the output.
AIX
lslpp -l VRTSvxvm
In the output, look under the column named Level.
Linux
rpm -qi VRTSvxvm
3  Which daemons are running after the system boots under VxVM control?
Solaris
ps -ef | grep -i vx
vxconfigd, vxnotify, vxrelocd, vxcached, vxesd, vxconfigbackupd, vxpal, vxsvc, vxsmf.bin
HP-UX
ps -ef | grep -i vx
vxconfigd, vxnotify, vxrelocd, vxcached, vxesd, vxfsd, vxiod, vxconfigbackupd, vxpal, vxsvc, vxsmf.bin
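A small refinement to the daemon check: bracketing one character of the pattern (grep -i '[v]x') keeps the grep process itself out of the ps listing, since the grep command line contains "[v]x" but not "vx". This is a common shell idiom, not something the course mandates; it is shown here against canned input instead of a live ps -ef.

```shell
#!/bin/sh
# Sketch: the [v] trick keeps the grep process itself out of the ps
# listing. Demonstrated against canned, illustrative ps output.
ps_output='root   101  vxconfigd -k -m boot
root   102  cron
root   103  vxnotify'

echo "$ps_output" | grep -i '[v]x'
```

On a live system the equivalent check is `ps -ef | grep -i '[v]x'`.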

Lab 3: Creating a Volume and File System
In this lab, you create new disk groups, simple volumes, and file systems, mount and unmount the file systems, and observe the volume and disk properties.
The first exercise uses the VEA interface. The second exercise uses the command-line interface.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 3 Solutions: Creating a Volume and File System
In this lab, you create new disk groups, simple volumes, and file systems, mount and unmount the file systems, and observe the volume and disk properties. The first exercise uses the VEA interface. The second exercise uses the command-line interface.
The Lab Exercises for this lab are located on the following page:
If you use object names other than the ones provided, substitute the names accordingly in the commands.
Caution: In this lab, do not include the boot disk in any of the tasks.
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four empty and unused external disks to be used during the labs.
Note: Although you should not have to perform disk labeling, here are some tips that may help if your disks are not properly formatted:
On Solaris, use the format command to place a label on any disks that are not properly labeled for use under Solaris. Ask the instructor for details.
On Linux, if you have problems initializing a disk, you may need to run this command: fdisk /dev/disk
Use options -o and -w to write a new DOS partition table. (The disk may have previously been used with Solaris.)

Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                                 Sample Value                  Your Value
root password                          veritas
Host name                              train1
My Data Disks                          Solaris: c1t#d0 - c1t#d5
                                       HP-UX: c4t0d0 - c4t0d5
                                       AIX: hdisk21 - hdisk26
                                       Linux: sda - sdf
Prefix to be used with object names    name

Creating a Volume and File System: VEA
1  Run and log on to the VEA interface as the root user.
vea &
2  View all the disk devices on the system. What is the status of the disks assigned to you for the labs?
a  Using the System perspective (StorageAgent view), drill down the object tree, and select the Disks node.
b  View the disks in the grid.
Normally the disks should be in Not Initialized state.
3  Select an uninitialized disk and initialize it using the VEA. Observe the change in the Status column. What is the status of the disk now?
a  Select the disk in the grid, and select Actions->Initialize Disk.
b  Verify the selected disk in the Initialize Disk view and click OK.
The status of the disk should change to Free.
4  Create a new disk group using the disk you initialized in the previous step. Name the new disk group namedg1. Observe the change in the disk status.
Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group names is unique.
a  Select the newly initialized disk in the grid, and select Actions->New Disk Group.
b  In the New Disk Group wizard, click Next to skip the Welcome page.
c  Type the name of the disk group. Ensure that Enable Cross-platform Data Sharing (CDS) remains checked. If necessary, make changes to the selected disks, and click Next.
d  Confirm the disk selection.
e  Do not select a disk group organization principle when prompted.
f  Click Finish.
The status of the disk should change to Imported, and the disk media name and the disk group name should be visible in the disk grid.
5  Using VEA, create a new volume of size 1g in namedg1. Name the new volume namevol1. Create a file system on it and make sure that the file system is mounted at boot time to the /name1 directory.
a  Select the Volumes node in the object tree and select Actions->New Volume.
b  In the New Volume Wizard, click Next on the welcome page.
c  Select the disk group name and click Next.

d  Let volume manager decide what disks to use for this volume, and click Next to continue.
e  Enter the volume name and size, and leave the other options at their default values. Click Next to continue.
f  Leave the "Create as a Snapshot Cache Volume" option unchecked and click Next.
g  On the Create File System page, select to create a VxFS file system. Enter the mount point called /name1 and verify that the Add to file system table and (for Solaris) Mount at boot options are checked. Click Next.
h  Verify the summary information, and click Finish.
6  Check if the file system is mounted and verify that there is an entry for this file system in the file system table.
Select the File Systems node in the object tree and observe the list of mounted file systems in the right pane view. The /name1 file system should be listed here. Note the "Mounted" and "In File System Table" columns.
You can also use the command line to verify the changes as follows:
Solaris
mount
cat /etc/vfstab
HP-UX, Linux
mount
cat /etc/fstab
The /name1 file system should show as mounted, and there should be a line in the file system table to ensure that it is mounted at boot time.
7  View the properties of the disk in the namedg1 disk group and note the Capacity and the Unallocated space fields.
Select Disks in the object tree, right-click the disk in the namedg1 disk group, and select Properties.
8  Try to create a second volume, namevol2, in namedg1 and specify a size slightly larger than the unallocated space on the existing disk in the disk group, for example 4g in the standard Symantec classroom systems. Do not create a file system on the volume. What happens?
a  Select the Volumes node in the object tree and select Actions->New Volume.
b  In the New Volume Wizard, click Next on the welcome page.
c  Select the disk group name and click Next.
d  Let volume manager decide what disks to use for this volume, click Next to continue.

e  Enter the volume name and size, and leave the other options at their default values, click Next to continue.
f  Leave the Create as a Snapshot Cache Volume option unchecked, and click Next.
g  On the Create File System page, leave the "No file system" option checked and click Next.
h  Verify the summary information, and click Finish.
You should receive an error indicating that Volume Manager cannot allocate the requested space for the volume, and the volume is not created.
9  Add a disk to the namedg1 disk group.
a  Select the disk to be added to the disk group.
b  Select Actions->Add Disk to Disk Group.
c  Click Next on the Welcome page, verify that the namedg1 disk group is selected and that the disk is listed under Selected disks, and click Next.
d  Confirm when prompted.
e  Verify the summary information and click Finish.
10  Create the same volume, namevol2, in the namedg1 disk group using the same size as in step 8. Do not create a file system.
a  Select the Volumes node in the object tree and select Actions->New Volume.
b  In the New Volume Wizard, click Next on the welcome page.
c  Select the disk group name and click Next.
d  Let volume manager decide what disks to use for this volume, click Next to continue.
e  Enter the volume name and size, and leave the other options at their default values, click Next to continue.
f  Leave the Create as a Snapshot Cache Volume option unchecked, and click Next.
g  On the Create File System page, leave the "No file system" option checked and click Next.
h  Verify the summary information, and click Finish.
This time the volume creation should complete successfully.
11  Observe the volumes by selecting the Volumes object in the object tree. Can you tell which volume has a mounted file system?
Select the Volumes node in the object tree.
In the right pane view you should notice that the file system and mount point columns have file system information for namevol1 and not for namevol2.

12  Create a VxFS file system on namevol2 and mount it to the /name2 directory. Ensure that the file system is not mounted at boot time. Check if the /name2 file system is currently mounted and verify that it has not been added to the file system table.
a  Select the namevol2 volume and select Actions->File System->New File System.
b  Verify that the file system type is vxfs, enter the mount point, uncheck the "Add to file system table" option, and click OK.
c  Select the File Systems node in the object tree and observe the list of mounted file systems in the right pane view. The /name2 file system should be listed here. Note the Mounted and In File System Table columns.
You can also use the command line to verify the changes as follows:
Solaris
mount
cat /etc/vfstab
HP-UX, Linux
mount
cat /etc/fstab
The /name2 file system should show as mounted, but there should be no change in the file system table.
13  Observe the commands that were executed by VEA during this section of the lab.
a  Select the Logs perspective in the quick access bar.
b  Click Task Log in the right pane view.
c  Observe the commands executed by VEA during this section of the lab by double-clicking the individual tasks and observing the Task Details view.

Creating a Volume and File System: CLI
1  View all the disk devices on the system. What is the status of the disks assigned to you for the labs?
vxdisk -o alldgs list
If you completed the first section of this lab, you should have two disks in namedg1 in online status. The rest of the disks assigned to you should be in online invalid status. If you have a disk in error status, contact your instructor.
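The status check above is easy to script when you have many disks. The sketch below is my own (count_status is a hypothetical helper, and the device lines are illustrative, not real lab output); it counts how many lines of vxdisk -o alldgs list output match a given status string.

```shell
#!/bin/sh
# Sketch: count disks in a given status from vxdisk -o alldgs list
# output. count_status is a hypothetical helper; the sample lines below
# are illustrative.
count_status() {
    awk -v want="$1" '$0 ~ want { n++ } END { print n + 0 }'
}

vxdisk_output='c1t0d0s2 auto:none     -          -        online invalid
c1t1d0s2 auto:cdsdisk namedg101  namedg1  online
c1t2d0s2 auto:none     -          -        online invalid'

echo "$vxdisk_output" | count_status "online invalid"
```

On a live system you would pipe the real command instead: `vxdisk -o alldgs list | count_status "online invalid"`.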

2  Select an uninitialized disk and initialize it using the CLI. Observe the change in the Status column. What is the status of the disk now?
vxdisksetup -i device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
vxdisk -o alldgs list
The status of the disk should change to online, but the DISK and GROUP columns should still be empty.

3  Create a new disk group using the disk you initialized in the previous step. Name the new disk group namedg2. Observe the change in the disk status.
Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group names is unique.
vxdg init namedg2 namedg201=device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
vxdisk -o alldgs list
The status of the disk is still online, but the DISK and GROUP columns now show the new disk media name and the disk group name respectively.
4  Using the vxassist command, create a new volume of size 1g in namedg2. Name the new volume namevol3.
vxassist -g namedg2 make namevol3 1g

5  Create a Veritas file system on the namevol3 volume, and mount the file system to the /name3 directory.
mkfs -F vxfs /dev/vx/rdsk/namedg2/namevol3
Note: On Linux, use mkfs -t.
mkdir /name3
mount -F vxfs /dev/vx/dsk/namedg2/namevol3 /name3
Note: On Linux, use mount -t.

6  Make sure that the file system is mounted at boot time.
Solaris
vi /etc/vfstab
...
/dev/vx/dsk/namedg2/namevol3 \
/dev/vx/rdsk/namedg2/namevol3 \
/name3 vxfs 0 yes -
HP-UX
vi /etc/fstab
...
/dev/vx/dsk/namedg2/namevol3 /name3 vxfs rw,largefiles,delaylog 0 2
Unmount the /name3 file system, verify the unmount, and remount using the mount -a command to mount all file systems in the file system table.
umount /name3
mount
mount -a
mount
7  Identify the amount of free space in the namedg2 disk group. Try to create a volume in this disk group named namevol4 with a size slightly larger than the available free space, for example 5g on standard Symantec classroom systems. What happens?
Note: The disk sizes in Symantec Virtual Academy lab environments are slightly less than 2g. Ensure that you use the correct value suitable to your environment instead of the 5g example used here.
vxdg -g namedg2 free
The free space is displayed in sectors in the LENGTH column.
vxassist -g namedg2 make namevol4 5g
You should receive an error indicating that Volume Manager cannot allocate the requested space for the volume, and the volume is not created.
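Since vxdg free reports lengths in sectors, a quick conversion helps when choosing a volume size. The sketch below assumes 512-byte sectors (the usual VxVM convention, but verify on your platform); sectors_to_mb is a hypothetical helper, not a course command.

```shell
#!/bin/sh
# Sketch: convert a sector count (as shown in the vxdg free LENGTH
# column) to megabytes, assuming 512-byte sectors. At 512 bytes per
# sector, 2048 sectors make one megabyte.
sectors_to_mb() {
    echo "$1" | awk '{ printf "%d\n", $1 / 2048 }'
}

sectors_to_mb 4194304    # e.g. a LENGTH value of 4194304 sectors
```

A length of 4194304 sectors works out to 2048 MB, i.e. a 2 GB region.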
8  Initialize a new disk and add it to the namedg2 disk group. Observe the change in free space.
vxdisksetup -i device_tag
vxdg -g namedg2 adddisk namedg202=device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
vxdg -g namedg2 free
9  Create the same volume, namevol4, in the namedg2 disk group using the same size as in step 7.
vxassist -g namedg2 make namevol4 5g
Note: The 5g volume size is used as an example here. You may need to use a value more suitable to your lab environment if you are not working in a standard Symantec classroom.
This time the volume creation should complete successfully.
10  Display volume information for the namedg2 disk group using the vxprint -g namedg2 -htr command. Can you identify which disks are used for which volumes?
vxprint -g namedg2 -htr
11  List the disk groups on your system using the vxdg list command.
vxdg list
If you have followed the labs so far, you should have two disk groups listed: namedg1 and namedg2.

Removing Volumes, Disks and Disk Groups: CLI
1  Unmount the /name3 file system and remove it from the file system table.
Solaris
umount /name3
vi /etc/vfstab
Navigate to the line with the entry corresponding to the /name3 file system and type dd to delete the line.
Type :wq to save and close the file.
HP-UX, Linux
umount /name3
vi /etc/fstab
Navigate to the line with the entry corresponding to the /name3 file system and type dd to delete the line.
Type :wq to save and close the file.
2  Remove the namevol4 volume in the namedg2 disk group. Observe the disk group configuration information using the vxprint -g namedg2 -htr command.
vxassist -g namedg2 remove volume namevol4
vxprint -g namedg2 -htr
There should be only the namevol3 volume, and the second disk, namedg202, should be unused.
3  Remove the second disk (namedg202) from the namedg2 disk group. Observe the change in its status.
vxdg -g namedg2 rmdisk namedg202
vxdisk -o alldgs list
Note that the disk is still in online state; it is initialized.
4  Destroy the namedg2 disk group.
vxdg destroy namedg2
5  Observe the status of the disk devices on the system.
vxdisk -o alldgs list
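The dd-in-vi deletion of the /name3 entry can also be done non-interactively with grep -v. This is a sketch of my own (remove_fs_entry is a hypothetical helper), demonstrated against a scratch copy of the file system table rather than the live /etc file.

```shell
#!/bin/sh
# Sketch: drop the line whose mount-point field matches, writing through
# a temporary file. remove_fs_entry is a hypothetical helper name.
remove_fs_entry() {
    # $1 = file system table, $2 = mount point
    grep -v "[[:space:]]$2[[:space:]]" "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Demonstration against a scratch file with illustrative entries:
tab=$(mktemp)
printf '/dev/vx/dsk/namedg2/namevol3\t/name3\tvxfs\t0 2\n' > "$tab"
printf '/dev/sda1\t/data\text3\t1 1\n' >> "$tab"
remove_fs_entry "$tab" /name3
cat "$tab"
```

Bracketing the mount point with whitespace classes keeps the match from also hitting device paths that merely contain the same string.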

Removing Volumes, Disks and Disk Groups: VEA
1  Unmount both the /name1 and /name2 file systems using VEA. Accept to remove the file systems from the file system table if prompted. Check if the file systems are unmounted and verify that any corresponding entries have been removed from the file system table.
a  Select the File Systems node in the object tree and select the /name1 file system.
b  Select Actions->Unmount File System.
c  Confirm the unmount and select Yes when prompted to remove it from the file system table.
d  Select the /name2 file system. Select Actions->Unmount File System.
e  Confirm the unmount.
Both file systems should disappear from the file system list in VEA. You can use the command line to verify the changes as follows:
Solaris
mount
cat /etc/vfstab
HP-UX, Linux
mount
cat /etc/fstab
The /name1 and /name2 file systems should not be among the mounted file systems, and the file system table should not contain any entries corresponding to these file systems.
2  Remove the namevol2 volume in the namedg1 disk group.
a  Select the Volumes node in the object tree and select the namevol2 volume.
b  Select Actions->Delete Volume. Confirm when prompted.
3  Select the Disk Groups node in the object tree and observe the disks in the namedg1 disk group. Can you identify which disk is empty?
The %Used column should show 0% for the unused disk, which is the second disk in the disk group (namedg102).
4  Remove the disk you identified as empty from the namedg1 disk group.
Select the empty disk and select Actions->Remove Disk From Disk Group.
5  Observe all the disks on the system. What is the status of the disk you removed from the disk group?
Select the Disks node in the object tree and observe the disks in the right pane view.
The disk removed in step 4 should be in Free state.
6  Destroy the namedg1 disk group.
Select the Disk Groups node in the object tree and the namedg1 disk group in the right pane view.
Select Actions->Destroy Disk Group. Confirm when prompted.
7  Observe all the disks on the system. What is the status of the disks?
Select the Disks node in the object tree and observe the disks in the right pane view.
If you have followed all the lab steps, you should have 4 disks in Free state; they are already initialized but not in a disk group.

Lab 4: Selecting Volume Layouts
In this lab, you create simple concatenated volumes, striped volumes, and mirrored volumes.
You also practice creating a layered volume and using ordered allocation while creating volumes.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 4 Solutions: Selecting Volume Layouts
In this lab, you create simple concatenated volumes, striped volumes, and mirrored volumes. You also practice creating a layered volume and using ordered allocation while creating volumes.
The Lab Exercises for this lab are located on the following page:
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four empty and unused external disks to be used during the labs.

Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                                 Sample Value                  Your Value
root password                          veritas
Host name                              train1
My Data Disks                          Solaris: c1t#d0 - c1t#d5
                                       HP-UX: c4t0d0 - c4t0d5
                                       AIX: hdisk21 - hdisk26
                                       Linux: sda - sdf
Prefix to be used with object names    name

Creating Volumes with Different Layouts: CLI

1  Add four initialized disks to a disk group called namedg. Verify your action
   using vxdisk -o alldgs list.
   Note: If you are sharing a disk array, make sure that the prefix you are using
   for the disk group name is unique.
   a  If you have completed the Creating a Volume and File System lab (lab
      3), you should already have four initialized disks. If not, initialize four
      disks for use in Volume Manager:
      vxdisksetup -i device_tag
      where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for
      AIX, and sd# for Linux platforms.
      (Do the above command for any disks that have not been initialized for
      Volume Manager use and that will be used in this lab.)
   b  Create a new disk group and add disks:
      vxdg init namedg namedg01=device1_tag \
      namedg02=device2_tag namedg03=device3_tag \
      namedg04=device4_tag
      Alternatively, you can also create the disk group using a single disk device
      and then add each additional disk as follows:
      vxdg -g namedg adddisk namedg##=device_tag
2  Create a 50-MB concatenated volume in the namedg disk group called
   namevol1 with one drive.
   vxassist -g namedg make namevol1 50m
3  Display the volume layout. What names have been assigned to the plex and
   subdisks?
   To view the assigned names, view the volume using:
   vxprint -g namedg -thr | more
4  Remove the volume.
   vxassist -g namedg remove volume namevol1

5  Create a 50-MB striped volume on two disks in namedg and specify which
   two disks to use in creating the volume. Name the volume namevol2.
   vxassist -g namedg make namevol2 50m layout=stripe \
   namedg01 namedg02
   What names have been assigned to the plex and subdisks?
   To view the assigned names, view the volume using:
   vxprint -g namedg -thr | more

6  Create a 20-MB, two-column striped volume with a mirror in namedg. Set the
   stripe unit size to 256K. Name the volume namevol3.
   vxassist -g namedg make namevol3 20m \
   layout=mirror-stripe ncol=2 stripeunit=256k
   What do you notice about the plexes?
   View the volume using vxprint -g namedg -thr | more.
   Notice that you now have a second plex.
7  Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
   size to 128K. Select at least one disk that you should not use. Name the volume
   namevol4.
   vxassist -g namedg make namevol4 20m \
   layout=mirror-stripe ncol=2 stripeunit=128k \
   !namedg03
   Was the volume created?
   This operation should fail because there are not enough disks available in
   the disk group. A two-column striped mirror requires at least four disks.
8  Create a 20-MB, 3-column striped volume with a mirror. Specify three disks to
   be used during volume creation. Name the volume namevol4.
   vxassist -g namedg -b make namevol4 20m \
   layout=mirror-stripe ncol=3 namedg01 namedg02 \
   namedg03
   Was the volume created?
   Again, this operation should fail because there are not enough disks
   available in the disk group. At least six disks are required for this type of
   volume configuration.
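The two failures above follow from simple arithmetic: a mirrored stripe needs one distinct disk per column in each plex. A quick sketch of that check (illustrative only; the counts match the lab's failures):

```shell
# Disks needed for a mirror-stripe volume: columns x mirrors,
# because no disk may appear twice within the layout.
ncol=3
nmirror=2
echo "disks required: $((ncol * nmirror))"
# → disks required: 6
```

With only four disks in namedg, the two-column mirrored stripe (4 required) just fits, while the three-column one (6 required) cannot be built.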
9  Create the same volume specified in the previous step, but without the mirror.
   vxassist -g namedg -b make namevol4 20m layout=stripe \
   ncol=3 namedg01 namedg02 namedg03
   What names have been assigned to the plex and subdisks?
   To view the assigned names, view the volume using:
   vxprint -g namedg -thr | more
10 Remove the volumes created in this exercise.
   vxassist -g namedg remove volume namevol2
   vxassist -g namedg remove volume namevol3
   vxassist -g namedg remove volume namevol4
11 Remove the disk group that was used in this exercise.
   vxdg destroy namedg


Creating Volumes with Different Layouts: VEA

1  If you had exited out of VEA, start it and connect back to your system.
   vea &
2  Add four initialized disks to a disk group called namedg. Verify your action in
   the main window.
   a  In the System perspective, drill down to the Disks node in the object
      tree.
   b  Select a disk, and select Actions->New Disk Group.
   c  In the New Disk Group wizard, skip the welcome page, specify the disk
      group name, select the disks you want to use from the Available Disks
      list, and click Add.
   d  Click Next, confirm your selection, do not select any Organization
      Principle, and click Finish.
3  Create a 50-MB concatenated volume in the namedg disk group called
   namevol1 with one drive.
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the name of the volume, and specify a size of 50 MB. Verify that
      the Concatenated layout is selected in the Layout region.
   d  Complete the wizard by accepting all remaining defaults to create the
      volume.
4  Display the volume layout. Notice the naming convention of the plex and
   subdisk.
   a  Select the volume in the object tree, and select Actions->Volume
      View.
   b  In the Volume View window, click the Expand button. Compare the
      information in the Volume View window to the information under the
      Mirrors, Logs, and Subdisks tabs in the right pane of the main
      window.
5  Remove the volume.
   a  Select the volume, and select Actions->Delete Volume.
   b  In the Delete Volume dialog box, click Yes.

6  Create a 50-MB striped volume on two disks in namedg, and specify which
   two disks to use in creating the volume. Name the volume namevol2.
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, select "Manually select disks to use for this
      volume." Move two disks into the Included box, and then click Next.
   c  Type the name of the volume, and specify a size of 50 MB.
   d  Select the Striped option in the Layout region. Verify that the number
      of columns is 2.
   e  Complete the wizard by accepting all remaining defaults to create the
      volume.
   View the volume.
   Select the volume, and select Actions->Volume View.
   Close the Volume View window when you are satisfied.
7  Create a 20-MB, two-column striped volume with a mirror in namedg. Set the
   stripe unit size to 256K. Name the volume namevol3.
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the name of the volume, and specify a size of 20 MB.
   d  Select the Striped option in the Layout region. Verify that the number
      of columns is 2. Set the Stripe unit size to 256K (512 sectors on Solaris,
      AIX, and Linux; 256 sectors on HP-UX).
   e  Mark the Mirrored check box in the Mirror Info region.
   f  Complete the wizard by accepting all remaining defaults to create the
      volume.
   View the volume. Notice that you now have a second plex.
   Select the volume, and select Actions->Volume View.
   Close the Volume View window when you are satisfied.
8  Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
   size to 128K. Select at least one disk you should not use. Name the volume
   namevol4.
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, select "Manually select disks to use for this
      volume." Move one disk into the Excluded box, and then click Next.
   c  Type the name of the volume, and specify a size of 20 MB.
   d  Select the Striped option in the Layout region. Verify that the number
      of columns is 2. Set the Stripe unit size to 256 (sectors), or 128K.
   e  Mark the Mirrored check box in the Mirror Info region.
   f  Complete the wizard by accepting all remaining defaults to create the
      volume.
   Was the volume created?
   This operation should fail, because there are not enough disks available in
   the disk group. A two-column striped mirror requires at least four disks.


9  Create a 20-MB, 3-column striped volume with a mirror. Specify three disks to
   be used during volume creation. Name the volume namevol4.
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the name of the volume, and specify a size of 20 MB.
   d  Select the Striped option in the Layout region. Change the number of
      columns to 3.
   e  Mark the Mirrored check box in the Mirror Info region. Click Next.
      You receive an error and are not able to complete the wizard.
   Was the volume created?
   Again, this operation should fail, because there are not enough disks
   available in the disk group. At least six disks are required for this type of
   volume configuration.
10 Create the same volume specified in step 9, but without the mirror.
   Note: If you did not cancel out of the previous step, then just uncheck the
   Mirrored option and continue the wizard.
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the name of the volume, and specify a size of 20 MB.
   d  Select the Striped option in the Layout region. Change the number of
      columns to 3.
   e  Complete the wizard by accepting all remaining defaults to create the
      volume.
   Was the volume created?
   Yes, the volume is created this time.
11 Delete all volumes in the namedg disk group.
   a  Select the namedg disk group, then select the Volumes tab.
   b  Highlight all volumes in the window.
   c  Select Actions->Delete Volume.
   d  Click Yes To All.
12 View the commands executed by VEA during this section of the lab.
   a  Select the Logs perspective in the quick access bar.
   b  Click Task Log in the right pane view.
   c  Double-click the individual tasks and observe the Task Details view.


Creating Layered Volumes

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.
1  First, ensure that any volumes created in the previous labs are removed from
   the namedg disk group.
   VEA
   a  Select the namedg disk group and click the Volumes tab in the right
      pane view.
   b  To remove a volume, highlight the volume in the window, and select
      Actions->Delete Volume.
   CLI
   vxprint -g namedg -htr | more
   For each volume in the namedg disk group:
   vxassist -g namedg remove volume volume_name
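When the disk group holds several volumes, the per-volume removal can be scripted. The sketch below assumes the default vxprint output format, where volume records begin with "v" in the first column; because vxprint itself only runs on a host with VxVM installed, the parsing is demonstrated against a captured sample:

```shell
# Pull volume names out of vxprint-style output so each can be removed
# in a loop. "sample" stands in for: vxprint -g namedg -q -v
sample='v  namevol1  -  ENABLED  ACTIVE  204800  SELECT  -  fsgen
v  namevol2  -  ENABLED  ACTIVE  204800  SELECT  -  fsgen'
for vol in $(printf '%s\n' "$sample" | awk '$1 == "v" {print $2}'); do
    echo "would run: vxassist -g namedg remove volume $vol"
done
```

On a live system, replace the sample with the real vxprint pipeline and drop the echo.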
2  Create a 100-MB Striped Mirrored volume with no logging. Name the volume
   namevol1.
   VEA
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the volume name, specify a volume size of 100 MB, and select a
      Striped Mirrored layout.
   d  Ensure that the Columns and the Total mirrors fields are both set to
      the default value of 2.
   e  Complete the wizard by accepting all remaining defaults to create the
      volume.
   CLI
   vxassist -g namedg make namevol1 100m \
   layout=stripe-mirror ncol=2 nmirror=2
3  If you are using VEA, view the commands executed by VEA to create the
   namevol1 volume during this section of the lab.
   a  Select the Logs perspective in the quick access bar.
   b  Click Task Log in the right pane view.
   c  Double-click the specific task and observe the Task Details view.

4  Create a Concatenated Mirrored volume with no logging called namevol2.
   The size of the volume should be greater than the size of the largest disk in the
   disk group; for example, if your largest disk is 4 GB, then create a 6-GB
   volume.
   Note: If you are working in the Virtual Academy (VA) lab environment, your
   largest disk will have a size of 2 GB. In this environment, you can use a 3-GB
   volume size.
   VEA
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the volume name, an appropriate volume size, and select a
      Concatenated Mirrored layout.
   d  Ensure that the Total mirrors field is set to the default value of 2.
   e  Complete the wizard by accepting all remaining defaults to create the
      volume.
   CLI
   vxassist -g namedg -b make namevol2 6g \
   layout=concat-mirror nmirror=2
5  If you are using VEA, view the commands executed by VEA to create the
   namevol2 volume during this section of the lab.
   a  Select the Logs perspective in the quick access bar.
   b  Click Task Log in the right pane view.
   c  Double-click the specific task and observe the Task Details view.
6  View the volumes and compare the layouts.
   VEA
   a  Highlight the namedg disk group and select Actions->Volume View.
   b  Click the Expand button in the Volumes window.
   You can also highlight each volume in the object tree and view
   information in the tabs in the right pane.
   CLI
   vxprint -g namedg -htr | more

7  Remove all of the volumes in the namedg disk group.
   VEA
   a  Select the namedg disk group and click the Volumes tab in the right
      pane view.
   b  Highlight all volumes in the window.
   c  Select Actions->Delete Volume.
   d  Click Yes To All.
   CLI
   vxassist -g namedg remove volume namevol1
   vxassist -g namedg remove volume namevol2

Using Ordered Allocation While Creating Volumes

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.
1  Create a 20-MB, two-column striped volume with a mirror in the namedg disk
   group. Name the volume namevol1.
   VEA
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the name of the volume, and specify a size of 20 MB.
   d  Select the Striped option in the Layout region. Verify that the number
      of columns is 2.
   e  Mark the Mirrored check box in the Mirror Info region.
   f  Complete the wizard by accepting all remaining defaults to create the
      volume.
   CLI
   vxassist -g namedg make namevol1 20m \
   layout=mirror-stripe ncol=2

2  Display the volume layout. How are the disks allocated in the volume? Which
   disk devices are used?
   VEA
   a  Select the Volumes node in the object tree and namevol1 in the right
      pane view.
   b  Select Actions->Layout View. Note the plex number and the column
      number for each subdisk on each disk.
   CLI
   vxprint -g namedg -htr
   Notice which two disks are allocated to the first plex and which two disks
   are allocated to the second plex and record your observation.
3  Remove the volume you just made, and re-create it by specifying the four disks
   in an order different from the original layout. Use the command line to create
   the volume in this step.
   CLI
   vxassist -g namedg remove volume namevol1
   vxassist -g namedg -o ordered make namevol1 20m \
   layout=mirror-stripe ncol=2 namedg02 namedg04 \
   namedg01 namedg03
4  Display the volume layout. How are the disks allocated this time?
   VEA
   a  Select the Volumes node in the object tree and namevol1 in the right
      pane view.
   b  Select Actions->Layout View. Note the plex number and the column
      number for each subdisk on each disk.
   CLI
   vxprint -g namedg -htr
   The plexes are now allocated in the order specified on the command line.
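The effect of -o ordered can be pictured as dealing the listed disks out column by column, plex by plex. A toy illustration with placeholder disk names (not VxVM output):

```shell
# With ncol=2 and two plexes, four listed disks are consumed in order:
# the first plex takes columns 1 and 2, then the second plex takes its columns.
set -- diskA diskB diskC diskD
echo "plex 1: col 1 -> $1, col 2 -> $2"
echo "plex 2: col 1 -> $3, col 2 -> $4"
# → plex 1: col 1 -> diskA, col 2 -> diskB
#   plex 2: col 1 -> diskC, col 2 -> diskD
```

Without -o ordered, vxassist is free to place subdisks on any suitable disks, which is why step 2 may show a different arrangement than step 4.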


5  Remove all of the volumes in the namedg disk group.
   VEA
   a  Select the namedg disk group and click the Volumes tab in the right
      pane view.
   b  Highlight all volumes in the window.
   c  Select Actions->Delete Volume.
   d  Click Yes To All.
   CLI
   vxassist -g namedg remove volume namevol1

Optional Lab: Creating Volumes with User Defaults: CLI

This optional guided practice illustrates how to use the files:
/etc/default/vxassist
/etc/default/alt_vxassist
to create volumes with defaults specified by the user. Note that some of the default
values may not apply to VEA because VEA uses explicit values for number of
columns, stripe unit size, and number of mirrors while creating striped and
mirrored volumes.
1  Create two files in /etc/default:
   cd /etc/default
   a  Using the vi editor, create a file called vxassist that includes the
      following:
      # when mirroring create three mirrors
      nmirror=3
   b  Using the vi editor, create a file called alt_vxassist that includes the
      following:
      # use 256K as the default stripe unit size for
      # regular volumes
      stripeunit=256k
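If you prefer not to use vi, the same two default files from step 1 can be written with here-documents. The sketch below writes them to a scratch directory so it is safe to run anywhere; on a live lab system the target directory is /etc/default:

```shell
# Write the two vxassist defaults files non-interactively.
# A scratch directory stands in for /etc/default here.
dir=$(mktemp -d)
cat > "$dir/vxassist" <<'EOF'
# when mirroring create three mirrors
nmirror=3
EOF
cat > "$dir/alt_vxassist" <<'EOF'
# use 256K as the default stripe unit size for
# regular volumes
stripeunit=256k
EOF
grep -h '=' "$dir"/vxassist "$dir"/alt_vxassist
# → nmirror=3
#   stripeunit=256k
```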


2  Use these files when creating the following volumes:
   a  Create a 100-MB volume called namevol1 using layout=mirror:
      vxassist -g namedg make namevol1 100m \
      layout=mirror
   b  Create a 100-MB, two-column stripe volume called namevol2 using -d
      alt_vxassist so that Volume Manager uses the default file:
      vxassist -g namedg -d alt_vxassist make namevol2 \
      100m layout=stripe
3  View the layout of these volumes using VEA or by using vxprint -g
   namedg -htr. What do you notice?
   The first volume should show three plexes rather than the standard two.
   The second volume should show a stripe size of 256K instead of the
   standard 64K.
4  Remove any vxassist default files that you created in this optional lab
   section. The presence of these files can impact subsequent labs where default
   behavior is assumed.
   rm /etc/default/vxassist
   rm /etc/default/alt_vxassist
5  Remove all of the volumes in the namedg disk group.
   vxassist -g namedg remove volume namevol1
   vxassist -g namedg remove volume namevol2


Lab 5: Making Basic Configuration Changes

This lab provides practice in making basic configuration changes.
In this lab, you add mirrors and logs to existing volumes, and change the volume
read policy. You also resize volumes, rename disk groups, and move data between
systems.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 5 Solutions: Making Basic Configuration Changes

This lab provides practice in making basic configuration changes. In this lab, you
add mirrors and logs to existing volumes, and change the volume read policy. You
also resize volumes, rename disk groups, and move data between systems.
The Lab Exercises for this lab are located on the following page.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition to this, you also need four external disks to be
used during the labs.
At the beginning of this lab, you should have a disk group called namedg that has
four external disks and no volumes in it.

Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                                 Sample Value               Your Value
root password                          veritas
Host name                              train1
Host name of the system sharing
disks with my system
(my partner system)                    train2
My Data Disks:                         Solaris: c1t#d0 - c1t#d5
                                       HP-UX: c4t0d0 - c4t0d5
                                       AIX: hdisk21 - hdisk26
                                       Linux: sda - sdf
2nd Internal Disk:                     Solaris: c0t2d0
                                       HP-UX: c3t15d0
                                       AIX: hdisk1
                                       Linux: hdb
Location of Lab Scripts (if any):      /student/labs/sf/sf50
Prefix to be used with object names    name

Administering Mirrored Volumes

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.
1  Ensure that you have a disk group called namedg with four disks in it. If not,
   create the disk group using four disks.
   Note: If you have completed the previous lab steps, you should already have
   the namedg disk group with four disks and no volumes.
   VEA
   Select the Disk Groups node in the object tree and select namedg.
   CLI
   vxdisk -o alldgs list
2  Create a 50-MB, two-column striped volume called namevol1 in namedg.
   VEA
   a  Select the namedg disk group, and select Actions->New Volume.
   b  In the New Volume wizard, let VxVM determine which disks to use.
   c  Type the volume name, specify a volume size of 50 MB, and select a
      Striped layout.
   d  Complete the wizard by accepting all remaining defaults to create the
      volume.
   CLI
   vxassist -g namedg make namevol1 50m \
   layout=stripe ncol=2
3  Display the volume layout. How are the disks allocated in the volume? Note
   the disk devices used for the volume.
   VEA
   a  Select the Volumes node in the object tree and namevol1 in the right
      pane view.
   b  Select Actions->Layout View.
   Note the disk devices used for the first plex.
   CLI
   vxprint -g namedg -htr
   Notice which two disks are allocated to the first plex and record your
   observation.
4  Add a mirror to namevol1, and display the volume layout. What is the layout
   of the second plex? Which disks are used for the second plex?
   VEA
   a  Highlight the volume to be mirrored, and select Actions->Mirror->
      Add.
   b  Accept the defaults in the Add Mirror dialog box and click OK.
   c  Select namevol1 and select Actions->Layout View.
   Note the disk devices used for the second plex. Note that the default layout
   used for the second plex is the same as the first plex.
   CLI
   vxassist -g namedg mirror namevol1
   vxprint -g namedg -htr
   Note the disk devices used for the second plex. Note that the default layout
   used for the second plex is the same as the first plex.
5  Add a dirty region log to namevol1 and specify the disk to use for the DRL.
   Display the volume layout.
   VEA
   a  Highlight the namevol1 volume, and select Actions->Log->Add.
   b  In the Add Log dialog box, select Manually assign destination disks.
   c  Select one of the disks and click Add to add it to the Selected disks list.
   d  Click OK to complete.
      Note: If you receive an error message indicating that VEA could not
      allocate enough space for the log, ignore the message.
   e  Highlight the volume under the Volumes node in the object tree and
      click the Logs tab.
   CLI
   vxassist -g namedg addlog namevol1 logtype=drl namedg01
   vxprint -g namedg -rth
6  Add a second dirty region log to namevol1 and specify another disk to use
   for the DRL. Display the volume layout.
   VEA
   a  Highlight the namevol1 volume, and select Actions->Log->Add.
   b  In the Add Log dialog box, select Manually assign destination disks.
   c  Select one of the disks and click Add to add it to the Selected disks list.
   d  Click OK to complete.
      Note: If you receive an error message indicating that VEA could not
      allocate enough space for the log, ignore the message.
   e  Highlight the volume under the Volumes node in the object tree and
      click the Logs tab.
   CLI
   vxassist -g namedg addlog namevol1 logtype=drl namedg02
   vxprint -g namedg -rth
7  Remove the first dirty region log that you added to the volume. Display the
   volume layout. Can you control which log was removed?
   VEA
   a  Highlight the namevol1 volume, and select Actions->Log->
      Remove.
   b  In the Remove Log dialog box, select the log you want to remove and
      click Add to add it to the Selected logs list.
   c  Click OK and confirm when prompted to complete.
   d  Highlight the volume under the Volumes node in the object tree and
      click the Logs tab.
   CLI
   vxassist -g namedg remove log namevol1 !namedg01
   vxprint -g namedg -rth
8  Find out what the current volume read policy for namevol1 is. Change the
   volume read policy to round robin, and display the volume layout.
   VEA
   a  Right-click the namevol1 volume in the right pane view, and select
      Properties. Observe the existing value of the Read policy field. It
      should indicate the default value of Based on layouts.
   b  Highlight the namevol1 volume, and select Actions->Set Volume
      Usage.
   c  Select the Round robin option and click OK.
   d  Right-click the namevol1 volume in the right pane view, and select
      Properties. Observe the existing value of the Read policy field. It
      should have changed to Round robin.
   CLI
   vxprint -g namedg -htr
   You should observe that the read policy shows as SELECT, which is the
   value used for select based on layouts.
   vxvol -g namedg rdpol round namevol1
   vxprint -g namedg -rth
   The value of the attribute will change to ROUND.
9  Remove the original mirror (namevol1-01) from namevol1, and display
   the volume layout.
   VEA
   a  Highlight the namevol1 volume in the object tree, and click the
      Mirrors tab in the right pane.
   b  Right-click a plex, and select Actions->Remove Mirror.
   c  In the Remove Mirror dialog box, click Yes.
   d  Highlight the namevol1 volume, and select Actions->Layout View.
   Note that the DRL log is not removed automatically when you remove the
   mirror by specifying the plex name.
   CLI
   vxassist -g namedg remove mirror namevol1 \
   !disk_used_by_original_mirror
   vxprint -g namedg -rth
   Note that the DRL log will also be removed automatically with this
   command because the volume is no longer mirrored.
10 Remove namevol1.
   VEA
   Highlight the namevol1 volume, and select Actions->Delete Volume.
   CLI
   vxassist -g namedg remove volume namevol1

Resizing a Volume

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
1  If you have not already done so, remove the volumes created in the previous
   lab in namedg.
   VEA
   For each volume in your disk group, highlight the volume, and select
   Actions->Delete Volume.
   CLI
   vxassist -g namedg remove volume volume_name
2  Create a 20-MB concatenated mirrored volume called namevol1 in namedg.
   Create a Veritas file system on the volume and mount it to /name1. Make sure
   that the file system is not added to the file system table.
   VEA
   a  Highlight the namedg disk group, and select Actions->New Volume.
   b  Specify a volume name, the size, a concatenated layout, and select
      mirrored.
   c  Ensure that "Enable logging" is not checked.
   d  Add a VxFS file system and set the mount point. Uncheck the Add to
      file system table option.
   e  Complete the wizard.
   CLI
   vxassist -g namedg make namevol1 20m layout=mirror
   mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
   Note: On Linux, use mkfs -t.
   mkdir /name1   (if necessary)
   mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
   Note: On Linux, use mount -t.
3  View the layout of the volume and display the size of the file system.
   VEA
   Highlight the volume in the object tree and click each of the tabs in the
   right pane to display information about Mirrors, Logs, and Subdisks.
   You can also select Actions->Volume View, click the Expand button, and
   compare the information to the main window.
   To view the file system size, select the File Systems node in the object tree
   and observe the Size column for the /name1 file system in the right pane
   view.
   CLI
   Solaris, Linux, AIX:
   vxprint -g namedg -rth
   df -k /name1
   HP-UX:
   vxprint -g namedg -rth
   bdf /name1
4  Add data to the volume by creating a file in the file system and verify that the
   file has been added.
   echo "hello name" > /name1/hello

5  Expand the file system and volume to 100 MB. Observe the volume layout to
   see the change in size. Display the file system size.
   VEA
   a  Highlight the volume and select Actions->Resize Volume.
   b  In the Resize Volume dialog box, specify 100 MB in the "New volume
      size" field, and click OK.
   c  Right-click the volume and select Properties to observe the change in
      size.
   d  For the file system size, select the File Systems node in the object tree
      and observe the Size column for the /name1 file system.
   CLI
   Solaris, Linux, AIX:
   vxresize -g namedg namevol1 100m
   vxprint -g namedg -rth
   df -k /name1
   HP-UX:
   vxresize -g namedg namevol1 100m
   vxprint -g namedg -rth
   bdf /name1

Resizing a File System Only: CLI

Note: This exercise should be performed using the command line interface
because the VEA does not allow you to create a file system smaller in size than the
underlying volume. You also cannot change the size of the volume and the file
system separately using the GUI.
1  Create a 50-MB concatenated volume named namevol2 in the namedg disk
   group.
   vxassist -g namedg make namevol2 50m
2  Create a Veritas file system on the volume by using the mkfs command.
   Specify the file system size as 40 MB.
   mkfs -F vxfs /dev/vx/rdsk/namedg/namevol2 40m
   Note: On Linux, use mkfs -t.
3  Create a mount point /name2 on which to mount the file system, if it does
   not already exist.
   mkdir /name2   (if necessary)
4  Mount the newly created file system on the mount point /name2.
   mount -F vxfs /dev/vx/dsk/namedg/namevol2 /name2
   Note: On Linux, use mount -t.
5  Verify disk space using the df command (or the bdf command on HP-UX).
   Observe that the total size of the file system is smaller than the size of the
   volume.
   Solaris, Linux, AIX:
   df -k /name2
   HP-UX:
   bdf /name2
6  Expand the file system to the full size of the underlying volume using the
   fsadm -b newsize option.
   fsadm -b 50m -r /dev/vx/rdsk/namedg/namevol2 /name2
7  Verify disk space using the df command (or the bdf command on HP-UX).
   Solaris, Linux, AIX:
   df -k /name2
   HP-UX:
   bdf /name2
8  Make a file on the file system mounted at /name2, so that the free space is less
   than 50 percent of the total file system size.
   dd if=/dev/zero of=/name2/25_mb bs=1024k count=25
9  Shrink the file system to 50 percent of its current size. What happens?
   fsadm -b 25m -r /dev/vx/rdsk/namedg/namevol2 /name2
   The command fails. You cannot shrink the file system because blocks are
   currently in use.
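The refusal can be seen with rough numbers (illustrative figures, not actual fsadm accounting):

```shell
# A 50-MB file system holding a 25-MB file cannot shrink to 25 MB:
# after the shrink there would be no room left for the in-use blocks
# (even before metadata is counted), so fsadm refuses.
fs_kb=51200       # current file system size (50 MB)
used_kb=25600     # the 25-MB file written with dd
target_kb=25600   # requested size: fsadm -b 25m
echo "free after shrink: $((target_kb - used_kb)) KB"
# → free after shrink: 0 KB
```

Removing the 25-MB file (or choosing a larger target size) would let the shrink succeed.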
Renaming

a Disk Group

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
1

Try to rename the namedgdisk group to namedgl while the / namel and
/ name2 tile systems are still mounted. Can you do it'?

VEA
a

Highlight the namedg disk group, and select Actions->Rename


Group.

Type in the new name and click OK.

Disk

You receive an error messageindicating that the volumes in the disk group
are in use.

VERITAS Storage Foundation

8-56
Copyright

if

20U6

Svmantec

Corporauor-

Allnghl':'

rcservec

5.0 for UNIX

Fundamentals

eLl
vxdg

-n

namedgl

deport

namedg

You receive an error messageindicating that the volumes in the disk group
are in USI'.
2  Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories
   and their subdirectories. What do you see?
       ls -lR /dev/vx/rdsk
   This directory contains a subdirectory for each imported disk group,
   which contains the character devices for the volumes in that disk group.
       ls -lR /dev/vx/dsk
   This directory contains a subdirectory for each imported disk group,
   which contains the block devices for the volumes in that disk group.
3  Unmount all the mounted file systems in the namedg disk group.
   VEA
   a  Select the File Systems node in the object tree and highlight the file
      systems you want to unmount in the right pane view.
   b  Select Actions->Unmount File System.
   c  Confirm when prompted.
   CLI
       umount /name1
       umount /name2

4  Rename the namedg disk group to namedg1. Do not forget to start the
   volumes in the disk group after the renaming if you are using the command
   line interface.
   VEA
   a  Highlight the namedg disk group, and select Actions->Rename Disk
      Group.
   b  Type in the new name and click OK.
   CLI
       vxdg -n namedg1 deport namedg
       vxdg import namedg1
       vxvol -g namedg1 startall

5  Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories
   and their subdirectories. What has changed?
       ls -lR /dev/vx/rdsk
       ls -lR /dev/vx/dsk
   The device subdirectories are rebuilt with the new name of the disk group.
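A quick way to confirm that the rebuilt device tree is consistent is to check that every block device node has a same-named raw counterpart. A sketch; the helper name check_dev_pairs is invented for illustration, and it works on any pair of directories, for example /dev/vx/dsk/namedg1 and /dev/vx/rdsk/namedg1:

```shell
# For each entry in the block-device directory, verify that a same-named
# entry exists in the raw-device directory. Prints a summary line and
# returns nonzero if any counterpart is missing.
check_dev_pairs() {
    blkdir=$1
    rawdir=$2
    for node in "$blkdir"/*; do
        name=${node##*/}
        if [ ! -e "$rawdir/$name" ]; then
            echo "missing raw node: $name"
            return 1
        fi
    done
    echo "all block devices have raw counterparts"
}
```

Typical use after the rename: check_dev_pairs /dev/vx/dsk/namedg1 /dev/vx/rdsk/namedg1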
6  Observe the disk media names. Is there any change?
   VEA
   Select namedg1 in the object tree and click the Disks tab.
   Observe the Internal name column. There should be no change in disk
   media names.
   CLI
       vxdisk -o alldgs list
       vxprint -g namedg1 -htr
   There should be no change in disk media names.
7  Mount the /name1 and /name2 file systems, and observe their contents.
   VEA
   For each volume:
   a  Highlight the volume, and select Actions->File System->Mount File
      System.
   b  Type the Mount point and unselect the Add to file system table option.
   c  Click OK to complete.
   CLI
       mount -F vxfs /dev/vx/dsk/namedg1/namevol1 /name1
       mount -F vxfs /dev/vx/dsk/namedg1/namevol2 /name2
   Note: On Linux, use mount -t.
       ls -l /name1
       ls -l /name2

Moving Data Between Systems

You can complete this exercise using either the VEA or the CLI interface.
Solutions are provided for both.
Note: If you are sharing a disk array, each participant should make sure that the
prefix used for object names is unique.
1  Copy new data to the /name1 and /name2 file systems. For example, copy
   the /etc/hosts file to /name1 and the /etc/group file to /name2.
       cp /etc/hosts /name1
       cp /etc/group /name2

2  View all the disk devices on the system.
   VEA
   Select the Disks node in the object tree and observe the disks in the right
   pane. Note the Status column.
   CLI
       vxdisk -o alldgs list

3  Unmount all file systems in the namedg1 disk group and deport the disk
   group. Do not give it a new owner. View all the disk devices on the system.
   VEA
   a  Select the File Systems node in the object tree and highlight the file
      systems you want to unmount.
   b  Select Actions->Unmount File System.
   c  Confirm when prompted.
   d  Select the disk group and select Actions->Deport Disk Group.
   e  Click OK.
   f  Confirm your request when prompted in the Deport Disk Group
      dialog box.
   g  Select the Disks node in the object tree and observe the disks in the
      right pane. Note the change in the Status column.
   CLI
       umount /name1
       umount /name2
       vxdg deport namedg1
       vxdisk -o alldgs list

4  Identify the name of the system that is sharing access to the same disks as your
   system. If you are not sure, check with your instructor. Note the name of the
   partner system here.
   Partner system hostname:

5  Using the command line interface, perform the following steps on your partner
   system:
   Note: If you are working on a standalone system, skip step a in the following
   and use your own system as the partner system.
   a  Remote login to the partner system.
          rlogin partner_system_hostname
   b  Import the namedg1 disk group on the partner system, start the volumes in
      the imported disk group, and view all the disk devices on the system.
      On the partner system:
          vxdg import namedg1
          vxvol -g namedg1 startall
          vxdisk -o alldgs list
   c  While still logged in to the partner system, mount the /name1 and
      /name2 file systems. Note that you will need to create the mount
      directories on the partner system before mounting the file systems. Observe
      the data in the file systems.
      On the partner system:
          mkdir /name1
          mkdir /name2
          mount -F vxfs /dev/vx/dsk/namedg1/namevol1 /name1
          mount -F vxfs /dev/vx/dsk/namedg1/namevol2 /name2
      Note: On Linux, use mount -t.
          ls -l /name1
          ls -l /name2
      The data should be the same as it was on your own system.
   d  Unmount the file systems on your partner system.
      On the partner system:
          umount /name1
          umount /name2
   e  On your partner system, deport namedg1 and assign your own machine
      name, for example, train5, as the New host.
      On the partner system:
          vxdg -h your_system_name deport namedg1
   f  Exit from the partner system.
      Type exit.
6  On your own system import the disk group and change its name back to
   namedg. View all the disk devices on the system.
   VEA
   a  Select the disk group under the Disk Groups node and select
      Actions->Import Disk Group.
   b  In the Import Disk Group dialog box, type namedg in the New name
      field, verify that the "Start all volumes" option is checked, and click
      OK.
   c  Select the Disks node in the object tree and observe the disks in the
      right pane. The status should change to Imported.
   CLI
       vxdg -n namedg import namedg1
       vxvol -g namedg startall
       vxdisk -o alldgs list
7  Deport the disk group namedg by assigning the ownership to anotherhost.
   View all the disk devices on the system. Why would you do this?
   VEA
   a  Select the disk group under the Disk Groups node and select
      Actions->Deport Disk Group.
   b  In the Deport Disk Group dialog box, check Deport options, and type
      anotherhost in the New host field.
   c  Click OK and confirm when prompted.
   In the list of disks, the status of the disks in the deported disk group is
   displayed as Foreign.
   You would do this to ensure that the disks are not imported accidentally
   by any system other than the one whose name you assigned to the disks.
   CLI
       vxdg -h anotherhost deport namedg
       vxdisk -o alldgs list
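The ownership assigned at deport time is recorded in the hostid field that the next steps ask you to note. A sketch that extracts it with awk; the helper name hostid_of is invented here, and the sample vxdisk list fragment is fabricated (the "hostid:" label is an assumption about the output format):

```shell
# Print the value of the "hostid:" line from vxdisk list output read on
# stdin. On a live system: vxdisk list c1t1d0 | hostid_of
hostid_of() {
    awk '$1 == "hostid:" { print $2 }'
}

# Fabricated fragment of vxdisk list output:
printf 'device:    c1t1d0s2\nhostid:    anotherhost\n' | hostid_of
```

For the disk group deported above, the extracted value would be the new owner you assigned (anotherhost in the fabricated sample).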

8  From the command line display detailed information about one of the disks in
   the disk group using the vxdisk list device_tag command. Note the
   hostid field in the output.
       vxdisk list device_tag
   where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX,
   and sd# for Linux platforms.
9  Import namedg. Were you successful?
   VEA
   a  Select the disk group and select Actions->Import Disk Group.
   b  In the Import Disk Group dialog box, click OK.
   This operation should fail, because namedg belongs to another host.
   CLI
       vxdg import namedg
   This operation should fail, because namedg belongs to another host.

10 Now import namedg and overwrite the disk group lock. What did you have to
   do to import it and why?
   VEA
   a  Select the disk group and select Actions->Import Disk Group.
   b  In the Import Disk Group dialog box, mark the Clear host ID check
      box, verify that the "Start all volumes" option is checked, and click
      OK.
   c  Confirm when prompted.
   CLI
       vxdg -C import namedg
       vxvol -g namedg startall
   You had to clear the host ID lock, because the disk group was still owned
   by another host.
11 From the command line display detailed information about the same disk in the
   disk group as you did in step 8 using the vxdisk list device_tag
   command. Note the change in the hostid field in the output.
       vxdisk list device_tag
   where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX,
   and sd# for Linux platforms.


12 Remove all of the volumes in the namedg disk group.
   VEA
   a  Select the namedg disk group and click the Volumes tab in the right
      pane view.
   b  Highlight all volumes in the window.
   c  Select Actions->Delete Volume.
   d  Click Yes To All.
   CLI
       vxassist -g namedg remove volume namevol1
       vxassist -g namedg remove volume namevol2

Preparation for Defragmenting a Veritas File System Lab

A lab exercise in the next lesson requires that you run a script that sets up files with
different size extents. Because the script can take a long time to run, you may want
to begin running the script now, so that the necessary environment is created by the
next lab time.
1  Identify the device tag for the second internal disk on your lab system. If you
   do not have a second internal disk or if you cannot use the second internal disk,
   use one of the external disks allocated to you.
   Second internal disk (or the external disk used in this lab):
2  Initialize the second internal disk (or the external disk used in this lab) using a
   non-CDS disk format.
   Solaris, Linux, AIX:
       vxdisksetup -i device_tag format=sliced
   where device_tag is c#t#d# for Solaris.
   HP-UX:
   Note: Check the status of the second internal disk using the vxdisk
   list command. If the disk is displayed as an LVM disk, ensure that
   it is not used by any active LVM volume groups and take it out of
   LVM control using the pvremove command. If the pvremove
   command fails due to an exported volume group information left on
   the disk, re-create an LVM header using the force option
   (pvcreate -f /dev/rdsk/device_name) before using the
   pvremove command to remove it.
       vxdisk list
   If necessary:
       vgdisplay -v /dev/vg00
       pvcreate -f /dev/rdsk/device_tag
       pvremove /dev/rdsk/device_tag
   where device_tag is the device name of the second internal
   disk in the format c#t#d#.
       vxdctl enable
       vxdisk list
       vxdisksetup -i device_tag format=hpdisk
   where device_tag is c#t#d# for HP-UX.
3  Create a non-CDS disk group called testdg using the disk you initialized in
   step 2.
       vxdg init testdg testdg01=device_tag cds=off
   where device_tag is c#t#d# for Solaris and HP-UX.

4  In the testdg disk group create a 1-GB concatenated volume called
   testvol, initializing the volume space with zeros using the init=zero
   option to vxassist.
       vxassist -g testdg make testvol 1g init=zero
5  Create a VxFS file system on testvol and mount it on /fs_test.
       mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
   Note: On Linux, use mkfs -t.
       mkdir /fs_test
       mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
   Note: On Linux, use mount -t.
6  Ask your instructor for the location of the extents.sh script. Run the
   extents.sh script.
   Note: This script can take about 15 minutes to run.
       /student/labs/sf/sf50/extents.sh

7  Verify that the VRTSspt software is already installed on your system. If not,
   ask your instructor for the location of the software and install it.
   Note: Before Storage Foundation 5.0, the VRTSspt software was provided as
   a separate support utility that needed to be installed by the user. With 5.0, this
   software is installed as part of the product installation.
   Solaris, Linux, AIX:
       pkginfo | grep VRTSspt
   HP-UX:
       swlist -l product | grep VRTSspt
8  Ensure that the directory where the vxbench command is located is included
   in your PATH definition.
       echo $PATH | grep -i vxbench
   If necessary:
       export PATH=$PATH:/opt/VRTSspt/FS/VxBench

Making Basic Configuration


Copyright?:,

2006

B-65

Changes

Syrnantec

Corporation.

All IIghl<;

reserved

B-66

VERITAS
Coovnqht:';

2006

Syrw}i\\'-'C

Storage Foundation
Corporatron.

All rights

roverveo

5.0 for UNIX: Fundamentals

,S}lmmh'(.

Lab 6: Administering File Systems

In this lab, you practice file system administration, including defragmentation
and administering the file change log.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 6 Solutions: Administering File Systems

In this lab, you practice file system administration, including defragmentation and
administering the file change log.
The Lab Exercises for this lab are located on the following page:

Setup

To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition to this, you also need four external disks and
the second internal disk to be used during the labs. If you do not have a second
internal disk or if you cannot use the second internal disk, you need five external
disks to complete the labs.
At the beginning of this lab, you should have a disk group called namedg that has
four external disks and no volumes in it. The second internal disk should be empty
and unused.
Note: If you are working in a North American Mobile Academy lab environment,
you cannot use the second internal disk during the labs. If that is the case, select
one of the external disks to complete the lab steps.


Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                      Sample Value               Your Value
My Data Disks:              Solaris: c1t#d0 - c1t#d5
                            HP-UX: c4t0d0 - c4t0d5
                            AIX: hdisk21 - hdisk26
                            Linux: sda - sdf
2nd Internal Disk:          Solaris: c0t2d0
                            HP-UX: c3t15d0
                            AIX: hdisk1
                            Linux: hdb
Location of Lab Scripts
(if any):                   /student/labs/sf/sf50
Prefix to be used with
object names:               name
5.0 for UNIX: Fundamentals

Preparation for Defragmenting a Veritas File System Lab

Note: If you have already performed these steps at the end of the last lab, then you
can skip this section and proceed with the Defragmenting a Veritas File System
section.
1  Identify the device tag for the second internal disk on your lab system. If you
   do not have a second internal disk or if you cannot use the second internal disk,
   use one of the external disks allocated to you.
   Second internal disk (or the external disk used in this lab):
2  Initialize the second internal disk (or the external disk used in this lab) using a
   non-CDS disk format.
   Solaris:
       vxdisksetup -i device_tag format=sliced
   where device_tag is c#t#d# for Solaris.
   HP-UX:
   Note: Check the status of the second internal disk using the vxdisk
   list command. If the disk is displayed as an LVM disk, ensure that
   it is not used by any active LVM volume groups and take it out of
   LVM control using the pvremove command. If the pvremove
   command fails due to an exported volume group information left on
   the disk, re-create an LVM header using the force option
   (pvcreate -f /dev/rdsk/device_name) before using the
   pvremove command to remove it.
       vxdisk list
   If necessary:
       vgdisplay -v /dev/vg00
       pvcreate -f /dev/rdsk/device_tag
       pvremove /dev/rdsk/device_tag
   where device_tag is the device name of the second internal
   disk in the format c#t#d#.
       vxdctl enable
       vxdisk list
       vxdisksetup -i device_tag format=hpdisk
   where device_tag is c#t#d# for HP-UX.
3  Create a non-CDS disk group called testdg using the disk you initialized in
   step 2.
       vxdg init testdg testdg01=device_tag cds=off
   where device_tag is c#t#d# for Solaris and HP-UX.
4  In the testdg disk group create a 1-GB concatenated volume called
   testvol, initializing the volume space with zeros using the init=zero
   option to vxassist.
       vxassist -g testdg make testvol 1g init=zero
5  Create a VxFS file system on testvol and mount it on /fs_test.
       mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
   Note: On Linux, use mkfs -t.
       mkdir /fs_test
       mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
   Note: On Linux, use mount -t.
6  Ask your instructor for the location of the extents.sh script. Run the
   extents.sh script.
   Note: This script can take about 15 minutes to run.
       /student/labs/sf/sf50/extents.sh
7  Verify that the VRTSspt software is already installed on your system. If not,
   ask your instructor for the location of the software and install it.
   Note: Before Storage Foundation 5.0, the VRTSspt software was provided as
   a separate support utility that needed to be installed by the user. With 5.0, this
   software is installed as part of the product installation.
   Solaris, Linux, AIX:
       pkginfo | grep VRTSspt
   HP-UX:
       swlist -l product | grep VRTSspt
8  Ensure that the directory where the vxbench command is located is included
   in your PATH definition.
       echo $PATH | grep -i vxbench
   If necessary:
       export PATH=$PATH:/opt/VRTSspt/FS/VxBench


Defragmenting a Veritas File System

The purpose of this section is to examine the structure of a fragmented and an
unfragmented file system and compare the file system's throughput in each case.
The general steps in this exercise are:
-  Make and mount a file system
-  Examine the structure of the new file system for extents allocated
-  Then examine a fragmented file system and report the degree of fragmentation
   in the file system
-  Use a support utility called vxbench to measure throughput to specific files
   within the fragmented file system
-  Defragment the file system, reporting the degree of fragmentation
-  Repeat executing the vxbench utility using identical parameters to measure
   throughput to the same files within a relatively unfragmented file system
-  Compare the total throughput before and after the defragmentation process
1  In the namedg disk group create a 1-GB concatenated volume called
   namevol1.
       vxassist -g namedg make namevol1 1g
2  Create a VxFS file system on namevol1 and mount it on /name1.
       mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
   Note: On Linux, use mkfs -t.
       mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
   Note: On Linux, use mount -t.

3  Run a fragmentation report on /name1 to analyze directory and extent
   fragmentation. Is a newly created, empty file system considered fragmented?
   In the report, what percentages indicate a file system's fragmentation?
       fsadm -D -E /name1

   Directory Fragmentation Report
            Dirs      Total   Immed  Immeds  Dirs to  Blocks to
            Searched  Blocks  Dirs   to Add  Reduce   Reduce
   total    2         0       2      1       0        0

   Extent Fragmentation Report
          Total  Average    Average    Total
          Files  File Blks  # Extents  Free Blks
   total  0      0          0          1030827
   blocks used for indirects: 0
   % Free blocks in extents smaller than 64 blks: 0.01
   % Free blocks in extents smaller than  8 blks: 0.00
   % blks allocated to extents 64 blks or larger: 0.00
   Free Extents By Size
   (sample listing of free-extent counts by size class, 1 through
   2147483648 blocks, not reproduced here)

   A newly created file system with no files or directories cannot be
   fragmented.
   The following table displays the percentages you should be observing in
   the output of the fragmentation report to determine if a file system with
   files and directories is fragmented.

   Percentage                               Unfragmented   Badly Fragmented
   % of Free blocks in extents
   smaller than 64 blocks                   < 5%           > 50%
   % of Free blocks in extents
   smaller than 8 blocks                    < 1%           > 5%
   % blks allocated to extents
   64 blks or larger                        > 5%           < 5%
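The thresholds in the table can be applied mechanically to the three percentage lines of an fsadm -E report. A sketch using the "badly fragmented" cutoffs (50%, 5%, 5%) that the analysis later in this lab applies; the helper name classify_frag is invented here, and the sample lines are the figures from the fragmented /fs_test report that appears later in this lab:

```shell
# Classify a file system from the three percentage lines of an
# "fsadm -E" report read on stdin. Free space mostly in large extents
# (below the 50% and 5% small-extent cutoffs) and at least 5% of blocks
# in 64-block-or-larger extents count as "not fragmented".
classify_frag() {
    awk '
        /smaller than 64 blks/ { p64 = $NF }
        /smaller than +8 blks/ { p8  = $NF }
        /64 blks or larger/    { big = $NF }
        END {
            if (p64 + 0 < 50 && p8 + 0 < 5 && big + 0 > 5)
                print "not fragmented"
            else
                print "fragmented"
        }'
}

# Sample percentages from the fragmented /fs_test report in this lab:
printf '%s\n' \
    '% Free blocks in extents smaller than 64 blks: 33.44' \
    '% Free blocks in extents smaller than 8 blks: 18.89' \
    '% blks allocated to extents 64 blks or larger: 42.07' |
    classify_frag
```

With these sample figures the verdict is "fragmented" (18.89% of free blocks sit in extents smaller than 8 blocks, well over the 5% cutoff).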

4  What is a fragmented file system?
   A fragmented file system is a file system where the free space is in
   relatively small extents scattered throughout different allocation units
   within the file system.
5  If you were shown the following directory fragmentation report about a file
   system, what would you conclude?

   Directory Fragmentation Report
          Dirs      Total   Immed   Immeds  Dirs to  Blocks to
          Searched  Blocks  Dirs    to Add  Reduce   Reduce
   total  199185    85482   115118  5407    5473     5655

   A high total in the Dirs to Reduce column indicates that the directories are
   not optimized. This file system's directories should be optimized by
   directory defragmentation.
6  Unmount /name1 and remove namevol1 in the namedg disk group.
       umount /name1
       vxassist -g namedg remove volume namevol1

Note: The following steps use the /fs_test file system to analyze the
impact of fragmentation on the file system performance. Verify that the
extents.sh script has completed before you continue with the rest of this lab.
7  Run a fragmentation report on /fs_test to analyze directory and extent
   fragmentation. Is /fs_test fragmented? Why or why not? What should be
   done?
       fsadm -D -E /fs_test

   Directory Fragmentation Report
            Dirs      Total   Immed  Immeds  Dirs to  Blocks to
            Searched  Blocks  Dirs   to Add  Reduce   Reduce
   total    2         0       2      1       0        0

   Extent Fragmentation Report
          Total  Average    Average    Total
          Files  File Blks  # Extents  Free Blks
   total  55     5102       641        750037
   blocks used for indirects: 640
   % Free blocks in extents smaller than 64 blks: 33.44
   % Free blocks in extents smaller than  8 blks: 18.89
   % blks allocated to extents 64 blks or larger: 42.07
   Free Extents By Size
       1: 16891      2: 11505      4: 25446      8: 10868
      16: 1384      32: 2         64: 0        128: 0
     256: 10       512: 0       1024: 1       2048: 0
    4096: 1       8192: 0      16384: 0      32768: 1
   65536: 1     131072: 1     262144: 1     524288: 0
   (all larger size classes: 0)

   The Dirs to Reduce column is 0. Therefore, the directories do not need to
   be optimized. But the extents need to be optimized, because:
   -  % Free blocks in extents smaller than 64 blks: 33.44 (<50%) - OK
   -  % Free blocks in extents smaller than 8 blks: 18.89 (>5%) - Not OK
   -  % blks allocated to extents 64 blks or larger: 42.07 (>5%) - OK
   Therefore, the file system's extents should be defragmented.
8  Use the ls -le command to display the extent attributes of the files in the
   /fs_test file system. Note that on the Solaris platform you need to use the
   ls command provided by the VxFS file system software to be able to use the
   -e option.
   Solaris:
       /usr/lib/fs/vxfs/bin/ls -le /fs_test
   Other platforms:
       ls -le /fs_test

   -rw-r--r--  1 root  other  2048000 Jul 14 17:57 test42  ext 2
   -rw-r--r--  1 root  other  4096000 Jul 14 17:57 test44  ext 4
   -rw-r--r--  1 root  other  6144000 Jul 14 17:57 test46  ext 6
   -rw-r--r--  1 root  other  8192000 Jul 14 17:57 test48  ext 8
   -rw-r--r--  1 root  other  8192000 Jul 14 17:57 test50
   -rw-r--r--  1 root  other  2048000 Jul 14 17:57 test52  ext 2
   -rw-r--r--  1 root  other  4096000 Jul 14 17:57 test54  ext 4
   -rw-r--r--  1 root  other  6144000 Jul 14 17:57 test56  ext 6
   -rw-r--r--  1 root  other  8192000 Jul 14 17:57 test58  ext 8000

   The two files used in the performance tests that follow are test48 (an
   8-MB file built from small, 8-block extents) and test58 (an 8-MB file
   built from large, 8000-block extents).
9  Measure the sequential read throughput to a particular file, for example, an
   8-MB file on an 8K extent (for example, /fs_test/test48), in a
   fragmented file system using the vxbench utility and record the results. Use
   an 8K sequential I/O size.
   Notes:
   -  You need to use the vxbench utility that is appropriate for the platform
      you are working on, for example vxbench_9 on Solaris 9. To identify the
      appropriate vxbench command, use the
      ls -l /opt/VRTSspt/FS/VxBench command. If this path is not in
      your PATH environment variable, use the full path of the command while
      running the corresponding vxbench utility.
   -  Remount the file system before running each I/O test.
   Solaris:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_9 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test48
   HP-UX:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_11.23_pa64 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test48
   A sample output is provided here as an example:
   total: 7.147 sec 1119.40 KB/s cpu: 0.12 sys 0.00 user

10 Repeat the same test for an 8-MB file on an 8-MB extent (for example, using
   the /fs_test/test58 file). Note that the file system must be remounted
   between the tests. Can you explain why?
   The file system must be remounted to clear the read buffers.
   Solaris:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_9 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test58
   HP-UX:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_11.23_pa64 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test58
   A sample output is provided here as an example:
   total: 0.206 sec 38911.83 KB/s cpu: 0.17 sys 0.01 user
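The KB/s figure vxbench reports is consistent with iosize multiplied by iocount, divided by the elapsed time. Checking the sample above (1000 reads of 8 KB in 0.206 seconds):

```shell
# 8 KB * 1000 I/Os / 0.206 s: a rough KB/s figure that lands close to
# the 38911.83 KB/s in the sample output (vxbench itself times at finer
# granularity than the printed seconds, hence the small difference).
awk 'BEGIN { printf "%.0f KB/s\n", 8 * 1000 / 0.206 }'
```

This prints 38835 KB/s, within a fraction of a percent of the reported value.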

11 Defragment /fs_test and gather summary statistics after each pass through
   the file system. After the defragmentation completes, determine if /fs_test
   is still fragmented. Why or why not?
   Note: The defragmentation can take about 5 minutes to complete.
       fsadm -e -E -s /fs_test

   Extent Fragmentation Report (before defragmentation)
          Total  Average    Average    Total
          Files  File Blks  # Extents  Free Blks
   total  55     5102       641        750037
   blocks used for indirects: 640
   % Free blocks in extents smaller than 64 blks: 33.44
   % Free blocks in extents smaller than  8 blks: 18.89
   % blks allocated to extents 64 blks or larger: 42.07
   Free Extents By Size
       1: 16891      2: 11505      4: 25446      8: 10868
      16: 1384      32: 2         64: 0        128: 0
     256: 10       512: 0       1024: 1       2048: 0
    4096: 1       8192: 0      16384: 0      32768: 1
   65536: 1     131072: 1     262144: 1     524288: 0
   (all larger size classes: 0)

   Pass 1 Statistics
          Extents   Reallocations  Ioctls          Errors
          Searched  Attempted      Issued  FileBusy  NoSpace  Total
   total  35210     16151          45      0         0        0
   Pass 2 Statistics
          Extents   Reallocations  Ioctls          Errors
          Searched  Attempted      Issued  FileBusy  NoSpace  Total
   total  18296     8643           33      33        0        33

   Extent Fragmentation Report (after defragmentation)
          Total  Average    Average    Total
          Files  File Blks  # Extents  Free Blks
   total  55     2833       333        744605
   blocks used for indirects: 608
   % Free blocks in extents smaller than 64 blks: 8.89
   % Free blocks in extents smaller than  8 blks: 0.93
   % blks allocated to extents 64 blks or larger: 46.94
   Free Extents By Size
       1: 2173       2: 38         4: 1161       8: 1122
      16: 1104      32: 1021      64: 994      128: 989
     256: 605      512: 5       1024: 3       2048: 0
    4096: 0       8192: 0      16384: 0      32768: 1
   65536: 0     131072: 0     262144: 1     524288: 0
   (all larger size classes: 0)

   The file system no longer needs to be defragmented, because:
   -  % Free blocks in extents smaller than 64 blks: 8.89 (<50%) - OK
      (much better than before)
   -  % Free blocks in extents smaller than 8 blks: 0.93 (<1%) - OK
      (much better than before)
   -  % blks allocated to extents 64 blks or larger: 46.94 (>5%) - OK
      (slightly better than before)

12 Measure the throughput of the unfragmented file system using the vxbench
   utility on the same files as you did in steps 9 and 10. Is there any change in
   throughput?
   Notes:
   -  You need to use the vxbench utility that is appropriate for the platform
      you are working on, for example vxbench_9 on Solaris 9. To identify the
      appropriate vxbench command, use the
      ls -l /opt/VRTSspt/FS/VxBench command. If this path is not in
      your PATH environment variable, use the full path of the command while
      running the corresponding vxbench utility.
   -  The file system must be remounted before each test to clear the read
      buffers.
   -  If you have used external shared disks on a disk array used by other
      systems for this lab, the performance results may be impacted by the disk
      array cache and may not provide a valid comparison between a fragmented
      and a defragmented file system.
   Solaris:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_9 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test48
   HP-UX:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_11.23_pa64 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test48
   A sample output is provided here as an example:
   total: 0.241 sec 33187.31 KB/s cpu: 0.13 sys 0.01 user
   Solaris:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_9 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test58
   HP-UX:
       mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol \
           /fs_test
       /opt/VRTSspt/FS/VxBench/vxbench_11.23_pa64 -w read \
           -i iosize=8k,iocount=1000 /fs_test/test58
   A sample output is provided here as an example:
   total: 0.202 sec 39650.48 KB/s cpu: 0.18 sys 0.00 user
   There is an improvement in throughput for both cases, but the
   improvement is highest for the file using small extent sizes (that is, for
   /fs_test/test48).
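Putting the sample numbers for test48 side by side makes the improvement concrete: 7.147 s (1119.40 KB/s) on the fragmented file system in step 9 versus 0.241 s (33187.31 KB/s) after defragmentation. The elapsed-time ratio:

```shell
# Speedup factor for the small-extent file (test48) after
# defragmentation, from the sample elapsed times in steps 9 and 12.
awk 'BEGIN { printf "%.1fx faster\n", 7.147 / 0.241 }'
```

This prints 29.7x faster, while the large-extent file (test58) improves only marginally, which is exactly the pattern you should expect: the small-extent file was the one scattered across the disk.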
13 What is the difference between an unfragmented and a fragmented file system?
   A fragmented file system has free space scattered throughout the file
   system in relatively small extents, whereas an unfragmented file system has
   free space in just a few relatively large extents.
14 Is any one environment more prone to needing defragmentation than another?
   Yes, volatile environments wherein files are grown, shrunk, erased, and
   moved, with ownership changes, and so on, are prone to fragmentation.
   Stable environments, such as Oracle databases and logs, have very little
   impact on the supporting file system, so they require infrequent
   defragmentation.
Reading the File Change Log (FCL)

1 In the namedg disk group, create a new 10-MB volume called namevol1.
Create a VxFS file system on namevol1 and mount it on /fcl_test.
vxassist -g namedg make namevol1 10m
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
mkdir /fcl_test
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /fcl_test
Note: On Linux, use mount -t.

2 Turn the FCL on for /fcl_test and ensure that it is on.
fcladm on /fcl_test
fcladm state /fcl_test
/fcl_test: ON

B-78                  VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

3 Go to the directory that contains the FCL.
cd /fcl_test/lost+found
ls
changelog

4 Display the superblock for /fcl_test.
fcladm print 0 /fcl_test

5 How do you know that there have been no changes in the file system yet?
The superblock (fo) and the end of the FCL file (foff) are the same
number.

6 Add some files to /fcl_test. Then remove one of the files you just added.
cd /fcl_test
touch a b c
rm b

7 Display the superblock for /fcl_test.
fcladm print 0 /fcl_test

8 How do you know that changes have been made to the file system?
The superblock (fo) and the end of the FCL file (foff) are different
numbers.

9 Print the contents of the FCL.
fcladm print 1024 /fcl_test
The fields are Change Type, Inode Number, Inode Generation, and
Timestamp.
The Unlink and Rename types list the name of the file on the following
line, preceded by the parent's inode number.

10 Which files are identified by the inode numbers that are listed in the Create
type?
vxlsino inode_number /fcl_test

11 Unmount the /fcl_test file system and remove namevol1.
cd /
umount /fcl_test
vxassist -g namedg remove volume namevol1

Lab 6 Solutions: Administering File Systems                             B-79
Copyright © 2006 Symantec Corporation. All rights reserved.

12 The next two lab sections are optional labs on analyzing and defragmenting
fragmented file systems. If you are not planning to carry out the optional labs,
unmount the /fs_test file system and destroy the testdg disk group;
otherwise, skip this step.
umount /fs_test
vxdg destroy testdg
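The superblock checks above come down to comparing two offsets that fcladm prints: the first record offset (fo) and the end-of-log offset (foff). A minimal shell sketch of that comparison, assuming you have already extracted the two numbers (fcl_changed is a hypothetical helper for illustration, not a VxFS command):

```shell
# Hypothetical helper, not a VxFS command: decide whether the FCL has
# recorded changes by comparing the superblock offsets fo and foff.
fcl_changed() {
    fo=$1
    foff=$2
    if [ "$fo" -eq "$foff" ]; then
        echo "no changes"        # the log end has not moved past the first record
    else
        echo "changes recorded"
    fi
}

fcl_changed 1024 1024    # prints "no changes"
fcl_changed 1024 2048    # prints "changes recorded"
```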


Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time.
These exercises provide additional practice in defragmenting a file system and
monitoring fragmentation.

Optional Lab: Defragmenting a Veritas File System

This section uses the /fs_test file system to analyze the impact of
fragmentation on the performance of a variety of I/O types on files using small
and large extent sizes.

1 Recreate the fragmented /fs_test file system using the following steps:
a Unmount the /fs_test file system.
umount /fs_test
b Recreate a VxFS file system in the testvol volume in testdg.
mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
Note: On Linux, use mkfs -t.
c Mount the file system to /fs_test.
mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
d Ask your instructor for the location of the extents.sh script. Run the
extents.sh script.
Note: This script can take about 15 minutes to run.
/student/labs/sf/sf50/extents.sh

2 Run a series of performance tests for a variety of I/O types using the vxbench
utility to compare the performance of the files with the 8K extent size
(/fs_test/test48) and the 8000K extent size (/fs_test/test58) by
performing the following steps.
Complete the following table when doing the performance tests.


Test Type                         Time (seconds)          Throughput (KB/second)
                                  Before      After       Before      After
                                  Defrag      Defrag      Defrag      Defrag
Sequential reads, 8K extent       2.709       .526        2953.22     15202.10
Sequential reads, 8000K extent    .547        .549        14634.57    14576.20
Random reads, 8K extent           8.268       6.267       967.54      1276.53
Random reads, 8000K extent        6.541       6.468       1223.02     1236.91

Note: Results can vary depending on the nature of the data and the model of
array used. No performance guarantees are implied by this lab.

3 Ensure that the directory where the vxbench utility is located is included in
your PATH definition.
export PATH=$PATH:/opt/VRTSspt/FS/VxBench
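As a sanity check on the table values: each vxbench run below transfers iosize × iocount = 8 KB × 1000 = 8000 KB, so throughput is simply that total divided by the elapsed time. A quick sketch of the arithmetic (the table's figures differ slightly because vxbench measures time at finer precision than the table shows):

```shell
# Throughput (KB/s) = total KB transferred / elapsed seconds.
# Each vxbench run transfers iosize * iocount = 8 KB * 1000 = 8000 KB.
throughput_kbs() {
    LC_ALL=C awk -v kb="$1" -v secs="$2" 'BEGIN { printf "%.2f\n", kb / secs }'
}

throughput_kbs 8000 2.709    # sequential 8K-extent read before defrag: ~2953 KB/s
throughput_kbs 8000 0.526    # the same test after defrag: ~15209 KB/s
```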

Sequential I/O Test

Note: You must unmount and remount the file system /fs_test before each
step to clear and initialize the buffer cache.

4 To test the 8K extent size:
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w read -i iosize=8k,iocount=1000 \
/fs_test/test48

5 To test the 8000K extent size:
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w read -i iosize=8k,iocount=1000 \
/fs_test/test58

Random I/O Test

6 To test the 8K extent size:
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w rand_read \
-i iosize=8k,iocount=1000,maxfilesize=8000 \
/fs_test/test48

7 To test the 8000K extent size:
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w rand_read \
-i iosize=8k,iocount=1000,maxfilesize=8000 \
/fs_test/test58

8 Defragment the /fs_test file system. The defragmentation process takes
some time.
Solaris,    /opt/VRTSvxfs/sbin/fsadm -e -E -d -D -s /fs_test
Linux,
AIX
HP-UX       /usr/lbin/fs/vxfs5.0/fsadm -e -E -d -D -s /fs_test

9 Repeat the vxbench performance tests and complete the table with these
performance results.

10 Compare the results of the defragmented file system with the fragmented file
system.

11 When finished comparing the results in the previous step, unmount the
/fs_test file system and destroy the testdg disk group.
umount /fs_test
vxdg destroy testdg

Optional Lab: Additional Defragmenting Practice

In this exercise, you monitor and defragment a file system by using the fsadm
command.

1 Create a new 2-GB striped volume called namevol1 in the namedg disk group.
Create a VxFS file system on namevol1 and mount it on /fs_test.
vxassist -g namedg make namevol1 2g layout=stripe
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
mkdir /fs_test    (if the directory does not already exist)
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /fs_test
Note: On Linux, use mount -t.

2 Repeatedly copy a small existing file system to /fs_test using a new target
directory name each time until the target file system is approximately 85
percent full. For example, on the Solaris platform:
for i in 1 2 3
> do
> cp -r /opt /fs_test/opt$i
> done
Note: Monitor the file system size using df -k on the Solaris platform and
bdf on the HP-UX platform, and CTRL-C out of the for loop when the file
system becomes approximately 85 percent full.
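Instead of watching df by hand and pressing CTRL-C, the fill loop can test the use percentage itself. A sketch under the assumption that the Use% figure is the fifth field of POSIX df -kP output (fs_above_pct is a hypothetical helper; on HP-UX you would substitute bdf):

```shell
# Hypothetical helper: succeed once the file system holding $1 is at
# least $2 percent full, by reading the Use% column of df -kP output
# (-P keeps each file system on a single line).
fs_above_pct() {
    pct=$(df -kP "$1" | awk 'NR > 1 { sub(/%/, "", $5); print $5; exit }')
    [ "$pct" -ge "$2" ]
}

# The fill loop above, stopping automatically near 85 percent:
# i=1
# while ! fs_above_pct /fs_test 85
# do
#     cp -r /opt /fs_test/opt$i
#     i=$((i + 1))
# done
```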

3 Delete all files in the /fs_test file system over 10 MB in size.
find /fs_test -size +20480b -exec rm {} \;
find /fs_test -size +20480 -exec rm {} \;

4 Check the level of fragmentation in the /fs_test file system.
fsadm -D -E /fs_test

5 Repeat steps 2 and 3 using values 4 5 for i in the loop. Fragmentation of
both free space and directories will result.

6 Repeat step 2 using values 6 7 for i. Then delete all files that are smaller
than 64K to release a reasonable amount of space.
find /fs_test -type f -size -64k -exec rm {} \;
find /fs_test -type f -size -128 -exec rm {} \;
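The unsuffixed -size arguments in the deletion steps above count 512-byte blocks, which is why 10 MB appears as +20480 and 64K as -128. A quick arithmetic check of those conversions:

```shell
# find -size without a suffix counts 512-byte blocks:
# 10 MB -> 20480 blocks, 64 KB -> 128 blocks.
blocks_for_bytes() {
    echo $(( $1 / 512 ))
}

blocks_for_bytes $(( 10 * 1024 * 1024 ))    # prints 20480
blocks_for_bytes $(( 64 * 1024 ))           # prints 128
```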

7 Defragment the file system and display the results. Run fragmentation reports
both before and after the defragmentation and display summary statistics after
each pass. Compare the fsadm report from step 4 with the final report from
the last pass in this step.
fsadm -e -E -d -D -s /fs_test

8 Unmount the /fs_test file system and remove the namevol1 volume used
in this lab.
umount /fs_test
vxassist -g namedg remove volume namevol1

Lab 7
Lab 7: Resolving Hardware Problems
In this lab, you practice recovering from a
variety of hardware failure scenarios, resulting
in disabled disk groups and failed disks.
First you recover a temporarily disabled disk
group, and then you use a set of interactive
lab scripts to investigate and practice
recovery techniques.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 7 Solutions: Resolving Hardware Problems

In this lab, you practice recovering from a variety of hardware failure scenarios,
resulting in disabled disk groups and failed disks. First you recover a temporarily
disabled disk group, and then you use a set of interactive lab scripts to investigate
and practice recovery techniques. Each interactive lab script:
Sets up the required volumes
Simulates and describes a failure scenario
Prompts you to fix the problem
Finally, a set of optional labs is provided to enable you to investigate disk failures
further and to understand the behavior of spare disks and hot relocation.
The Lab Exercises for this lab are located on the following page.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition, you also need four external disks to be
used during the labs.
At the beginning of this lab, you should have a disk group called namedg that has
four external disks and no volumes in it.

Lab 7 Solutions: Resolving Hardware Problems                            B-85
Copyright © 2006 Symantec Corporation. All rights reserved.

Classroom Lab Values

In preparation for this lab, you will need the following information about your lab
environment. For your reference, you may record the information here, or refer
back to the first lab where you initially documented this information.

Object                        Sample Value                     Your Value
My Data Disks:                Solaris: c1t#d0 - c1t#d5
                              HP-UX: c4t0d0 - c4t0d5
                              AIX: hdisk21 - hdisk26
                              Linux: sda - sdf
Location of Lab Scripts:      /student/labs/sf/sf50
Prefix to be used with        name
object names

Recovering a Temporarily Disabled Disk Group

1 Remove all disks except for one (namedg01) from the namedg disk group.
vxdg -g namedg rmdisk namedg04
vxdg -g namedg rmdisk namedg03
vxdg -g namedg rmdisk namedg02

2 Create a 1g volume called namevol1 in the namedg disk group.
vxassist -g namedg make namevol1 1g

3 Create a file system on namevol1 and mount it to /name1.
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
mkdir /name1
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1

4 Copy the contents of the /etc/default directory to /name1 and display the
contents of the file system.
cp -r /etc/default /name1
ls -lR /name1

5 Ask your instructor for the location of the faildg_temp script, and note the
location here:
Script location: _______________

6 Start writing to a file in the /name1 file system in the background using the
following command:
dd if=/dev/zero of=/name1/testfile bs=1024 \
count=500000 &

7 In one terminal, change to the directory containing the script and, before the I/O
completes, execute the faildg_temp namedg command.
Notes:
The faildg_temp script disables the single path to the disk in the disk
group to simulate a hardware failure. This is just a simulation and not a real
failure; therefore, the operating system will still be able to see the disk after
the failure. The script waits until you are ready, after analyzing the failure,
to re-enable the path to the disk in the disk group.
If the I/O you started in step 6 completes before you can simulate the
failure, you can start it again to observe the I/O failure.
cd /script_location
./faildg_temp namedg
Disabling device_tag
Enter e when you are ready for the disks to be re-
enabled:

8 Wait for the I/O to fail and in another terminal observe the error displayed in
the system log.
Solaris,    tail -f /var/adm/messages
Linux,
AIX
HP-UX       tail -f /var/adm/syslog/syslog.log

9 Use the vxdisk -o alldgs list and vxdg list commands to
determine the status of the disk group and the disk.
vxdisk -o alldgs list
vxdg list
The disk group should show as disabled and the disk status should change
to online dgdisabled.

10 What happened to the file system?
The file system is also disabled.

11 When you are done with analyzing the impact of the failure, change to the
terminal where the faildg_temp script is waiting and enter "e" to correct
the temporary failure.
Note: In a real failure scenario, after the hardware recovery, you would need to
first verify that the operating system can see the disks and then verify that
Volume Manager has detected the change in status. If not, you can force
VxVM to scan the disks by executing the vxdctl enable command. This
will not be necessary for this lab.
On the terminal where the faildg_temp script is waiting:
Enter e when you are ready for the disks to be re-
enabled: e

12 Assuming that the failure was due to a temporary fiber disconnection and that
the data is still intact, recover the disk group and start the volume. Verify the
disk and disk group status using the vxdisk -o alldgs list and vxdg
list commands.
umount /name1
vxdg deport namedg
vxdg import namedg
vxvol -g namedg startall
vxdisk -o alldgs list
vxdg list


The disk group should now be enabled and the disk status should change
back to online.

13 Remount the file system and verify that the contents are still there. Note that
you will need to perform a file system check before you mount the file system.
fsck -F vxfs /dev/vx/rdsk/namedg/namevol1
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
ls -lR /name1

14 Unmount the file system and remove namevol1. At the end of this section
you should be left with a namedg disk group with a single disk and three
initialized disks that are free to be used in a new disk group.
umount /name1
vxassist -g namedg remove volume namevol1

Preparation for Disk Failure Labs

Overview
The following sections use an interactive script to simulate a variety of disk failure
scenarios. Your goal is to recover from the problem as described in each scenario.
Use your knowledge of VxVM administration, in addition to the VxVM recovery
tools and concepts described in the lesson, to determine which steps to take to
ensure recovery. After you recover the test volumes, the script verifies your
solution and provides you with the result. You succeed when you recover the
volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the
command line interface, the VERITAS Enterprise Administrator (VEA) graphical
user interface, or the vxdiskadm menu interface. Lab solutions are provided for
only one method. If you have questions about recovery using interfaces not
covered in the solutions, see your instructor.

Setup
Due to the way in which the lab scripts work, it is important to set up your
environment as described in this setup section:

1 If your system is set to use enclosure-based naming, then you must turn off
enclosure-based naming before running the lab scripts.

2 Create a disk group named testdg and add three disks (preferably of the
same size) to the disk group. Assign the following disk media names to the
disks: testdg01, testdg02, and testdg03.
vxdisksetup -i device_tag    (if necessary)
vxdg init testdg testdg01=device_tag1 \
testdg02=device_tag2 testdg03=device_tag3
Note: If you do not have enough disks, you can destroy disk groups created in
other labs (for example, namedg) in order to create the testdg disk group.

3 Before running the automated lab scripts, set the DG environment variable in
your root profile to the name of the test disk group that you are using:
Solaris,    vi /.profile
HP-UX       DG=testdg; export DG
Linux       vi /root/.bashrc
            DG=testdg; export DG
Rerun your profile by logging out and logging back on, or manually running it.

4 Ask your instructor for the location of the lab scripts.

Note: This lab can only be performed on Solaris, HP-UX, and Linux.

Recovering from Temporary Disk Failure

In this lab exercise, a temporary disk failure is simulated. Your goal is to recover
all of the redundant and nonredundant volumes that were on the failed drive. The
lab script run_disks sets up the test volume configuration, simulates a disk
failure, and validates your solution for recovering the volumes. Ask your instructor
for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of
the testdg disk group. For example:
DG="testdg"
export DG

1 From the directory that contains the lab scripts, run the script run_disks, and
select option 1, "Turned off drive (temporary failure)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 1

This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming
full during volume setup, ignore the error message. This error will not have
any impact on further lab steps or lab results.

2 Read the instructions in the lab script window. The script simulates a disk
power-off by saving and overwriting the private region on the drive that is used
by both volumes. Then, when you are ready to power the disk back on, the
script restores the private region as it was before the failure.

3 Assume that the failure was temporary. In a second terminal window, attempt
to recover the volumes.
Assume that the drive that was turned off and then back on was c1t2d0
for a Solaris or HP-UX system or sdb for a Linux system (actual device
name will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list
To recover from the temporary failure:
a Ensure that the operating system recognizes the device:
Solaris     devfsadm
            Note: Because you have not changed the SCSI location of the
            drive, running devfsadm may not be necessary. However,
            running this command verifies the existence and validity of the
            disk label. Prior to Solaris 7, you can use drvconfig and
            disks.
HP-UX       ioscan
            insf -e -C disk
Linux       partprobe /dev/sdb
b Verify that the operating system recognizes the device:
Solaris     prtvtoc /dev/rdsk/c1t2d0s2
HP-UX       ioscan -fnC disk
            (Verify that the disk is in CLAIMED state.)
Linux       fdisk -l /dev/sdb
c Force the VxVM configuration daemon to reread all of the drives in
the system:
vxdctl enable
d Reattach the device to the disk media record:
vxreattach
e Recover the volumes:
vxrecover
f Start the nonredundant volume:
vxvol -g testdg start test2

4 After you recover the volumes, type e in the lab script window. The script
verifies whether your solution is correct.

Recovering from Permanent Disk Failure

In this lab exercise, a permanent disk failure is simulated. Your goal is to replace
the failed drive and recover the volumes as needed. The lab script run_disks
sets up the test volume configuration, simulates a disk failure, and validates your
solution for recovering the volumes. Ask your instructor for the location of the
run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the
name of the testdg disk group:
echo $DG
If DG is not set, set it before you continue:
DG="testdg"
export DG

1 From the directory that contains the lab scripts, run the script run_disks, and
select option 2, "Power failed drive (permanent failure)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 2

This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming
full during volume setup, ignore the error message. This error will not have
any impact on further lab steps or lab results.

2 Read the instructions in the lab script window. The script simulates a disk
power-off by saving and overwriting the private region on the drive that is used
by both volumes. The disk is detached by VxVM.

3 In a second terminal window, replace the permanently failed drive with either a
new disk at the same SCSI location or by another disk at another SCSI
location. Then, recover the volumes.
Assume that the failed disk is testdg02 (c1t2d0 for a Solaris or HP-UX
system or sdb for a Linux system) and the new disk used to replace it is
c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system (actual
device name will vary by system), which is originally uninitialized.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list

To recover from the permanent failure:
a Initialize the new drive:
Solaris,    vxdisksetup -i c1t3d0
HP-UX
Linux       vxdisksetup -i sdd
b Attach the disk media name (testdg02) to the new drive:
Solaris,    vxdg -g testdg -k adddisk testdg02=c1t3d0
HP-UX
Linux       vxdg -g testdg -k adddisk testdg02=sdd
c Recover the volumes:
vxrecover
d Start the nonredundant volume:
vxvol -g testdg -f start test2

Alternatively, you can use the vxdiskadm menu interface:
a Invoke vxdiskadm:
vxdiskadm
b From the vxdiskadm main menu, select the option, "Replace a failed
or removed disk." When prompted, select c1t3d0 for a Solaris or
HP-UX system or sdd for a Linux system to initialize and replace
testdg02.
Note: If you receive an error while using vxdiskadm about a
vxprint operation requiring a disk group, ignore the error.
c Start the nonredundant volume:
vxvol -g testdg -f start test2

4 After you recover the volumes, type e in the lab script window. The script
verifies whether your solution is correct.

5 When you have completed this exercise, if the disk device that was originally
used during disk failure simulation is in online invalid state, reinitialize
the disk to prepare for later labs. For example:
vxdisksetup -i device_tag

Recovering from Intermittent Disk Failure (1)

In this lab exercise, intermittent disk failures are simulated, but the system is still
OK. Your goal is to move data from the failing drive and remove the failing disk.
The lab script run_disks sets up the test volume configuration and validates
your solution for resolving the problem. Ask your instructor for the location of the
run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the
name of the testdg disk group:
echo $DG
If it is not set, set it before you continue:
DG="testdg"
export DG

1 From the directory that contains the lab scripts, run the script run_disks, and
select option 3, "Intermittent Failures (system still ok)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 3

This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming
full during volume setup, ignore the error message. This error will not have
any impact on further lab steps or lab results.

2 Read the instructions in the lab script window. You are informed that the disk
drive used by both volumes is experiencing intermittent failures that must be
addressed.

3 In a second terminal window, move the data on the failing disk to another disk,
and remove the failing disk.
Assume that testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb
for a Linux system, and with plex test1-01 from the mirrored volume
test1) is the drive experiencing intermittent problems (actual device
name will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list

To recover:
a Set the read policy to read from a preferred plex that is not on the
failing drive before evacuating the disk. This technique prevents
VxVM from accessing the failing drive during a read, if possible:
vxvol -g testdg rdpol prefer test1 test1-02
b Evacuate data from the failing drive to one or more other drives by
using the vxdiskadm menu interface. Invoke vxdiskadm:
vxdiskadm
From the vxdiskadm main menu, select the option, "Move volumes
from a disk." Evacuate the volumes on testdg02 to another disk in
the disk group, such as testdg03.
c Remove the failing disk by using the vxdiskadm menu interface.
From the vxdiskadm main menu, select the option, "Remove a disk."
Remove the disk testdg02.
d Set the volume read policy back to the original read policy:
vxvol -g testdg rdpol select test1
Note: In this exercise, you still succeed even if you do not change the read
policy or you do not remove the failing disk after evacuation.
Warning: If the lab is repeated and a disk that has been used as a
replacement disk in a previous lab is now used as a new disk to replace the
failing disk without moving the volumes, the test results may succeed
although they should fail. If this happens, remove the volume called image
in the testdg disk group and re-run the lab.

4 After you resolve the problem, type e in the lab script window. The script
verifies whether your solution is correct.

5 When you have completed this exercise, add the disk you removed from the
disk group back to the testdg disk group so that you can use it in later labs.
For example:
vxdg -g testdg adddisk testdg02=device_tag

Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time.
These exercises provide additional recovery scenarios, as well as practice in
replacing physical drives and working with spare disks. A final activity explores
how to use the Support website, which is an excellent troubleshooting resource.

Optional Lab: Recovering from Intermittent Disk Failure (2)

In this optional lab exercise, intermittent disk failures are simulated, and the
system has slowed down significantly, so that it is not possible to evacuate data
from the failing disk. The lab script run_disks sets up the test volume
configuration and validates your solution for resolving the problem. Ask your
instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the
name of the testdg disk group:
echo $DG
If DG is not set, set it before you continue:
DG="testdg"
export DG

1 From the directory that contains the lab scripts, run the script run_disks, and
select option 4, "Intermittent Failures (system too slow)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 4

This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming
full during volume setup, ignore the error message. This error will not have
any impact on further lab steps or lab results.
2

Read the instructions in the lab script window. You are informed that:

Lab 7 Solutions:

Resotving

Hardware

8-97

Probtems

Copyright I: 2006 Symantec Corporation

All rights reserved

The disk drive used by both volumes is experiencing intermittent failures


that need to be addressed immediately.
The system has slowed down significantly,
the disk before removing it.
3

so it is not possible to evacuate

In a second terminal window, perform the necessary actions to resolve the


problem.
Assume that testdg02 (c1t2dO for a Solaris or HP-UX system or sdb
for a Linux system and with plex test1- 01 from the mirrored volume
test1) is the drfve experiencing intermittent problems (actual device
name will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk
list
often to seewhat is changing after issuing recovery commands:
vxprint
vxdisk

-g testdg
-0

-htr

a11dgs list

To recover:
a Remove the failing disk for replacement by using the vxdiskadm
menu interface. Invoke vxdiskadm:
vxdiskadm
From the vxdiskadm main menu, select the option, "Remove a disk
for replacement." Remove the disk testdg02. Do not use a
replacement disk yet.
Note: If you receive an error while using vxdiskadm about a
vxprint operation requiring a disk group, ignore the error.
b To ensure that you have an uninitialized new disk to use as the
replacement disk, you may need to copy zeros to the beginning of the
failing disk and then uninitialize it before using it as the replacement
disk.
To carry out this task, you can use the /script_location/bin/
cleandisk device_tag command where script_location is
the home directory from which you are running the automated lab
scripts. For example:
Solaris,    /script_location/bin/cleandisk c1t2d0
HP-UX
Linux       /script_location/bin/cleandisk sdb
c Replace the failed disk with a new disk by using the vxdiskadm menu
interface. From the vxdiskadm main menu, select the option,
"Replace a failed or removed disk." Select an uninitialized disk to
replace testdg02.
Note: If you receive an error while using vxdiskadm about a
vxprint operation requiring a disk group, ignore the error.
d Start the nonredundant volume:
vxvol -g testdg -f start test2

4 After you resolve the problem, type e in the lab script window. The script
verifies whether your solution is correct.

Optional Lab: Recovering from Temporary Disk Failure Layered


Volume
In this optional lab exercise. a temporary disk failure is simulated. Your goal is to
recover all of the volumes that were on the failed drive. The lab script run _ di sks
sets up the test volume configuration and validates your solution for resolving the
problem. Ask your instructor for the location of the run _ d i sks script.
Before You Begin: Check to ensure that the environment variable DG is set to the
name of the testdg
disk group:
echo

$DG

If DG is not set. set it before you continue:


DG="testdg"
export
DG

1  From the directory that contains the lab scripts, run the script run_disks, and select option 5, "Turned off drive with layered volume":
   ./run_disks
   1) Lab 1 - Turned off drive (temporary failure)
   2) Lab 2 - Power failed drive (permanent failure)
   3) Lab 3 - Intermittent Failures (system still ok)
   4) Optional Lab 4 - Intermittent Failures (system too slow)
   5) Optional Lab 5 - Turned off drive with layered volume
   6) Optional Lab 6 - Power failed drive with layered volume
   x) Exit
   Your Choice?

   This script sets up two volumes:
   test1 with a concat-mirror layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.

Lab 7 Solutions: Resolving Hardware Problems                                    8-99
Copyright 2006 Symantec Corporation. All rights reserved.

2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3  Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes.
   Assume that the drive that was turned off and then back on was c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system (actual device name will vary by system).
   Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
   vxprint -g testdg -htr
   vxdisk -o alldgs list

   To recover from the temporary failure:
   a  Ensure that the operating system recognizes the device:
      Solaris:
      devfsadm
      Note: Because you have not changed the SCSI location of the drive, running devfsadm may not be necessary. However, running this command verifies the existence and validity of the disk label. Prior to Solaris 7, you can use drvconfig and disks.
      HP-UX:
      ioscan -C disk
      insf -e
      Linux:
      partprobe /dev/sdb
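Because the rescan command differs per platform, a small wrapper can choose it at run time. This is a hypothetical sketch: the command strings are the ones listed in substep a, the Linux device path is the lab's example, and nothing is executed here:

```shell
# Select the OS-appropriate device-rescan command based on uname output.
# The command is only printed; run it manually on the lab system.
case "$(uname -s)" in
  SunOS) rescan="devfsadm" ;;
  HP-UX) rescan="ioscan -C disk; insf -e" ;;
  Linux) rescan="partprobe /dev/sdb" ;;
  *)     rescan="" ;;
esac
echo "Rescan command: ${rescan:-unknown platform}"
```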

   b  Verify that the operating system recognizes the device:
      Solaris:
      prtvtoc /dev/rdsk/c1t2d0s2
      HP-UX:
      ioscan -fnC disk
      (Verify that the disk is in CLAIMED state.)
      Linux:
      fdisk -l /dev/sdb
   c  Force the VxVM configuration daemon to reread all of the drives in the system:
      vxdctl enable

   d  Reattach the device to the disk media record:
      vxreattach

   e  Recover the volumes:
      vxrecover
4  Start the nonredundant volume:
   vxvol -g testdg -f start test2

After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
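The command-line path through this recovery can be captured in one script. The sketch below is a dry run: it only echoes the VxVM commands, since they must be executed on a lab system, and the disk group and volume names are the lab's examples:

```shell
# Dry-run wrapper: print each recovery command instead of executing it.
# Change run() to execute "$@" when running on a real VxVM lab host.
run() { echo "+ $*"; }
run vxdctl enable                     # reread all drives
run vxreattach                        # reattach device to disk media record
run vxrecover                         # resynchronize redundant volumes
run vxvol -g testdg -f start test2    # force-start the nonredundant volume
```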

Optional Lab: Recovering from Permanent Disk Failure - Layered Volume

In this optional lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
   echo $DG
If DG is not set, set it before you continue:
   DG="testdg"
   export DG

1  From the directory that contains the lab scripts, run the script run_disks, and select option 6, "Power failed drive with layered volume":
   ./run_disks
   1) Lab 1 - Turned off drive (temporary failure)
   2) Lab 2 - Power failed drive (permanent failure)
   3) Lab 3 - Intermittent Failures (system still ok)
   4) Optional Lab 4 - Intermittent Failures (system too slow)
   5) Optional Lab 5 - Turned off drive with layered volume
   6) Optional Lab 6 - Power failed drive with layered volume
   x) Exit
   Your Choice?

This script sets up two volumes:


   test1 with a concat-mirror layout
   test2 with a concatenated layout
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. The disk is detached by VxVM.
3  In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes.

   Assume that the failed disk is testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system) and the new disk used to replace it is c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system, which is originally uninitialized (actual device names will vary by system).
   Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
   vxprint -g testdg -htr
   vxdisk -o alldgs list

   To recover from the permanent failure:
   a  Initialize the new drive:
      Solaris, HP-UX:
      vxdisksetup -i c1t3d0
      Linux:
      vxdisksetup -i sdd

   b  Attach the disk media name (testdg02) to the new drive:
      Solaris, HP-UX:
      vxdg -g testdg -k adddisk testdg02=c1t3d0
      Linux:
      vxdg -g testdg -k adddisk testdg02=sdd

   c  Recover the volumes:
      vxrecover
   d  Start the nonredundant volume:
      vxvol -g testdg -f start test2
   Alternatively, you can use the vxdiskadm menu interface:


   a  Invoke vxdiskadm:
      vxdiskadm
   b  From the vxdiskadm main menu, select the option, "Replace a failed or removed disk." When prompted, select c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system to initialize and replace testdg02.
      Note: If you receive an error while using vxdiskadm about a vxprint operation requiring a disk group, ignore the error.
   c  Start the nonredundant volume:
      vxvol -g testdg -f start test2

After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
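As with the temporary failure, the command-line recovery sequence can be scripted end to end. This dry-run sketch only echoes the commands, and the Solaris-style device names are the lab's examples:

```shell
# Dry-run wrapper for the permanent-failure recovery sequence.
# Change run() to execute "$@" when running on a real VxVM lab host.
run() { echo "+ $*"; }
run vxdisksetup -i c1t3d0                        # initialize the new drive
run vxdg -g testdg -k adddisk testdg02=c1t3d0    # reuse the disk media name
run vxrecover                                    # resynchronize mirrors
run vxvol -g testdg -f start test2               # force-start test2
```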

5  When you have completed this exercise, if the disk device that was originally used during disk failure simulation is in online invalid state, reinitialize the disk to prepare for later labs. For example:
   vxdisksetup -i device_tag

The rest of this lab exercise includes optional lab instructions where you perform a
variety of basic recovery operations.

Optional Lab: Removing a Disk from VxVM Control

1  Destroy the testdg disk group and add the three disks back to the namedg disk group. At this point you should have one disk group called namedg with four empty disks in it. There should be no volumes in the namedg disk group. If you had destroyed the namedg disk group in previous lab sections, re-create it.
   vxdg destroy testdg
   vxdg init namedg namedg01=device_tag1 (if the disk group does not exist)
   vxdg -g namedg adddisk namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4

2  In the namedg disk group, create a 100-MB mirrored volume named namevol1. Create a Veritas file system on namevol1 and mount it to the /name1 directory.
   vxassist -g namedg make namevol1 100m layout=mirror
   mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
   Note: On Linux, use mkfs -t.
   mkdir /name1 (if necessary)
   mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
   Note: On Linux, use mount -t.

3  Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume.
   vxprint -g namedg -thr
   For example, the volume namevol1 uses namedg02 and namedg04:

   Solaris, HP-UX:
              Device Tag      Disk Media Name
   Disk 1     c1t2d0          namedg02
   Disk 2     c1t3d0          namedg04

   Linux:
              Device Tag      Disk Media Name
   Disk 1     sde             namedg02
   Disk 2     sdf             namedg04

4  Remove one of the disks that is being used by the volume for replacement.
   vxdg -g namedg -k rmdisk namedg02
5  From the command line, check that the state of one of the plexes is DISABLED and REMOVED.
   vxprint -g namedg -thr
6  Confirm that the disk was removed.
   vxdisk -o alldgs list
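Spotting plexes in a given kernel state can be scripted by filtering vxprint output on its state columns. The sample lines below are fabricated for illustration, not real vxprint output; the field positions are an assumption of this sketch:

```shell
# Write a two-line sample resembling vxprint plex records, then select
# the plex whose state column reads DISABLED.
cat > /tmp/vxprint.sample <<'EOF'
pl namevol1-01 namevol1 ENABLED  ACTIVE
pl namevol1-02 namevol1 DISABLED REMOVED
EOF
awk '$1 == "pl" && $4 == "DISABLED" { print $2, $5 }' /tmp/vxprint.sample
```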

7  If you are not already logged in to VEA, start VEA and connect to your local system. Check the status of the disk that has been removed.
   In VEA, the disk is shown as disconnected, because the disk has been removed for replacement.
8  Replace the disk back into the namedg disk group.
   vxdg -g namedg -k adddisk namedg02=device_tag
   where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.

9  Check the status of the disks. What is the status of the replaced disk?
   vxdisk -o alldgs list
   The status of the disk is ONLINE.

10 Display volume information. What is the state of the plexes of namevol1?
   vxprint -g namedg -thr
   The plex using the disk you removed and replaced is marked RECOVER.
11 In VEA, what is the status of the replaced disk? What is the status of the volume?
   The disk is reconnected; its status shows Imported as normal. Select the volume in the left pane, and click the Mirrors tab in the right pane. The plex is marked recoverable.
12 From the command line, recover the volume. During and after recovery, check the status of the plex in another command window and in VEA.
   vxrecover
   In VEA, the status of the plex changes to Recovering, and eventually to Attached. With vxprint, the status of the plex changes to STALE and eventually to ACTIVE.

Optional Lab: Replacing Physical Drives (Without Hot Relocation)


Note: If you have skipped the previous optional lab section called Removing a Disk from VxVM Control, you may need to destroy testdg and add the three disks back to the namedg disk group before you start this section. If you had destroyed the namedg disk group in previous lab sections, re-create it.
If necessary:
   vxdg destroy testdg
   vxdg init namedg namedg01=device_tag1 (if the disk group does not exist)
   vxdg -g namedg adddisk namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4

1  Ensure that the namedg disk group has a mirrored volume called namevol1 with a Veritas file system mounted on /name1. If not, create a 100-MB mirrored volume called namevol1 in the namedg disk group, add a VxFS file system to the volume, and mount the file system at the mount point /name1.
   If necessary:
   vxassist -g namedg make namevol1 100m layout=mirror
   mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
   Note: On Linux, use mkfs -t.
   mkdir /name1 (if necessary)
   mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
   Note: On Linux, use mount -t.

2  If the vxrelocd daemon is running, stop it using ps and kill, in order to stop hot relocation from taking place. Verify that the vxrelocd processes are killed before you continue.
   Notes:
   -  If you have executed the run_disks script in the previous lab sections, the vxrelocd daemon may already be killed.
   -  There are two vxrelocd processes on the Solaris platform. You must kill both of them at the same time.
   ps -ef | grep vxrelocd
   kill -9 pid1 [pid2]
   ps -ef | grep vxrelocd
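The kill-and-verify pattern above can be sketched as a runnable script. Here two sleep processes stand in for vxrelocd, and liveness is verified with kill -0 (which is more portable in a test environment than parsing ps output):

```shell
# Start two stand-in processes (sleep) in place of vxrelocd, kill them
# by PID, reap them, then verify that neither is still alive.
sleep 60 & p1=$!
sleep 60 & p2=$!
kill -9 "$p1" "$p2"
wait "$p1" "$p2" 2>/dev/null || true   # reap so the PIDs are fully gone
alive=0
for p in "$p1" "$p2"; do
  kill -0 "$p" 2>/dev/null && alive=$((alive + 1))
done
echo "processes still alive: $alive"
```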
3  Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by namevol1; for example, on Linux use sdb, on Solaris and HP-UX use c1t8d0.
   cd /script_location
   ./overwritepr device_tag
   vxdctl disable
   vxdctl enable

4  When the error occurs, view the status of the disks from the command line.
   vxdisk -o alldgs list
   The physical device is no longer associated with the disk media name and the disk group.

5  View the status of the volume from the command line.
   vxprint -g namedg -thr
   The plex displays a status of DISABLED NODEVICE.

6  In VEA, what is the status of the disks and volume?
   Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent:
   /opt/VRTSobc/pal33/bin/vxpalctrl -a StorageAgent \
   -c restart
   The status of the disk is Disconnected, and the volume has a Recoverable status for the plex.

7  Rescan for all attached disks:
   vxdctl enable

8  Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux, use sdb:
   vxdisksetup -i device_tag
   Note: This step is only necessary when you replace the failed disk with a brand new one. If it were a temporary failure, this step would not be necessary.
9  Bring the disk back under VxVM control:
   vxdg -g namedg -k adddisk disk_name=device_tag
   where disk_name is the disk media name of the failed disk and device_tag is the device name of the disk device used to replace the failed one.
10 Check the status of the disks and the volume.
   vxdisk -o alldgs list
   vxprint -g namedg -thr

11 From the command line, recover the volume.
   vxrecover
12 Check the status of the disks and the volume to ensure that the disk and volume are fully recovered.
   vxdisk -o alldgs list
   vxprint -g namedg -thr

13 Unmount the /name1 file system and remove the namevol1 volume.
   umount /name1
   vxassist -g namedg remove volume namevol1

Optional Lab: Exploring Spare Disk Behavior


Note: If you have not already done so, destroy testdg and add the three disks back to the namedg disk group before you start this section.
If necessary:
   vxdg destroy testdg
   vxdg -g namedg adddisk namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4

1  You should have four disks (namedg01 through namedg04) in the disk group namedg. Set all disks to have the spare flag on.
   vxedit -g namedg set spare=on namedg01
   vxedit -g namedg set spare=on namedg02
   vxedit -g namedg set spare=on namedg03
   vxedit -g namedg set spare=on namedg04
2  Create a 100-MB mirrored volume called sparevol.
   vxassist -g namedg make sparevol 100m layout=mirror
   Is the volume successfully created? Why or why not?


   No, the volume is not created, and you receive the error:
   ... Cannot allocate space for size block volume
   The volume is not created because all disks are set as spares, and vxassist or VEA does not find enough free space to create the volume.
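The four per-disk vxedit invocations in step 1 can be collapsed into a loop. The sketch below only echoes the commands, since vxedit needs a live VxVM configuration; the disk media names are the ones from this lab:

```shell
# Print one vxedit spare-flag command per disk media name.
count=0
for dm in namedg01 namedg02 namedg03 namedg04; do
  echo "+ vxedit -g namedg set spare=on $dm"
  count=$((count + 1))
done
```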
3  Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks.
   vxassist -g namedg make sparevol 100m \
   layout=mirror namedg03 namedg04
   Notice that VxVM overrides its default and applies the two spare disks to the volume because the two disks were specified by the administrator.

Removethe sparevol
vxassist

volume.

-g namedg remove volume

Verify that the relocationdaemon(vxrelocd)

sparevol
is running. 11'1101, start it as

follows:

vxrelocd
6

B-108

spare=off
spare=off
spare=off

namedgOl
namedg02
namedg03

-g namedg make spare2vol

Savetheoutputofvxprint
vxprint

-g namedg set
-g namedg set
-g namedg set

Createa IOO-MB concatenatedmirrored volumecalled spare2vol.


vxassist

&

Removethe spare flags from threeof the four disks.


vxedit
vxedit
vxedit

root

-g namedg -thr

100m layout=mirror

-g namedg -thrtoafile.
>

/tmp/savedvxprint

Display the propertiesof the spare2vol


volume. In the table,recordthe
deviceanddisk medianameof thedisks usedin this volume.Youarc going to
simulatedisk failure on oneof thedisks. Decidewhich disk Y0U aregoing to
fail.

VERITAS Storage Foundation 5.0 for UNIX. Fundamentals

   For example, the volume spare2vol uses namedg01 and namedg02:

              Device Tag      Disk Media Name
   Disk 1     c1t2d0          namedg01
   Disk 2     c1t3d0          namedg02

10 Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by spare2vol; for example, on Linux use sdb, on Solaris and HP-UX use c1t8d0.
   cd /script_location
   ./overwritepr device_tag
   vxdctl disable
   vxdctl enable

11 Run vxprint -g namedg -thr and compare the output to the vxprint output that you saved earlier. What has occurred?
   Note: You may need to wait a minute or two for the hot relocation to complete.
   Hot relocation has taken place. The failed disk has a status of NODEVICE. VxVM has relocated the mirror of the failed disk onto the designated spare disk.
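Comparing the saved snapshot against fresh vxprint output is easiest with diff. The sketch below uses two fabricated one-line files standing in for the real before/after outputs, so it runs anywhere; the subdisk names are illustrative:

```shell
# Fabricated before/after snapshots: the mirror's subdisk moves from the
# failed disk (namedg01) to the spare (namedg04) after hot relocation.
before=/tmp/spare2vol.before
after=/tmp/spare2vol.after
printf 'sd namedg01-01 spare2vol-01\n' > "$before"
printf 'sd namedg04-01 spare2vol-01\n' > "$after"
diff "$before" "$after" || true   # diff exits nonzero when files differ
```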
12 In VEA, view the disks. Notice that the disk is in the disconnected state.
   Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent:
   /opt/VRTSobc/pal33/bin/vxpalctrl -a StorageAgent \
   -c restart
13 Run vxdisk -o alldgs list. What do you notice?
   This disk is displayed as a failed disk.

14 Rescan for all attached disks.
   vxdctl enable

15 In VEA, view the status of the disks and the volume.
   Highlight the volume and click each of the tabs in the right pane. You can also select Actions->Volume View and Actions->Disk View to view status information.


16 Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux, use sdb:
   vxdisksetup -i device_tag

17 Bring the disk back under VxVM control and into the disk group.
   vxdg -g namedg -k adddisk namedg##=device_tag
18 In VEA, undo hot relocation for the disk.
   Right-click the disk group and select Undo Hot Relocation. In the dialog box, select the disk for which you want to undo hot relocation and click OK. After the task completes, the alert on the disk group should be removed.
   Alternatively, from the command line, run:
   vxunreloc -g namedg namedg##
   where namedg## is the disk media name of the failed and replaced disk.
19 Wait until the volume is fully recovered before continuing. Check to ensure that the disk and the volume are fully recovered.
   vxdisk -o alldgs list
   vxprint -g namedg -thr
   Note: The vxprint command shows the subdisk with the UR tag.
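The "wait until recovered" step can be automated with a polling loop. The check() function below is a stub that succeeds on the third poll so the sketch runs anywhere; on a lab host it would be replaced with a real test, such as grepping vxprint output for recovering plexes:

```shell
# Poll until the (stubbed) recovery check succeeds.
tries=0
check() { [ "$tries" -ge 3 ]; }   # stub: reports "recovered" after 3 polls
until check; do
  tries=$((tries + 1))
  sleep 1
done
echo "recovered after $tries polls"
```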
20 Remove the spare2vol volume.
   vxassist -g namedg remove volume spare2vol

Optional Lab: Using the Support Web Site

1  Access the latest information on VERITAS Storage Foundation.
   Note: If you are working in the Virtual Academy lab environment, you may not be able to access the Veritas Technical Support web site, because the DNS configuration was changed during software installation by the prepare_ns script. To restore the original DNS configuration, change to the directory containing the lab scripts, execute the restore_ns script, and try to access the web site again.
   If necessary:
   cd /script_location
   ./restore_ns

   Go to the VERITAS Technical Support Web site at http://support.veritas.com.
   From the Select Product Family menu, select Storage Foundation. From the Select Product menu, select Storage Foundation for UNIX.


   On the next window, click the "documents published in the last 30 days" link. This will show you any information for Storage Foundation that has been published in the last 30 days.
2  What is the VERITAS Support mission statement?
   Hint: It is in the Support Handbook (page 3).
   Select the Support Handbook link at the bottom of the page. On page 3: "We will provide world-class technical expertise acting as the customer advocate to maximize their investment in VERITAS solutions."
3  How many on-site support visits are included in an Extended Support contract? How about with a Business Critical Support contract?
   Hint: In the Support Handbook, see the table on page 4 and the explanation on page 5.
   Extended Support: No on-site support visits are included.
   Business Critical Support: Six on-site support visits are included.

4  Which AIX platform is supported for Storage Foundation 5.0?
   Select the Compatibility & Reference link under the Support Resources title.
   Set the Show Document Types drop-down list to Manuals and Documentation.
   Set the Show Results For drop-down list to 5.0 (AIX).
   Select the VERITAS Storage Foundation (tm) 5.0 - Release Notes (AIX).
   See the Supported software section with the following information:
   -  AIX 5.2 ML6 (legacy)
   -  AIX 5.3 TL4 with SP4
   Veritas 5.0 products also operate on AIX 5.3 with SP3, but you must install an AIX interim fix. See the following TechNote for information on downloads, service pack availability, and other important issues related to this release.
   http://support.veritas.com/docs/282024
5  Access a recent Hardware Compatibility List for Storage Foundation. Which Brocade switches are supported by VERITAS Storage Foundation and High Availability Solutions 5.0 on Solaris?
   Select the Compatibility & Reference tab. Click the appropriate HCL link.

6  Where would you locate the patch with Maintenance Pack 1 for VERITAS Storage Solutions and Cluster File Solutions 4.0 for Solaris?
   Select the Software Updates & Downloads link on the left navigation bar and locate the patch.

7  Perform this step only if you are working in the Virtual Academy lab environment. If you have executed the restore_ns script to restore the name resolution configuration at the beginning of this lab section in step 1, change to the directory containing the lab scripts and execute the prepare_ns script before you continue.
   If necessary:
   cd /script_location
   ./prepare_ns


Glossary
A
access control list (ACL)  A list of users or groups who have access privileges to a specified file. A file may have its own ACL or may share an ACL with other files. ACLs allow detailed access permissions for multiple users and groups.

active/active disk arrays  This type of multipathed disk array enables you to access a disk in the disk array through all the paths to the disk simultaneously.

active/passive disk arrays  This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time.

agent  A process that manages predefined VERITAS Cluster Server (VCS) resource types. Agents bring resources online, take resources offline, and monitor resources to report any state changes to VCS. When an agent is started, it obtains configuration information from VCS and periodically monitors the resources and updates VCS with the resource status.

AIX coexistence label  Data on disk that identifies the disk to the AIX volume manager (LVM) as being controlled by VxVM. The contents have no relation to VxVM ID Blocks.

alert  An indication that an error or failure has occurred on an object on the system. When an object fails or experiences an error, an alert icon appears.

alert icon  An icon that indicates that an error or failure has occurred on an object on the system. Alert icons usually appear in the status area of the VEA main window and on the affected object's group icon.

allocation unit  A basic structural component of VxFS. The VxFS Version 4 and later file system layout divides the entire file system space into fixed size allocation units. The first allocation unit starts at block zero, and all allocation units are a fixed length of 32K blocks.

application volume  A volume created by the intelligent storage provisioning (ISP) feature of VERITAS Volume Manager (VxVM).

associate  The process of establishing a relationship between VxVM objects; for example, a subdisk that has been created and defined as having a starting point within a plex is referred to as being associated with that plex.

associated plex  A plex associated with a volume.

associated subdisk  A subdisk associated with a plex.

asynchronous writes  A delayed write in which the data is written to a page in the system's page cache, but is not written to disk before the write returns to the caller. This improves performance, but carries the risk of data loss if the system crashes before the data is flushed to disk.

atomic operation  An operation that either succeeds completely or fails and leaves everything as it was before the operation was started. If the operation succeeds, all aspects of the operation take effect at once and the intermediate states of change are invisible. If any aspect of the operation fails, then the operation aborts without leaving partial changes.

attached  A state in which a VxVM object is both associated with another object and enabled for use.

Glossary-1

attribute  Allows the properties of a LUN to be defined in an arbitrary conceptual space, such as a manufacturer or location.

back-rev disk group  A disk group created using a version of VxVM released prior to the release of CDS. Adding CDS functionality rolls over to the latest disk group version number.

block  The minimum unit of data transfer to or from a disk or array.

Block-Level Incremental Backup (BLI Backup)  A VERITAS backup capability that does not store and retrieve entire files. Instead, only the data blocks that have changed since the previous backup are backed up.

boot disk  A disk used for booting purposes. This disk may be under VxVM control for some operating systems.

boot disk group  A disk group that contains the disks from which the system may be booted.

bootdg  A reserved disk group name that is an alias for the name of the boot disk group.

browse dialog box  A dialog box that is used to view and/or select existing objects on the system. Most browse dialog boxes consist of a tree and grid.

buffered I/O  During a read or write operation, data usually goes through an intermediate file system buffer before being copied between the user buffer and disk. If the same data is repeatedly read or written, this file system buffer acts as a cache, which can improve performance. See direct I/O and unbuffered I/O.

button  A window control that the user clicks to initiate a task or display another object (such as a window or menu).

capability  A feature that is provided by a volume. For example, a volume may exhibit capabilities, such as performance and reliability, to various degrees. Applies to the ISP feature of VxVM.

CDS disk  A disk whose contents and attributes are such that the disk can be used as part of a CDS disk group. In contrast, a non-CDS disk can neither be used for CDS nor be part of a CDS disk group.

CDS disk group  A VxVM disk group whose contents and attributes are such that the disk group can be used to provide for cross-platform data sharing. In contrast, a non-CDS disk group (that is, a back-rev disk group or a current-rev disk group) cannot be used for cross-platform data sharing. A CDS disk group can only contain CDS disks.

CFS  VERITAS Cluster File System.

check box  A control button used to select optional settings. A check mark usually indicates that a check box is selected.

children  Objects that belong to an object group.

clean node shutdown  The ability of a node to leave the cluster gracefully when all access to shared volumes has ceased.

clone pool  A storage pool that contains one or more full-sized instant volume snapshots of volumes within a data pool. Applies to the ISP feature of VxVM.

cluster  A set of host machines (nodes) that share a set of disks.

Glossary-2                                    VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

cluster file system  A VxFS file system mounted on a selected volume in cluster (shared) mode.

cluster manager  An externally-provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership.

cluster mounted file system  A shared file system that enables multiple hosts to mount and perform file operations on the same file. A cluster mount requires a shared storage device that can be accessed by other cluster mounts of the same file system. Writes to the shared device can be performed concurrently from any host on which the cluster file system is mounted. To be a cluster mount, a file system must be mounted using the mount -o cluster option. See local mounted file system.

cluster-shareable disk group  A disk group in which the disks are shared by multiple hosts (also referred to as a shared disk group).

column  A set of one or more subdisks within a striped plex. Striping is achieved by allocating data alternately and evenly across the columns within a plex.

configuration database  A set of records containing detailed information on existing VxVM objects (such as disk and volume attributes). A single copy of a configuration database is called a configuration copy.

contiguous file  A file in which data blocks are physically adjacent on the underlying media.

Cross-platform Data Sharing (CDS)  Sharing data between heterogeneous systems (such as Sun and HP), where each system has direct access to the physical devices used to hold the data and understands the data on the physical device.

current-rev disk group  A disk group created using a version of VxVM providing CDS functionality; however, the CDS attribute is not set. If the CDS attribute is set for the disk group, the disk group is called a CDS disk group.

CVM  The cluster functionality of VERITAS VxVM.

command log  A log file that contains a history of VEA tasks performed in the current session and previous sessions. Each task is listed with the task originator, the start/finish times, the task status, and the low-level commands used to perform the task.

concatenation  A layout style characterized by subdisks that are arranged sequentially and contiguously.

configuration copy  A single copy of a configuration database.

D

data blocks  Blocks that contain the actual data belonging to files and directories.

data change object (DCO)  A VxVM object that is used to manage information about the FastResync maps in the DCO log volume. Both a DCO object and a DCO log volume must be associated with a volume to implement Persistent FastResync on that volume.

data pool  The first storage pool that is created within a disk group. Applies to the ISP feature of VxVM.

data stripe  This represents the usable data portion of a stripe and is equal to the stripe minus the parity region.

Glossary-3

data synchronous writes A form of


synchronous 1/0 that writes the tile data to
disk before the write returns, but only
marks the inode for later update. If the tile
size changes, the inode will be written
before the write returns. In this mode, the
tile data is guaranteed tu be on the disk
before the write returns, but the inodc
modification times may be lust if the
system crashes,
Deo log volume A special volume that
is used tu huld Persistent FastResync
change maps.
defragmentation  Reorganizing data on disk to keep file data blocks physically adjacent so as to reduce access times.

detached  A state in which a VxVM object is associated with another object, but not enabled for use.

Device Discovery Layer (DDL)  A facility of VxVM for discovering disk attributes needed for VxVM DMP operation.

device name  The device name or address used to access a physical disk, such as c0t0d0s2 on Solaris, c0t0d0 on HP-UX, hdisk1 on AIX, and hda on Linux. In a SAN environment, it is more convenient to use enclosure-based naming, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk's number within the enclosure, separated by an underscore (for example, enc0_2). The term disk access name can also be used to refer to a device name.

dialog box  A window in which the user submits information to VxVM. Dialog boxes can contain selectable buttons and fields that accept information.

direct extent  An extent that is referenced directly by an inode.


direct I/O  An unbuffered form of I/O that bypasses the file system's buffering of data. With direct I/O, the file system transfers data directly between the disk and the user-supplied buffer. See buffered I/O and unbuffered I/O.
dirty region logging  The procedure by which VxVM monitors and logs modifications to a plex. A bitmap of changed regions is kept in an associated subdisk called a log subdisk.
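The bitmap idea can be illustrated with a toy sketch (plain Python, not VxVM code; the region size and class names are invented for this example): regions touched by a write are flagged before the I/O is issued, so that after a crash only the flagged regions need to be resynchronized between mirrors.

```python
# Toy model of dirty region logging (DRL). Not VxVM code: the
# region size and data structures are invented for illustration.
REGION_SIZE = 1024  # bytes covered by one bit in the dirty map

class DirtyRegionLog:
    def __init__(self, volume_size):
        self.dirty = set()  # indices of regions with in-flight writes
        self.nregions = (volume_size + REGION_SIZE - 1) // REGION_SIZE

    def mark_before_write(self, offset, length):
        # Mark every region the write touches *before* issuing the I/O,
        # so a crash mid-write leaves the region flagged for recovery.
        first = offset // REGION_SIZE
        last = (offset + length - 1) // REGION_SIZE
        for r in range(first, last + 1):
            self.dirty.add(r)

    def regions_to_resync(self):
        # After a crash, only these regions must be copied between mirrors.
        return sorted(self.dirty)

drl = DirtyRegionLog(volume_size=8192)          # 8 regions
drl.mark_before_write(offset=1000, length=100)  # spans regions 0 and 1
print(drl.regions_to_resync())                  # -> [0, 1]
```

The key point is the ordering: the map is updated before the data write, which is why recovery can trust it after a crash.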
disabled path  A path to a disk that is not available for I/O. A path can be disabled due to real hardware failures or if the user has used the vxdmpadm disable command on that controller.
discovered direct I/O  Discovered Direct I/O behavior is similar to direct I/O and has the same alignment constraints, except writes that allocate storage or extend the file size do not require writing the inode changes before returning to the application.
disk A collection of read/write data
blocks that are indexed and can be
accessed fairly quickly. Each disk has a
universally unique identifier.
disk access name  The name used to access a physical disk. The c#t#d#s# syntax identifies the controller, target address, disk, and partition. The term device name can also be used to refer to the disk access name.
disk access records  Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by VxVM in deciding how to access and manipulate the disk that is defined by the disk access record.

VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

disk array  A collection of disks logically arranged into an object. Arrays tend to provide benefits, such as redundancy or improved performance.

disk array serial number  The serial number of the disk array. It is usually printed on the disk array cabinet or can be obtained by issuing a vendor-specific SCSI command to the disks on the disk array. This number is used by the DMP subsystem to uniquely identify a disk array.

disk controller  The controller (HBA) connected to the host or the disk array that is represented as the parent node of the disk by the operating system; it is called the disk controller by the multipathing subsystem of VxVM. For example, if a disk is represented by the device name /devices/sbus@1f,0/QLGC,isp@2,10000/sd@8,0:c, then the disk controller for the disk sd@8,0:c is QLGC,isp@2,10000. This controller (HBA) is connected to the host.

disk enclosure  An intelligent disk array that usually has a backplane with a built-in Fibre Channel loop, and which permits hot-swapping of disks.

disk group  A collection of disks that are under VxVM control and share a common configuration. A disk group configuration is a set of records containing detailed information on existing VxVM objects (such as disk and volume attributes) and their relationships. Each disk group has an administrator-assigned name and an internally defined unique ID.

disk group ID  A unique identifier used to identify a disk group.

disk ID  A universally unique identifier that is given to each disk and can be used to identify the disk, even if it is moved.

disk media name  A logical or administrative name chosen for the disk, such as disk03. The term disk name is also used to refer to the disk media name.

disk media record  A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name.

disk name  A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03. The term disk media name is also used to refer to a disk name.

dissociate  The process by which any link that exists between two VxVM objects is removed. For example, dissociating a subdisk from a plex removes the subdisk from the plex and adds the subdisk to the free space pool.

dissociated plex  A plex dissociated from a volume.

dissociated subdisk  A subdisk dissociated from a plex.

distributed lock manager  A lock manager that runs on different systems and ensures consistent access to distributed resources.

dock  To separate or attach the main window and a subwindow.

Dynamic Multipathing (DMP)  The method that VxVM uses to manage two or more hardware paths directing I/O to a single drive.


E
enabled path  A path to a disk that is available for I/O.
encapsulation  A process that converts existing partitions on a specified disk to volumes. If any partitions contain file systems, the file system table entries are modified so that the file systems are mounted on volumes instead. Encapsulation is not applicable on some systems.

enclosure  A disk array.

enclosure-based naming  An alternative disk naming method, beneficial in a SAN environment, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk's number within the enclosure, separated by an underscore (for example, enc0_2).

extent  A group of contiguous file system data blocks that are treated as a unit. An extent is defined by a starting block and a length.

extent attributes  The extent allocation policies associated with a file.

external quotas file  A quotas file (named quotas) must exist in the root directory of a file system for quota-related commands to work. See internal quotas file and quotas file.

F
fabric mode disk  A disk device that is accessible on a Storage Area Network (SAN) through a Fibre Channel switch.

FastResync  A fast resynchronization feature that is used to perform quick and efficient resynchronization of stale mirrors, and to increase the efficiency of the snapshot mechanism.


Fibre Channel  A collective name for the fiber optic technology that is commonly used to set up a Storage Area Network (SAN).

file system  A collection of files organized together into a structure. The UNIX file system is a hierarchical structure consisting of directories and files.

file system block  The fundamental minimum size of allocation in a file system. This is equivalent to the ufs fragment size.

file system snapshot  An exact copy of a mounted file system at a specific point in time. Used to perform online backups.

fileset  A collection of files within a file system.

fixed extent size  An extent attribute associated with overriding the default allocation policy of the file system.

fragmentation  The ongoing process on an active file system in which the file system is spread further and further along the disk, leaving unused gaps or fragments between areas that are in use. This leads to degraded performance because the file system has fewer options when assigning a file to an extent.

free disk pool  Disks that are under VxVM control, but that do not belong to a disk group.

free space  An area of a disk under VxVM control that is not allocated to any subdisk or reserved for use by any other VxVM object.
free subdisk  A subdisk that is not associated with any plex and has an empty putil[0] field.


G
gap  A disk region that does not contain VxVM objects (subdisks).

GB  Gigabyte (2^30 bytes or 1024 megabytes).

graphical view  A window that displays a graphical view of objects. In VEA, the graphical views include the Object View window and the Volume Layout Details window.

grid  A tabular display of objects and their properties. The grid lists VxVM objects, disks, controllers, or file systems. The grid displays objects that belong to the group icon that is currently selected in the object tree. The grid is dynamic and constantly updates its contents to reflect changes to objects.

group icon  The icon that represents a specific object group.

GUI  Graphical User Interface.

H
hard limit  The hard limit is an absolute limit on system resources for individual users for file and data block usage on a file system. See quotas.

host  A machine or system.

hostid  A string that identifies a host to VxVM. The hostid for a host is stored in its volboot file, and is used in defining ownership of disks and disk groups.

hot relocation  A technique of automatically restoring redundancy and access to mirrored and RAID-5 volumes when a disk fails. This is done by relocating the affected subdisks to disks designated as spares and/or free space in the same disk group.

hot swap  Refers to devices that can be removed from, or inserted into, a system without first turning off the power supply to the system.

HP-UX coexistence label  Data on disk that identifies the disk to the HP volume manager (LVM) as being controlled by VxVM. The contents of this label are identical to the contents of the VxVM ID block.

I

I/O clustering  The grouping of multiple I/O operations to achieve better performance.

indirect address extent  An extent that contains references to other extents, as opposed to file data itself. A single indirect address extent references indirect data extents. A double indirect address extent references single indirect address extents.

indirect data extent  An extent that contains file data and is referenced through an indirect address extent.

initiating node  The node on which the system administrator is running a utility that requests a change to VxVM objects. This node initiates a volume reconfiguration.

inode  A unique identifier for each file within a file system, which also contains metadata associated with that file.

inode allocation unit  A group of consecutive blocks that contain inode allocation information for a given fileset. This information is in the form of a resource summary and a free inode map.

Intelligent Storage Provisioning (ISP)  ISP enables you to organize and manage your physical storage by creating application volumes. ISP creates volumes from available storage with the required capabilities that you specify. To achieve this, ISP selects storage by referring to the templates for creating volumes.
intent  The intent of an ISP application volume is a conceptualization of its purpose as defined by its characteristics and implemented by a template.

intent logging  A logging scheme that records pending changes to the file system structure. These changes are recorded in a circular intent log file.

internal quotas file  VxFS maintains an internal quotas file for its internal usage. The internal quotas file maintains counts of blocks and inodes used by each user. See external quotas file and quotas.

J

JBOD  The common name for an unintelligent disk array which may, or may not, support the hot-swapping of disks. The name is derived from "just a bunch of disks."

K

K  Kilobyte (2^10 bytes or 1024 bytes).

L

large file  A file larger than 2 gigabytes. VxFS supports files up to two terabytes in size.

large file system  A file system more than 2 gigabytes in size. VxFS supports file systems up to 32 terabytes in size.

latency  For file systems, this typically refers to the amount of time it takes a given file system operation to return to the user.

launch  To start a task or open a window.

local mounted file system  A file system mounted on a single host. The single host mediates all file system writes to storage from other clients. To be a local mount, a file system cannot be mounted using the mount -o cluster option. See cluster mounted file system.

log plex  A plex used to store a RAID-5 log. The term log plex may also be used to refer to a dirty region logging plex.

log subdisk  A subdisk that is used to store a dirty region log.

LUN  Logical Unit Number. Each disk in an array has a LUN. Disk partitions may also be assigned a LUN.

M

main window  The main VEA window. This window contains a tree and grid that display volumes, disks, and other objects on the system. The main window also has a menu bar and a toolbar.
master node A node that is designated
by the software as the "master" node. Any
node is capable of being the master node.
The master node coordinates certain
VxVM operations.
mastering node  The node to which a disk is attached. This is also known as a disk owner.

MB  Megabyte (2^20 bytes or 1024 kilobytes).
menu A list of options or tasks. A menu
item is selected by pointing to the item and
clicking the mouse.
menu bar A bar that contains a set of
menus for the current window. The menu
bar is typically placed across the top of a
window.
metadata Structural data describing the
attributes of files on a disk.


mirror  A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each mirror is one copy of the volume with which the mirror is associated. The terms mirror and plex can be used synonymously.

mirroring  A layout technique that mirrors the contents of a volume onto multiple plexes. Each plex duplicates the data stored on the volume, but the plexes themselves may have different layouts.

multipathing  Where there are multiple physical access paths to a disk connected to a system, the disk is called multipathed. Any software residing on the host (for example, the DMP driver) that hides this fact from the user is said to provide multipathing functionality.

multivolume file system  A single file system that has been created over multiple volumes, with each volume having its own properties.
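A minimal sketch of the failover idea behind multipathing (plain Python; the device names are invented and this is not the DMP driver's actual logic): I/O is routed down an enabled path, and when that path fails, a surviving path takes over.

```python
# Toy model of multipathing failover. Illustrative only: the path
# names are invented and this is not how the VxVM DMP driver works.
class MultipathedDisk:
    def __init__(self, paths):
        self.enabled = dict.fromkeys(paths, True)  # path -> available?

    def disable(self, path):
        # Simulate a hardware failure (or administrative disable) on a path.
        self.enabled[path] = False

    def path_for_io(self):
        # Route I/O down the first path that is still enabled.
        for path, up in self.enabled.items():
            if up:
                return path
        raise IOError("all paths to the disk have failed")

disk = MultipathedDisk(["c1t0d0", "c2t0d0"])
print(disk.path_for_io())    # -> c1t0d0
disk.disable("c1t0d0")       # first path fails...
print(disk.path_for_io())    # -> c2t0d0 (I/O continues on the alternate)
```

The application above the multipathing layer never sees the path change; it only ever addresses the one logical disk.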

N
node  In an object tree, a node is an element attached to the tree. In a cluster environment, a node is a host machine in a cluster.

node abort  A situation where a node leaves a cluster (on an emergency basis) without attempting to stop ongoing operations.

node join  The process through which a node joins a cluster and gains access to shared disks.
nonpersistent FastResync  A form of FastResync that cannot preserve its maps across reboots of the system because it stores its change map in memory.

O

object  An entity that is defined to and recognized internally by VxVM. The VxVM objects are: volume, plex, subdisk, disk, and disk group. There are actually two types of disk objects: one type for the physical aspect of the disk and the other for the logical aspect.

object group  A group of objects of the same type. Each object group has a group icon and a group name. In VxVM, object groups include disk groups, disks, volumes, controllers, free disk pool disks, uninitialized disks, and file systems.

object location table (OLT)  The information needed to locate important file system structural elements. The OLT is written to a fixed location on the underlying media (or disk).

object location table replica  A copy of the OLT in case of data corruption. The OLT replica is written to a fixed location on the underlying media (or disk).

object tree  A dynamic hierarchical display of VxVM objects and other objects on the system. Each node in the tree represents a group of objects of the same type.

Object View Window  A window that displays a graphical view of the volumes, disks, and other objects in a particular disk group. The objects displayed in this window are automatically updated when object properties change. This window can display detailed or basic information about volumes and disks.

P

page file  A fixed-size block of virtual address space that can be mapped onto any of the physical addresses available on a system.


parity  A calculated value that can be used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is also calculated by performing an exclusive OR (XOR) procedure on data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be re-created from the remaining data and the parity.
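The XOR relationship can be demonstrated in a few lines (an illustrative sketch only; real RAID-5 applies this per stripe unit across columns, and the byte values below are arbitrary):

```python
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR across equally sized blocks.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Two data blocks and their parity, as written across three columns:
d0 = b"\x0f\xaa"
d1 = b"\xf0\x55"
parity = xor_blocks([d0, d1])       # parity stored alongside the data

# If d1's disk fails, XOR the surviving data with the parity to rebuild it:
rebuilt = xor_blocks([d0, parity])
print(rebuilt == d1)                # -> True
```

Because XOR is its own inverse, any single missing block in a stripe can be regenerated from the remaining blocks plus the parity.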
parity stripe unit  A RAID-5 volume storage region that contains parity information. The data contained in the parity stripe unit can be used to help reconstruct regions of a RAID-5 volume that are missing because of I/O or disk failures.

partition  The standard division of a physical disk device, as supported directly by the operating system and disk drives.

path  When a disk is connected to a host, the path to the disk consists of the Host Bus Adapter (HBA) on the host, the SCSI or fiber cable connector, and the controller on the disk or disk array. These components constitute a path to a disk. A failure on any of these results in DMP trying to shift all I/Os for that disk onto the remaining (alternate) paths.

pathgroup  In case of disks that are not multipathed by vxdmp, VxVM will see each path as a disk. In such cases, all paths to the disk can be grouped. This way only one of the paths from the group is made visible to VxVM.

persistent FastResync  A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO log volume on disk.

persistent state logging  A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging.

physical disk  The underlying storage device, which may or may not be under VxVM control.

platform block  Data placed in sector 0, which contains OS-specific data for a variety of platforms that require its presence for proper interaction with each of those platforms. The platform block allows a disk to masquerade as if it was initialized by each of the specific platforms.

plex  A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each plex is one copy of the volume with which the plex is associated.

preallocation  The preallocation of space for a file so that disk blocks will physically be part of a file before they are needed. Enabling an application to preallocate space for a file guarantees that a specified amount of space will be available for that file, even if the file system is otherwise out of space.

primary fileset  A fileset that contains the files that are visible and accessible to users.

primary path  In active/passive type disk arrays, a disk can be bound to one particular controller on the disk array or owned by a controller. The disk can then be accessed using the path through this particular controller.

private disk group  A disk group in which the disks are accessed by only one specific host.


private region  A region of a physical disk used to store private, structured VxVM information. The private region contains a disk header, a table of contents, and a configuration database. The table of contents maps the contents of the disk. The disk header contains a disk ID. All data in the private region is duplicated for extra reliability.

properties window  A window that displays detailed information about a selected object.

public region  A region of a physical disk managed by VxVM that contains available space and is used for allocating subdisks.

Q

Quick I/O file  A regular VxFS file that is accessed using the ::cdev:vxfs: extension.

Quick I/O for Databases  Quick I/O is a VERITAS File System feature that improves database performance by minimizing read/write locking and eliminating double buffering of data. This allows online transactions to be processed at speeds equivalent to that of using raw disk devices, while keeping the administrative benefits of file systems.

QuickLog  VERITAS QuickLog is a high-performance mechanism for receiving and storing intent log information for VxFS file systems. QuickLog increases performance by exporting intent log information to a separate physical volume.

quotas  Quota limits on system resources for individual users for file and data block usage on a file system. See hard limit and soft limit.

quotas file  The quotas commands read and write the external quotas file to get or change usage limits. When quotas are turned on, the quota limits are copied from the external quotas file to the internal quotas file. See external quotas file, internal quotas file, and quotas.

R

radio buttons  A set of buttons used to select optional settings. Only one radio button in the set can be selected at any given time. These buttons toggle on or off.

RAID  A Redundant Array of Independent Disks (RAID) is a disk array set up with part of the combined storage capacity used for storing duplicate information about the data stored in that array. This makes it possible to regenerate the data if a disk failure occurs.

read-writeback mode  A recovery mode in which each read operation recovers plex consistency for the region covered by the read. Plex consistency is recovered by reading data from blocks of one plex and writing the data to all other writable plexes.

reservation  An extent attribute associated with preallocating space for a file.

root disk  The disk containing the root file system. This disk may be under VxVM control.

root disk group  In versions of VxVM prior to 4.0, a special private disk group had to exist on the system. The root disk group was always named rootdg. This requirement does not apply to VxVM 4.x and higher.

root file system  The initial file system mounted as part of the UNIX kernel startup sequence.


root partition  The disk region on which the root file system resides.

root volume  The VxVM volume that contains the root file system, if such a volume is designated by the system configuration.

rootability  The ability to place the root file system and the swap device under VxVM control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure.

rule  A statement written in the VERITAS ISP language that specifies how a volume is to be created.

S

scroll bar  A sliding control that is used to display different portions of the contents of a window.

Search window  The VEA search tool. The Search window provides a set of search options that can be used to search for objects on the system.

secondary path  In active/passive type disk arrays, the paths to a disk other than the primary path are called secondary paths. A disk is supposed to be accessed only through the primary path until it fails, after which ownership of the disk is transferred to one of the secondary paths.

sector  A unit of size, which can vary between systems. A sector is commonly 512 bytes.

sector size  Sector size is an attribute of a disk drive (or SCSI LUN for an array-type device) that is set when the drive is formatted. Sectors are the smallest addressable unit of storage on the drive, and are the units in which the device performs I/O.

shared disk group  A disk group in which the disks are shared by multiple hosts (also referred to as a cluster-shareable disk group).

shared VM disk  A VxVM disk that belongs to a shared disk group.

shared volume  A volume that belongs to a shared disk group and is open on more than one node at the same time.

shortcut menu  A context-sensitive menu that only appears when you click a specific object or area.

slave node  A node that is not designated as a master node.

slice  The standard division of a logical disk device. The terms partition and slice are sometimes used synonymously.

snapped file system  A file system whose exact image has been used to create a snapshot file system.

snapshot  A point-in-time copy of a volume (volume snapshot) or a file system (file system snapshot).

snapshot file system  An exact copy of a mounted file system at a specific point in time. Used to do online backups. See file system snapshot.

soft limit  The soft limit is lower than a hard limit. The soft limit can be exceeded for a limited time. There are separate time limits for files and blocks. See hard limit and quotas.

spanning  A layout technique that permits a volume (and its file system or database) that is too large to fit on a single disk to span across multiple physical disks.

sparse plex  A plex that is not as long as the volume or that has holes (regions of the plex that do not have a backing subdisk).


splitter  A bar that separates two panes of a window (such as the object tree and the grid). A splitter can be used to adjust the sizes of the panes.

status area  An area of the main window that displays an alert icon when an object fails or experiences some other error.

Storage Area Network (SAN)  A networking paradigm that provides easily reconfigurable connectivity between any subset of computers, disk storage, and interconnecting hardware such as switches, hubs, and bridges.

storage checkpoint  A facility that provides a consistent and stable view of a file system or database image and keeps track of modified data blocks since the last checkpoint.

storage pool  A policy-based container within a disk group in VxVM, for use by ISP, that contains LUNs and volumes.

storage pool definition  A grouping of template sets that defines the characteristics of a storage pool. Applies to the ISP feature of VxVM.

storage pool policy  Defines how a storage pool behaves when more storage is required, and when you try to create volumes whose capabilities are not permitted by the current templates. Applies to the ISP feature of VxVM.

storage pool set  A bundled definition of the capabilities of a data pool and its clone pools. Applies to the ISP feature of VxVM.

stripe  A set of stripe units that occupy the same positions across a series of columns.

stripe size  The sum of the stripe unit sizes that compose a single stripe across all columns being striped.

stripe unit  Equally sized areas that are allocated alternately on the subdisks (within columns) of each striped plex. In an array, this is a set of logically contiguous blocks that exist on each disk before allocations are made from the next disk in the array. A stripe unit may also be referred to as a stripe element.

stripe unit size  The size of each stripe unit. The default stripe unit size is 32 sectors (16K). A stripe unit size has also historically been referred to as a stripe width.
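The geometry can be checked with simple arithmetic (a sketch assuming 512-byte sectors, the common size noted under sector; the helper names are invented and this is not VxVM's allocator):

```python
# Stripe geometry arithmetic. Assumes 512-byte sectors, matching the
# common sector size mentioned elsewhere in this glossary.
SECTOR = 512

def stripe_size(stripe_unit_sectors, columns):
    # Stripe size = sum of the stripe unit sizes across all columns.
    return stripe_unit_sectors * SECTOR * columns

def locate(block, stripe_unit_sectors, columns):
    # Map a logical sector number to (column, offset-within-column)
    # under a simple round-robin striping layout.
    unit, within = divmod(block, stripe_unit_sectors)
    stripe, column = divmod(unit, columns)
    return column, stripe * stripe_unit_sectors + within

# The default stripe unit of 32 sectors is 16K per column:
print(stripe_size(32, 1) // 1024)   # -> 16
# Logical sector 40 of a 3-column striped layout lands in column 1:
print(locate(40, 32, 3))            # -> (1, 8)
```

This round-robin mapping is what lets consecutive stripe units land on different disks, spreading I/O across the columns.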
striping  A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex.

structural fileset  A special fileset that stores the structural elements of a VxFS file system in the form of structural files. These files define the structure of the file system and are visible only when using utilities such as the file system debugger.

subdisk  A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes.

super-block  A block containing critical information about the file system such as the file system type, layout, and size. The VxFS super-block is always located 8192 bytes from the beginning of the file system and is 8192 bytes long.

swap area  A disk region used to hold copies of memory pages swapped out by the system pager process.

swap volume  A VxVM volume that is configured for use as a swap area.

synchronous writes  A form of synchronous I/O that writes the file data to disk, updates the inode times, and writes the updated inode to disk. When the write returns to the caller, both the data and the inode have been written to disk.

T
task properties window  A window that displays detailed information about a task listed in the Task Request Monitor window.

Task Request Monitor  A window that displays a history of tasks performed in the current VEA session. Each task is listed with the task originator, the task status, and the start/finish times for the task.

TB  Terabyte (2^40 bytes or 1024 gigabytes).

template  A meaningful collection of ISP rules that provide a capability for a volume. Also known as a volume template.

template set  Consists of related capabilities and templates that have been collected together for convenience to create ISP volumes.

throughput  For file systems, this typically refers to the number of I/O operations in a given unit of time.

toolbar  A set of buttons used to access VEA windows. These include another main window, a task request monitor, an alert monitor, a search window, and a customize window.

transaction  A set of configuration changes that succeed or fail as a group, rather than individually. Transactions are used internally to maintain consistent configurations.

tree  A dynamic hierarchical display of objects on the system. Each node in the tree represents a group of objects of the same type.

U

ufs  The UNIX file system type. Used as a parameter in some commands.

UFS  The UNIX file system; derived from the 4.2 Berkeley Fast File System.

unbuffered I/O  I/O that bypasses the file system cache to increase I/O performance. This is similar to direct I/O, except when a file is extended. For direct I/O, the inode is written to disk synchronously; for unbuffered I/O, the inode update is delayed. See buffered I/O and direct I/O.

uninitialized disks  Disks that are not under VxVM control.

user template  Consists of related capabilities and templates that have been collected together for convenience for creating ISP application volumes.

v
VERITAS Cluster Server.

VCS

VEA
VERITAS Enterprise Administrator
graphical user interface.

VM disk A disk that is both under VxVM


control and assigned to a disk group. VM
disks are sometimes referred to as Volume
Mal/agel' disks or simply disks. In the
graphical user interface, VM disks are
represented iconically as cylinders labeled
D.
Volume Manager Storage
Administrator. an earlier version of the
VxVM C1UI used prior to VxVM version
3.5.

VMSA

file A small tile that is used


to store the host ID of the system on which
VxVM is installed and the values of
bootdg and defaultdg.
volboot

VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

Glossary-14
CUl>ynght

_ 2001)

Svmautcc

Corporuuon

All nqhts

reserved

A virtual disk or entity that is


made up of portions of one or more
physical disks. i\ volume represents an
addressable range of disk blocks used by
applications such as file systems or
databases. A volume is a collection offrom
one to 32 plexes.

volume

The
volume configuration device (/ dev/vx/
conf ig) is the interface through which all
configuration changes to the volume
device driver are performed.
volume configuration

device

The VxVM
configuration daemon, which is
responsible for making changes to the
VxVM configuration. This daemon must
be running before VxVM operations can
be performed.

vxconfigd

The VERITAS File System type.


Used as a parameter in some commands.

vxfs

VxFS

VERITAS File System.

VxVM

V[RITAS

Volume Manager.

Dat:l on disk that


indicates the disk is under VxVM control.
The VxVM ID Block provides dynamic
VxVM private region location, GUID, and
other information.
VxVM ID block

volume device driver  The driver that forms the virtual disk drive between the application and the physical device driver level. The volume device driver is accessed through a virtual disk device node whose character device nodes appear in /dev/vx/rdsk, and whose block device nodes appear in /dev/vx/dsk.
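A sketch of the two device node trees described in this entry (the disk group and volume names are illustrative, not from the course text):

```shell
# Sketch: block vs. character device nodes for a VxVM volume.
# "datadg" and "datavol" are illustrative names only.
DG=datadg
VOL=datavol

BLOCK_NODE="/dev/vx/dsk/${DG}/${VOL}"    # block node: used by mount and most file system access
RAW_NODE="/dev/vx/rdsk/${DG}/${VOL}"     # character (raw) node: used by fsck, mkfs, dd

echo "block: $BLOCK_NODE"
echo "raw:   $RAW_NODE"
```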

volume event log  The volume event log device (/dev/vx/event) is the interface through which volume driver events are reported to the utilities.

Volume Layout Window  A window that displays a graphical view of a volume and its components. The objects displayed in this window are not automatically updated when the volume's properties change.

volume set  A volume set allows several volumes to be treated as a single object with one logical I/O interface. Applies to the ISP feature of VxVM.

volume template  A meaningful collection of ISP rules that provide a capability for a volume. Also known as a template.

Volume to Disk Mapping Window  A window that displays a tabular view of volumes and their underlying disks. This window can also display details such as the subdisks and gaps on each disk.


Index
Files and Directories

/dev/vx/dsk 3-7
/dev/vx/rdsk 3-7
/etc/default/fs 6-6
/etc/default/vxassist 4-10
/etc/filesystems 3-17, 3-19, 3-20
/etc/fs/vxfs 6-5
/etc/fstab 3-17, 3-20
/etc/rc.d/rc2.d/S02vxvm-recover 7-23
/etc/rc2.d/S50isisd 2-22
/etc/system 2-7
/etc/vfs 6-6
/etc/vfstab 3-17, 3-20
/etc/vx/elm 2-6
/opt/VRTS/bin 6-5
/opt/VRTS/install/logs 2-11
/opt/VRTS/man 2-19
/opt/VRTSob/bin 2-21
/opt/VRTSvxfs/sbin 6-5
/sbin 6-5
/sbin/fs 6-5
/usr/lib/fs/vxfs 6-5
/var/vx/isis/vxisis.log 2-23

C

CDS 1-11
CDS disk 1-12
CDS disk groups
  converting disk groups 5-25
CDS disk layout 1-11
cfgmgr 7-14
chfs 3-20
CLI 2-16
CLI commands
  in VEA 2-18
cluster 2-8
Cluster File System 2-9
cluster group 3-10
cluster management 3-3
column 4-4
command line interface 2-16, 2-19
concatenated 3-14, 4-9
Concatenated Mirrored 3-15, 4-9, 4-23
concatenated volume 1-16, 4-3
  advantages 4-7
  creating 4-10
  disadvantages 4-7
concatenation 1-16

configuration
  backup and restoration 7-11
configuration database 7-6
controller 1-7
creating a layered volume 4-18
creating a volume 3-12
  CLI 3-12
crfs 3-19
cron 6-12
cross-platform data sharing 1-11
  converting disk groups 5-25
  requirements for CDS disk groups 5-24

address-length pair 6-4
AIX disk 1-4
AIX physical volume 1-4
array 1-9
backing up the VxVM configuration 7-11
Bad Block Relocation Area 1-4
BBRA 1-4
block clustering 6-3
block device file 3-18
block-based allocation 6-3
bootdg 3-7, 3-33

D

data change object 3-22
data redundancy 1-16
databases on file systems 2-9
defaultdg 2-12, 3-7


defragmentation
  scheduling 6-12
defragmenting a file system 6-11
deporting a disk group
  and renaming 5-17
  to new host 5-17
  VEA 5-18
destroying a disk group 3-33
  CLI 3-33
  VEA 3-33
devfsadm 7-14
device path 3-25
devicetag 3-25
directory fragmentation 6-9
dirty region logging 5-7
disaster recovery Intro-10
disk
  adding new 7-14
  adding to a disk group in VEA 3-11
  AIX 1-4
  configuring for VxVM 3-4
  displaying summary information 3-25
  failing 7-4
  forced removal 7-21
  HP-UX 1-4
  Linux 1-5
  naming 1-7
  recognizing by operating system 7-14
  removing in VEA 3-32
  replacing failed in vxdiskadm 7-15
  replacing in CLI 7-15
  replacing in VEA 7-15
  uninitializing 3-32
  unrelocating 7-26
  viewing in CLI 3-24
  viewing information about 3-26
disk access name 3-5
disk access record 1-14, 3-5
disk array 1-9
  multipathed 1-9
disk enclosure 2-12
disk failure 7-4
  partial 7-22
  permanent 7-7
  resolving intermittent 7-20
  temporary 7-7
disk failure handling 7-4
disk group

  adding a disk in VEA 3-11
  clearing host locks 5-19
  creating 3-8
  creating in VEA 3-10
  creating in vxdiskadm 3-9
  definition 1-13
  deporting 5-17
  destroying 3-33
  destroying in CLI 3-33
  destroying in VEA 3-33
  displaying deported 3-28
  displaying free space in 3-28
  displaying properties for 3-28
  forcing an import 5-19
  high availability 3-6
  importing 5-19
  importing and renaming 5-19
  importing as temporary in CLI 5-20
  purpose 1-13, 3-6
  renaming in VEA 5-22
  reserved names 3-7
  shared 3-10
disk group configuration 1-13
disk group ID 3-25
disk group properties
  viewing 3-29
Disk Group Properties window 3-29
disk group versions 5-23
disk ID 3-25
disk initialization 3-4
disk layout 1-11
  changing 3-4
disk media name 1-13, 3-5, 3-8, 7-6
  default 1-13
disk media record 7-6
disk name 3-25
disk naming 3-8
  AIX 1-8
  HP-UX 1-7
  Linux 1-8
  Solaris 1-7
disk properties 3-27
disk replacement 7-13
disk spanning 1-15
  failure 7-20
disk status
  Deported 3-26
  Disconnected 3-26
  External 3-26

  Free 3-26
  Imported 3-26
  Not Initialized 3-26
  online 3-24
  online invalid 3-24
disk structure 1-3
Disk View window 4-17
disks
  adding to a disk group 3-8
  displaying detailed information 3-25
  evacuating data 3-31
  renaming 5-21
  uninitialized 3-4
dynamic LUN
  resizing 5-15
dynamic multipathing 2-12, 3-3

E

ENABLED state 7-19
encapsulation 3-4
enclosure 2-12
enclosure-based naming 2-12
  benefits 3-3
error disk status 7-6
evacuating a disk 3-31
exclusive OR 4-6
EXT2 6-5
EXT3 6-5
Extended File System 6-5
extended partition 1-5
extent 6-4
extent fragmentation 6-9
extent-based allocation 6-3, 6-4
extents
  defragmenting 6-11

F

FAILED disks 7-4
FAILING disks 7-4
FastResync 3-15
fcl 6-20
fdisk 1-5
Fibre Channel 2-12
file change log 6-20
  compared to intent log 6-20
file system
  adding to a volume 3-16, 3-18
  adding to a volume in CLI 3-18
  consistency checking 6-16
  defragmenting 6-11
  file change log 6-20
  fragmentation 6-9
  fragmentation reports 6-10
  fragmentation types 6-9
  intent log 6-15
  intent log resizing 6-17
  logging and performance 6-19
  logging options 6-18
  mounting at boot 3-20
  resizing 5-14
  resizing in VEA 5-12
  resizing methods 5-11
file system free space
  identifying 6-8
file system type 3-16, 6-8
FlashSnap 2-8
forced removal of a disk 7-21
fragmentation 6-9
  directory 6-9
  extent 6-9
free space pool 3-5
fsadm 5-14, 6-9, 6-10
fsck 6-15, 6-16
fsck pass 3-17

G

group name 3-25

H

HFS 6-5
Hierarchical File System 6-5
high availability 2-8, 5-16
host locks
  clearing 5-19
hostid 3-25
hot relocation
  definition 7-22
  failure detection 7-23
  notification 7-23

  process 7-23
  recovery 7-23
  selecting space 7-24
  unrelocating a disk 7-26
HP-UX disk 1-4

I

I/O failure
  identifying 7-4
importing a disk group
  and renaming 5-19
  forcing 5-19
  VEA 5-20
initialize
  zero 3-15
inode 6-4
insf 7-14
installation log file 2-11
installation menu 2-10
installer 2-11
installfs 2-11
installing VxVM 2-10
  package space requirements 2-7
  verifying on AIX 2-15
  verifying on HP-UX 2-14
  verifying on Linux 2-15
  verifying on Solaris 2-14
  verifying package installation 2-14
installp 2-11
installsf 2-11
installvm 2-11
Intelligent Storage Provisioning 3-10, 3-22
intent log
  resizing 6-17
intent logging 6-15
interfaces 2-16
  command line interface 2-16
  VERITAS Enterprise Administrator 2-16
  vxdiskadm 2-16
intermittent disk failure
  resolving 7-20
ioscan 7-14
iosize 6-13

J

JFS 1-5, 6-5
JFS2 6-5
Journaled File System 6-5
journaling 6-15

K

kernel issues
  and VxFS 2-7

L

layered volume 1-16, 4-18
  advantages 4-19
  creating 4-18
  creating in CLI 4-23
  creating in VEA 4-23
  disadvantages 4-19
  preventing creation 3-15
  viewing in CLI 4-24
  viewing in VEA 4-24
layered volume layouts 4-22
licensing 2-5
  generating a license key 2-6
  obtaining a license key 2-5
  vLicense 2-6
Linux disk 1-5
listing installed packages 2-14
load balancing 4-7
location code 1-8
logging 3-15, 5-7
  and VxFS performance 6-19
  for mirrored volumes 5-7
logging options
  for a file system 6-18
logical unit number 1-7
Logical Volume Manager 1-4
logtype 4-12
lsdev 7-14
lsfs 3-20
lslpp 2-15
LUN 1-7
  and resizing VxVM structures 5-15
LVM 1-4

M

man 2-19

manual pages 2-19
maxfilesize 6-13
metadata 6-3
mirror
  adding 5-5
  adding in CLI 5-6
  adding in VEA 5-6
  removing 5-5
mirror-concat 4-22
mirrored volume 1-16, 4-5
  creating 4-12
mirroring 1-16
  advantages 4-8
  disadvantages 4-8
  enhanced 4-18
mirroring a volume 3-15, 4-9
mirrors
  adding 4-12
mirror-stripe layout 4-20
mkdir 3-18
mkfs 3-18
mkfs options 6-7
mmap 6-14
mount 3-18, 6-18
mount at boot 3-17
  CLI 3-20
mount options
  delaylog 6-18
  log 6-18
  tmplog 6-18
mount point 3-17
moving a disk
  vxdiskadm 5-15
multipathed disk array 1-9

N

naming disks
  defaults 3-8
ncol 4-11
New Volume wizard 3-13
newfs 3-18
nlog 4-12
nmirror 4-12
node 2-8
NODEVICE state 7-21
nodg 3-7
nostripe 4-10

O

Object Data Manager 1-8
off-host processing 2-8
online disk status 7-6
online invalid status 3-24
online status 3-24
operating system versions 2-3
ordered allocation 4-27
  order of columns 4-28
  order of mirrors 4-28
organization principle 3-10

P

packages
  listing 2-14
  space requirements 2-7
parity 1-16, 4-6
partial disk failure 7-22
partition 1-5
permanent disk failure 7-7
physical disk
  naming 1-7
Physical Volume Reserved Area 1-4
pkgadd 2-11
pkginfo 2-14
plex 1-14, 4-5
  definition 1-14
  naming 1-14
plex name
  default 1-14
Preferences window 2-17
primary partition 1-5
private region 1-11, 3-4, 7-4
private region size 1-11
  AIX 1-11
  HP-UX 1-11
  Linux 1-11
  Solaris 1-11
projection 4-17
prtvtoc 7-14

public region 1-11, 1-13, 7-4
PVRA 1-4

Q

Quick I/O 2-9

R

RAID 1-15
RAID array
  benefits with VxVM Intro-10
RAID levels 1-15
RAID-5 column 4-6
RAID-5 volume 1-16, 4-6
random read 6-14
random write 6-14
raw device file 3-18
read policy 5-8
  changing in CLI 5-9
  changing in VEA 5-9
  preferred plex 5-8
  round robin 5-8
  selected plex 5-8
recovering a volume
  VEA 7-16
recovering volumes
  and volume states 7-19
redundancy 1-16
relocating subdisks 7-24
REMOVED state 7-21
removing a disk
  forced 7-21
  VEA 3-32
removing a volume 3-30
renaming a disk 5-21
renaming a disk group 5-22
replacing a disk 7-13
  CLI 7-15
  VEA 7-15
replacing a failed disk
  vxdiskadm 7-15
replicated volume group 3-22
Rescan option 7-14
resilience 1-16
resilient volume 1-16
resizing a dynamic LUN 5-15
resizing a file system 5-14
resizing a volume 5-10
  VEA 5-12
  with vxassist 5-14
  with vxresize 5-13
resizing a volume and file system 5-11
resizing a volume with a file system 5-10
response file 2-11
rlink 3-22
rpm 2-11, 2-15

S

S95vxvm-recover 7-23
SAN 2-12
SAN management 3-3
selected plex read policy 5-8
sequential read 6-14
sequential write 6-14
size of a volume 3-14
slice 1-7
sliced disk 1-12
snap object 3-22
software packages 2-7
  space requirements 2-7
spare disks
  managing 7-25
STALE state 7-19
storage
  allocating for volumes 4-25
storage area network 2-12
storage attributes
  specifying for volumes 4-25
  specifying in VEA 4-26
storage cache 3-22
stripe unit 4-4, 4-6
  default size 3-14, 4-9
Striped Mirrored 3-15, 4-9, 4-23
striped 3-14, 4-9
striped volume 1-16, 4-4
  creating 4-11
stripe-mirror 4-22
stripe-mirror layout 4-21
stripeunit 4-11

striping 1-16
  advantages 4-7
  disadvantages 4-8
subdisk 1-14
  definition 1-14
subdisk name
  default 1-14
subvolume 4-18
summary file 2-11
support for VxVM 2-4
swinstall 2-11
swlist 2-14

T

target 1-7
Task History window 2-18
tasks
  clearing history 2-18
technical support for VxVM 2-4
temporary disk failure 7-7
true mirror 4-5
true mirroring 1-16
type 3-25

U

UFS 6-5
  allocation 6-3
uninitialized disks 3-4
UNIX File System 6-5
unrelocating a disk 7-26
upgrading a disk group version 5-23
user interfaces 2-16

V

VEA 2-16
  adding a disk to a disk group 3-11
  adding a mirror 5-6
  changing volume read policy 5-9
  clearing task history 2-18
  creating a disk group 3-10
  creating a layered volume 4-23
  creating a volume 3-13
  deporting a disk group 5-18
  destroying a disk group 3-33
  disk properties 3-27
  Disk View window 4-17
  displaying volumes 3-23
  importing a disk group 5-20
  installing 2-21
  installing the server and client 2-21
  monitoring events and tasks 2-23
  multiple views of objects 2-17
  recovering a volume 7-16
  remote administration 2-17
  removing a disk 3-32
  replacing a disk 7-15
  resizing a volume 5-12
  scanning disks 7-14
  security 2-17
  setting preferences 2-17
  starting 2-22, 2-23
  Task History window 2-18
  viewing a layered volume 4-24
  viewing CLI commands 2-18
  viewing disk group properties 3-29
  viewing disk information 3-26
  Volume Layout window 4-14
  Volume to Disk Mapping window 4-15
  Volume View window 4-16
VERITAS Cluster File System 2-9
VERITAS Cluster Server 2-8
VERITAS Enterprise Administrator 2-16, 2-17
VERITAS File System 6-5
VERITAS Quick I/O for Databases 2-9
VERITAS Volume Manager 2-9
VERITAS Volume Replicator Intro-10, 3-22
versioning
  and disk groups 5-23
VGDA 1-5
VGRA 1-4
virtual storage objects 1-10
vLicense 2-6
vol_subdisk_num 1-14
volboot 3-7
volume 1-10, 3-5
  accessing 1-10
  adding a file system 3-16
  adding a file system in CLI 3-18
  adding a mirror 5-5
  adding a mirror in VEA 5-6
  creating 3-12

  creating a layered volume 4-18
  creating in CLI 3-12
  creating in VEA 3-13
  creating layered in CLI 4-23
  creating layered in VEA 4-23
  creating mirrored and logged 4-12
  creating on specific disks 4-26
  definition 1-10, 1-14
  disk requirements 3-12
  estimating size 4-13
  expanding the size 5-10
  layered layouts 4-22
  logging 3-15
  mirroring 3-15, 4-9
  recovering in VEA 7-16
  reducing the size 5-10
  removing 3-30
  removing a mirror 5-5
  resizing 5-10
  resizing in VEA 5-12
  resizing methods 5-11
  resizing with vxassist 5-14
  resizing with vxresize 5-13
  specifying ordered allocation 4-27
  specifying storage attributes in VEA 4-26
  starting manually 5-20
  viewing layered in CLI 4-24
  viewing layered in VEA 4-24
volume attributes 3-13
Volume Group Descriptor Area 1-5
Volume Group Reserved Area 1-4
volume layout 1-15
  concatenated 1-16
  displaying in CLI 3-21
  layered 1-16
  mirrored 1-16
  RAID-5 1-16
  selecting 4-3
  striped 1-16
Volume Layout window 4-14
Volume Manager control 1-11
Volume Manager disk 1-13
  naming 1-13
Volume Manager Support Operations 2-16, 2-20
volume name
  default 1-14
volume read policy 5-8
  changing in CLI 5-9
  changing in VEA 5-9
volume recovery 7-13
Volume Replicator 2-8
volume size 3-14
volume states
  after attaching disk media 7-18
  after recovering volumes 7-19
  after running vxreattach 7-12
  after temporary disk failure 7-12
  after volume recovery 7-19
Volume to Disk Mapping window 4-15
Volume View window 4-16
volumes
  allocating storage for 4-25
vrtsadm 2-22
VRTSap 2-7
VRTStjpm 2-7
VRTShdoc 2-7
VRTSfspro 2-7
VRTSmuob 2-7
VRTSob 2-7
VRTSobadmin 2-7
VRTSobgui 2-7
VRTStep 2-7
VRTSvmdoc 2-7
VRTSvmman 2-7
VRTSvmpro 2-7
VRTSvxfs 2-7
vxassist 3-12, 5-11, 5-14
vxassist growby 5-14
vxassist growto 5-14
vxassist shrinkby 5-14
vxassist shrinkto 5-14
vxbench
  options 6-14
vxconfigbackup 7-11
vxconfigbackupd 7-11
vxconfigrestore 7-11
vxdctl enable 3-5, 3-11, 7-6, 7-14
vxdg destroy 3-33
vxdisk list 3-9, 3-24, 3-25, 7-6, 7-14
vxdisk resize 5-15
vxdiskadm 2-16, 2-20, 3-4
  creating a disk group 3-9
  replacing a failed disk 7-15
  starting 2-20
vxdiskunsetup 3-32
VxFS 6-5
  allocation 6-3, 6-4
  and logging 6-19
  command locations 6-5
  command syntax 6-6
  defragmenting 6-11
  file change log 6-20
  file system switchout mechanisms 6-6
  file system type 6-8
  fragmentation reports 6-10
  fragmentation types 6-9
  identifying free space 6-8
  intent log 6-15
  intent log resizing 6-17
  logging options 6-18
  maintaining consistency 6-16
  resizing 5-14
  resizing in VEA 5-12
  using by default 6-6
vxinstall 2-11
vxmake 4-19
vxprint 3-21, 4-24
vxreattach 7-16
vxrecover 7-16
vxrelocd 7-23
vxresize 5-11, 5-13
vxunreloc 7-26
VxVM
  configuration backup 7-11
  user interfaces 2-16
VxVM and RAID arrays Intro-10
VxVM configuration daemon 3-5
vxvol rdpol prefer 5-9
vxvol rdpol round 5-9
vxvol rdpol select 5-9
vxvol stopall 5-18

X

XOR 1-16, 4-6