VNX Series
Release 7.0
Managing Volumes and File Systems with VNX
AVM
P/N 300-011-806
REV A02
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Copyright 1998 - 2011 EMC Corporation. All rights reserved.
Published September 2011
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.
For the most up-to-date regulatory document for your product line, go to the Technical
Documentation and Advisories section on EMC Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks used herein are the property of their respective owners.
Corporate Headquarters: Hopkinton, MA 01748-9103
2 Managing Volumes and File Systems on VNX AVM 7.0
Contents
Preface.....................................................................................................7
Chapter 1: Introduction...........................................................................9
Overview................................................................................................................10
System requirements.............................................................................................10
Restrictions.............................................................................................................11
AVM restrictions..........................................................................................11
Automatic file system extension restrictions...........................................12
Thin provisioning restrictions...................................................................13
VNX for block system restrictions............................................................14
Cautions..................................................................................................................14
User interface choices...........................................................................................17
Related information..............................................................................................22
Chapter 2: Concepts.............................................................................23
AVM overview.......................................................................................................24
System-defined storage pools overview............................................................24
Mapped storage pools overview.........................................................................25
User-defined storage pools overview.................................................................26
File system and automatic file system extension overview............................26
AVM storage pool and disk type options..........................................................27
AVM storage pools .....................................................................................27
Disk types.....................................................................................................27
System-defined storage pools....................................................................30
RAID groups and storage characteristics................................................33
User-defined storage pools .......................................................................35
Storage pool attributes..........................................................................................35
System-defined storage pool volume and storage profiles.............................39
VNX for block system-defined storage pool algorithms.......................40
VNX for block system-defined storage pools for RAID 5, RAID 3,
and RAID 1/0 SATA support................................................................43
VNX for block system-defined storage pools for Flash support..........45
Symmetrix system-defined storage pools algorithm.............................46
VNX for block primary pool-based file system algorithm....................48
VNX for block secondary pool-based file system algorithm................50
Symmetrix mapped pool file systems......................................................51
File system and storage pool relationship.........................................................53
Automatic file system extension.........................................................................55
Thin provisioning..................................................................................................59
Planning considerations.......................................................................................59
Chapter 3: Configuring.........................................................................65
Configure disk volumes.......................................................................................66
Provide storage from a VNX or legacy CLARiiON system to a
gateway system......................................................................................67
Create pool-based provisioning for file storage systems.......................68
Add disk volumes to an integrated system.............................................70
Create file systems with AVM.............................................................................70
Create file systems with system-defined storage pools.........................72
Create file systems with user-defined storage pools..............................74
Create the file system..................................................................................78
Create file systems with automatic file system extension.....................81
Create file systems with the automatic file system extension
option enabled........................................................................................82
Extend file systems with AVM............................................................................84
Extend file systems by using storage pools.............................................85
Extend file systems by adding volumes to a storage pool....................87
Extend file systems by using a different storage pool...........................89
Enable automatic file system extension and options.............................91
Enable thin provisioning............................................................................96
Enable automatic extension, thin provisioning, and all options
simultaneously.......................................................................................98
Create file system checkpoints with AVM.......................................................100
Chapter 4: Managing..........................................................................103
List existing storage pools..................................................................................104
Display storage pool details...............................................................................105
Display storage pool size information.............................................................106
Display size information for Symmetrix storage pools.......................108
Modify system-defined and user-defined storage pool attributes...............109
Modify system-defined storage pool attributes....................................110
Modify user-defined storage pool attributes.........................................113
Extend a user-defined storage pool by volume..............................................118
Extend a user-defined storage pool by size.....................................................119
Extend a system-defined storage pool.............................................................120
Extend a system-defined storage pool by size......................................121
Remove volumes from storage pools...............................................................122
Delete user-defined storage pools.....................................................................123
Delete a user-defined storage pool and its volumes............................124
Chapter 5: Troubleshooting................................................................125
AVM troubleshooting considerations...............................................................126
EMC E-Lab Interoperability Navigator............................................................126
Known problems and limitations.....................................................................126
Error messages.....................................................................................................127
EMC Training and Professional Services.........................................................128
Glossary................................................................................................129
Index.....................................................................................................133
Preface
As part of an effort to improve and enhance the performance and capabilities of its product lines,
EMC periodically releases revisions of its hardware and software. Therefore, some functions described
in this document may not be supported by all versions of the software or hardware currently in use.
For the most up-to-date information on product features, refer to your product release notes.
If a product does not function properly or does not function as described in this document, please
contact your EMC representative.
Special notice conventions
EMC uses the following conventions for special notices:
Note: Emphasizes content that is of exceptional importance or interest but does not relate to personal
injury or business/data loss.
NOTICE: Identifies content that warns of potential business or data loss.
CAUTION: Indicates a hazardous situation which, if not avoided, could result in minor or
moderate injury.
WARNING: Indicates a hazardous situation which, if not avoided, could result in death or
serious injury.
DANGER: Indicates a hazardous situation which, if not avoided, will result in death or serious
injury.
Where to get help
EMC support, product, and licensing information can be obtained as follows:
Product information: For documentation, release notes, software updates, or for
information about EMC products, licensing, and service, go to the EMC Online Support
website (registration required) at http://Support.EMC.com.
Troubleshooting: Go to the EMC Online Support website. After logging in, locate
the applicable Support by Product page.
Technical support: For technical support and service requests, go to EMC Customer
Service on the EMC Online Support website. After logging in, locate the applicable
Support by Product page, and choose either Live Chat or Create a service request. To
open a service request through EMC Online Support, you must have a valid support
agreement. Contact your EMC sales representative for details about obtaining a valid
support agreement or with questions about your account.
Note: Do not request a specific support representative unless one has already been assigned to
your particular system problem.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications.
Please send your opinion of this document to:
techpubcomments@EMC.com
Chapter 1: Introduction
Topics included are:
Overview on page 10
Restrictions on page 11
Cautions on page 14
When providing storage to the VNX from a Symmetrix storage
system, use regular Symmetrix volumes (also called hypervolumes), not Symmetrix
metavolumes.
Use AVM to create the primary EMC TimeFinder file system.
The options associated with automatic extension can be modified only on file systems
mounted with read/write permission. If the file system is mounted read-only, you must
remount the file system as read/write before modifying the automatic file system
extension, HWM, or maximum size options.
Enabling automatic file system extension and thin provisioning does not automatically
reserve the space from the storage pool for that file system. Administrators must ensure
that adequate storage space exists, so that the automatic extension operation can succeed.
When there is not enough storage space available to extend the file system to the requested
size, the file system extends to use all the available storage.
For example, if automatic extension requires 6 GB but only 3 GB are available, the file
system automatically extends to 3 GB. Although the file system was partially extended,
an error message appears to indicate that there was not enough storage space available
to perform automatic extension. When there is no available storage, automatic extension
fails. You must manually extend the file system to recover from this issue.
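The partial-extension behavior described above can be modeled in a few lines of Python. This is an illustrative sketch only, not VNX code; the function name and the GB units are hypothetical:

```python
def extend_file_system(current_gb, requested_extra_gb, pool_free_gb):
    """Model of AVM automatic extension when the pool runs short.

    Returns the new file system size and whether the full request was
    satisfied. When no storage at all is available, extension fails.
    """
    if pool_free_gb <= 0:
        raise RuntimeError("automatic extension failed: no available storage")
    # Partial extension: grant whatever the pool can supply.
    granted = min(requested_extra_gb, pool_free_gb)
    fully_satisfied = granted == requested_extra_gb
    return current_gb + granted, fully_satisfied

# Example from the text: 6 GB are required but only 3 GB are available,
# so a 10 GB file system grows to 13 GB and an error is reported.
new_size, satisfied = extend_file_system(current_gb=10,
                                         requested_extra_gb=6,
                                         pool_free_gb=3)
```

In the partial case the model returns `fully_satisfied=False`, mirroring the error message the system displays even though the file system did grow.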
Automatic file system extension is supported with EMC VNX Replicator. Enable automatic
extension only on the source file system in a replication scenario. The destination file
system synchronizes with the source file system and extends automatically. Do not enable
automatic extension on the destination file system.
When using automatic extension and thin provisioning, you can create replicated copies
of extendible file systems, but to do so, use slice volumes (slice=y).
You cannot create iSCSI thick LUNs on file systems that have automatic extension enabled.
You cannot enable automatic extension on a file system if there is a storage mode iSCSI
LUN present on the file system. You will receive an error, "Error 2216: <fs_name>: item
is currently in use by iSCSI." However, iSCSI virtually provisioned LUNs are supported
on file systems with automatic extension enabled.
Automatic extension is not supported on the root file system of a Data Mover or on the
root file system of a Virtual Data Mover (VDM).
Thin provisioning restrictions
The restrictions applicable to the thin provisioning feature are as follows:
VNX for file supports thin provisioning on Symmetrix DMX and CX4 storage systems.
For VNX for block systems: thin LUNs and thick LUNs, compression, auto-tiering, and
mirroring (EMC MirrorView or RecoverPoint).
For Symmetrix systems: thin LUNs and thick LUNs, auto-tiering, and R1, R2, or BCV
disk volumes.
To create file systems, use one or more types of AVM storage pools:
thin
thick
auto-tiering
mirrored
VNX compression
However, ensure that the mapped pool contains only the same type of LUNs that use the
same data services for the best file system performance:
all thick
all thin
Enable and disable it at any later time by modifying the file system.
The options that work with automatic file system extension are as follows:
HWM
Maximum size
Thin provisioning
The HWM and maximum size are described in Automatic file system extension on page 55.
Thin provisioning is described in Thin provisioning on page 59.
The default supported maximum size for any file system is 16 TB.
With automatic extension, the maximum size is the size to which the file system can grow,
up to the supported 16 TB. Setting the maximum size is optional with automatic extension,
but mandatory with thin provisioning. With thin provisioning enabled, users and applications
see the maximum size, while only a portion of that size is actually allocated to the file system.
Automatic extension allows the file system to grow as needed without system administrator
intervention, and to meet system operations requirements continuously, without interruptions.
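As a rough model of how the HWM, the maximum size, and thin provisioning interact, consider the sketch below. It is illustrative only; the function names are hypothetical and the real AVM policy considers more inputs:

```python
def should_auto_extend(used_gb, allocated_gb, hwm_percent):
    """Automatic extension is triggered once usage crosses the HWM."""
    return used_gb / allocated_gb * 100 >= hwm_percent

def effective_max_gb(max_size_gb):
    """Clamp a requested maximum to the 16 TB default supported ceiling."""
    return min(max_size_gb, 16 * 1024)

def visible_size_gb(allocated_gb, max_size_gb, thin=False):
    """With thin provisioning, users see the maximum size rather than
    the (smaller) space actually allocated to the file system."""
    return max_size_gb if thin else allocated_gb

# A 100 GB allocation with a 90% HWM triggers extension at 90 GB used;
# with thin provisioning and max_size=1000, clients see 1000 GB.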
AVM storage pool and disk type options
AVM provides a range of options for managing volumes. The VNX system can choose the
configuration and placement of the file systems by using system-defined storage pools, or
you can create a user-defined storage pool and define its attributes.
This section contains the following:
VNX for block storage pools are available for attached VNX for block storage systems.
Symmetrix storage pools are available for attached Symmetrix storage systems.
System-defined storage pools are dynamic by default. The AVM feature adds and removes
volumes automatically from the storage pool as needed. Table 5 on page 31 lists the
system-defined storage pools supported on the VNX for file. RAID groups and storage
characteristics on page 33 contains additional information about RAID group combinations
for system-defined storage pools.
Note: A storage pool can include disk volumes of only one type.
Table 5. System-defined storage pools (storage pool name: description)

symm_std: Designed for high performance and availability at medium cost. This storage pool uses STD disk volumes (typically RAID 1).
symm_ata: Designed for high performance and availability at low cost. This storage pool uses ATA disk volumes (typically RAID 1).
symm_std_rdf_src: Designed for high performance and availability at medium cost, specifically for storage that will be mirrored to a remote VNX for file that uses SRDF, or to a local VNX for file that uses TimeFinder/FS. Using SRDF/S with VNX for Disaster Recovery and Using TimeFinder/FS, NearCopy, and FarCopy on VNX for File provide more information about the SRDF feature.
symm_std_rdf_tgt: Designed for high performance and availability at medium cost, specifically as a mirror of a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R2STD disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_src: Designed for archival performance and availability at low cost, specifically for storage mirrored to a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R1ATA disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_tgt: Designed for archival performance and availability at low cost, specifically as a mirror of a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R2ATA disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.
symm_efd: Designed for very high performance and availability at high cost. This storage pool uses Flash disk volumes (typically RAID 5).
clar_r1: Designed for high performance and availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 1 mirrored-pair disk groups.
clar_r6: Designed for high availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 6 disk groups.
clar_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 4+1 RAID 5 disk groups.
clar_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 8+1 RAID 5 disk groups.
clarata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLATA disk drives in a RAID 5 configuration.
clarata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses LCFC, SATA II, and CLATA disk drives in a RAID 3 configuration.
clarata_r6: Designed for high availability at low cost. This storage pool uses CLATA disk volumes created from RAID 6 disk groups.
clarata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLATA disk volumes in a RAID 1/0 configuration.
clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses VNX Serial Attached SCSI (SAS) disk volumes created from RAID 5 disk groups.
clarsas_r6: Designed for high availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 6 disk groups.
clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLSAS disk volumes in a RAID 1/0 configuration.
clarefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CLEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups.
clarefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLEFD disk volumes in a RAID 1/0 configuration.
cm_r1: Designed for high performance and availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 1 mirrored-pair disk groups for use with MirrorView/Synchronous.
cm_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 4+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r6: Designed for high availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CMATA disk drives in a RAID 5 configuration for use with MirrorView/Synchronous.
cmata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses CMATA disk drives in a RAID 3 configuration for use with MirrorView/Synchronous.
cmata_r6: Designed for high availability at low cost. This storage pool uses CMATA disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMATA disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
cmsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CMSAS disk volumes created from RAID 5 disk groups for use with MirrorView/Synchronous.
cmsas_r6: Designed for high availability at low cost. This storage pool uses CMSAS disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMSAS disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
cmefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CMEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cmefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMEFD disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
RAID groups and storage characteristics
The following table correlates the storage array to the RAID groups for system-defined
storage pools.
Table 6. RAID group combinations (storage: supported RAID 5, RAID 6, and RAID 1 groups)

NX4 SAS or SATA: RAID 5: 2+1, 3+1, 4+1, 5+1; RAID 6: 4+2; RAID 1: 1+1 RAID 1/0.
NS20 / NS40 / NS80 FC: RAID 5: 4+1, 8+1; RAID 6: 4+2, 6+2, 12+2; RAID 1: 1+1 RAID 1.
NS20 / NS40 / NS80 ATA: RAID 5: 4+1, 6+1, 8+1; RAID 6: 4+2, 6+2, 12+2; RAID 1: not supported.
NS-120 / NS-480 / NS-960 FC: RAID 5: 4+1, 8+1; RAID 6: 4+2, 6+2, 12+2; RAID 1: 1+1 RAID 1/0.
NS-120 / NS-480 / NS-960 ATA: RAID 5: 4+1, 6+1, 8+1; RAID 6: 4+2, 6+2, 12+2; RAID 1: 1+1 RAID 1/0.
NS-120 / NS-480 / NS-960 EFD: RAID 5: 4+1, 8+1; RAID 6: not supported; RAID 1: 1+1 RAID 1/0.
VNX SAS: RAID 5: 3+1, 4+1, 6+1, 8+1; RAID 6: 4+2, 6+2; RAID 1: 1+1 RAID 1/0.
VNX NL SAS: RAID 5: not supported; RAID 6: 4+2, 6+2; RAID 1: not supported.
User-defined storage pools
For some customer environments, more user control is required than the system-defined
storage pools offer. One way for administrators to have more control is to create their own
storage pools and define the attributes of the storage pool.
AVM user-defined storage pools allow you to have more control over how the storage is
allocated to file systems. Administrators can create a storage pool. They can also add volumes
to the storage pool either by manually selecting and building the volume structure, or by
auto-selection, extending the storage pool with new volumes when required, and removing
volumes that are no longer required in the storage pool.
Auto-selection is performed by choosing a minimum size and a system pool which describes
the disk attributes. With auto-selection, whole disk volumes are taken from the volumes
available in the system pool and placed in the user pool according to the selected stripe
options. The auto-selection uses the same AVM algorithms that choose which disk volumes
to stripe in a system pool. When extending a user-defined storage pool, AVM references the
last pool member's volume structure and makes the best effort to keep the underlying volume
structures consistent. System-defined storage pool volume and storage profiles on page 39
contains additional information.
While user-defined storage pools have attributes similar to system-defined storage pools,
user-defined storage pools are not dynamic. They require administrators to add and remove
volumes manually.
If you define the storage pool, you must also explicitly add and remove storage from the
storage pool and define the attributes for that storage pool. Use the nas_pool command to
do the following:
VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA
support on page 43
VNX for block system-defined storage pools for Flash support on page 45
The is_greedy setting indicates whether the system-defined storage pool must add a new member
volume to meet the request, or whether it must use all the available space in the storage pool
before adding a new member volume. AVM then checks the is_dynamic setting.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough
free space on the existing volumes that the file system is using. Table 7 on page 36 describes the
is_greedy behavior.
The is_dynamic setting indicates whether the storage pool can dynamically grow and shrink:
If set to yes, AVM can automatically add a member volume to meet the request.
If set to no, and a member volume must be added to meet the request, the user must
manually add the member volume to the storage pool.
The flag that requests a file system slice indicates whether the file system can be built on a slice
volume from a member volume.
The default_slice_flag setting indicates whether AVM can slice storage pool member volumes
to meet the request.
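A simplified model of how AVM consults these attributes when servicing a space request is sketched below. This is inferred from the text and hypothetical in its names; Table 7 in this document defines the exact is_greedy behavior:

```python
def choose_space(request_gb, free_in_members_gb, is_greedy,
                 is_dynamic, default_slice_flag):
    """Return the action AVM takes for a space request, per the attribute
    checks described in the text.

    is_greedy=True: prefer adding a new member volume to meet the request.
    is_greedy=False: consume existing free space before adding members.
    is_dynamic decides whether AVM may add that member volume itself.
    """
    needs_new_member = is_greedy or free_in_members_gb < request_gb
    if not needs_new_member:
        # Existing members suffice; slice them only if slicing is allowed.
        return ("slice existing members" if default_slice_flag
                else "use whole member volumes")
    return ("AVM adds member volume" if is_dynamic
            else "user must add member volume")
```

For example, a non-greedy dynamic pool with 50 GB free serves a 10 GB request from existing members, while the same request against 5 GB free causes AVM to add a member volume.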
Most of the system-defined storage pools for VNX for block storage systems first search for
four same-size disk volumes from different buses, different SPs, and different RAID groups.
The absolute criteria that the volumes must meet are as follows:
Disk volume must match the type specified in the storage profile of the storage pool.
No two disk volumes can come from the same RAID group.
If multiple volumes match the first two criteria, then the disk volume must be from the
least-used RAID group.
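The absolute criteria can be sketched as a selection routine. This is illustrative Python, not AVM code; the bus and storage-processor balancing shown in Figure 2 is omitted for brevity, and `rg_usage` stands in for the least-used calculation (disk volumes used in a RAID group divided by disk volumes visible in it):

```python
from collections import namedtuple

Disk = namedtuple("Disk", "name type size_gb raid_group")

def select_candidates(disks, wanted_type, wanted_size_gb, rg_usage):
    """Apply the absolute criteria: match the profile's disk type and size,
    take at most one disk per RAID group, and prefer disks from the
    least-used RAID groups."""
    matches = [d for d in disks
               if d.type == wanted_type and d.size_gb == wanted_size_gb]
    matches.sort(key=lambda d: rg_usage[d.raid_group])  # least used first
    chosen, seen_groups = [], set()
    for d in matches:
        if d.raid_group not in seen_groups:  # no two from the same RAID group
            chosen.append(d)
            seen_groups.add(d.raid_group)
    return chosen[:4]  # AVM first looks for four same-size disk volumes
```

Given disks spread across several RAID groups, the routine returns at most four same-size volumes, one per RAID group, ordered from least-used group to most-used.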
Figure 2 on page 42 shows the algorithm used to select disk volumes to add to a pool member
in an AVM VNX for block system-defined storage pool, which is either clar_r1,
clar_r5_performance, or clar_r5_economy.
Figure 2. clar_r1, clar_r5_performance, and clar_r5_economy storage pools algorithm
[Flowchart summary: if four, three, or two same-size disk volumes meeting the absolute criteria
are available, select volumes balanced across buses and balanced across storage processors
from the least-used RAID groups, stripe the volumes together using an 8 K stripe size, insert
the stripe into the storage pool, place a metavolume on the stripe, and slice from the stripe the
smaller of the free space available or the file system request. If only one qualifying disk volume
is available, place the disk volume in the pool with no stripe or metavolume on top. If no
qualifying disk volume is available and the space already in the pool is not enough, the request
fails. "Least used" is defined by the number of disk volumes used in a RAID group divided by
the number of disk volumes visible in that RAID group.]
Figure 3 on page 42 shows the structure of a clar_r5_performance storage pool. The volumes
in the storage pools are balanced between SP A and SP B.
Figure 3. clar_r5_performance storage pool structure
[Diagram summary: the clar_r5_performance storage pool contains stripe_volume1 and
stripe_volume2, each striped across VNX 4+1 RAID 5 disk volumes (dw, dx, dy, dz, dm, dn,
and so on), with one stripe volume owned by storage processor A and the other owned by
storage processor B.]
VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support
The three VNX for block system-defined storage pools that provide support for the SATA
environment are clarata_archive (RAID 5), clarata_r3 (RAID 3), and clarata_r10 (RAID 1/0).
The clarata_r3 storage pool follows the basic VNX for block algorithm explained in
System-defined storage pool volume and storage profiles on page 39, but uses only one
disk volume and does not allow striping of volumes. One of the applications for this pool
is backup to disk. Users can manage the RAID 3 disk volumes manually in a user-defined
storage pool. However, using the system-defined storage pool clarata_r3 helps users maximize
the benefit from AVM disk selection algorithms. The clarata_r3 storage pool supports only
VNX for block Capacity drives, not Performance drives.
The criteria that the one disk volume must meet are as follows:
Disk volume must match the type specified in the storage profile of the storage pool.
If multiple volumes match this criterion, then the disk volume must be from the
least-used RAID group.
Figure 4 on page 44 shows the storage pool clarata_r3 algorithm.
Figure 4. clarata_r3 storage pool algorithm
[Flowchart summary: if one disk volume is available and meets the absolute criteria, create a
metavolume on the disk volume and place the metavolume in the storage pool; otherwise, the
request fails with an error.]
The storage pools clarata_archive and clarata_r10 differ from the basic VNX for block
algorithm. These storage pools use two disk volumes or a single disk volume, and all Capacity
drives are the same.
Figure 5 on page 45 shows the profile algorithm used to select disk volumes by using either
the clarata_archive or clarata_r10 storage pool.
Figure 5. clarata_archive and clarata_r10 storage pools algorithm
[Flowchart summary: request a new pool volume made of N disk volumes, starting with N = 2
and falling back to N = 1 if no pair is available. If the pool volume is created, put the request
on the metavolume. If creation fails, sort the existing pool volumes by utilization, pick the first
entry, slice the minimum of the free space available or the space needed from that pool entry,
repeat with other pool volumes while the space request remains unsatisfied, and concatenate
the slices together if necessary. If no disk volume or pool volume can satisfy the request, the
request fails.]
VNX for block system-defined storage pools for Flash support
The VNX for file provides the clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage
pools for Flash drive support on the VNX for block storage system. AVM uses the same disk
selection algorithm and volume structure for each Flash pool. However, the algorithm differs
from the standard VNX for block algorithm explained in System-defined storage pool volume
and storage profiles on page 39 and is outlined next. The algorithm adheres to EMC best
practices to achieve the overall best performance and use of Flash drives. Users can also
manually manage Flash drives in user-defined pools.
The AVM algorithm used for disk selection and volume structure for all Flash system-defined
pools is as follows:
1. The LUN creation process is responsible for storage processor balancing. By default, run
the setup_clariion command on integrated systems to set up storage processor balancing.
2. Use a default stripe width of 256 KB (provided in the profile). The stripe member count
in the profile is ignored and should be left at 1.
3. When two or more LUNs of the same size are available, always stripe LUNs. Otherwise,
concatenate LUNs.
4. No RAID group balancing or RAID group usage is considered.
5. No order is applied to the LUNs being striped together, except that all LUNs from the
same RAID group in the stripe will be next to each other. For example, storage processor
balanced order is not applied.
6. Use a maximum of two RAID groups from which to take LUNs:
a. If only one RAID group is available, use every same-size LUN in the RAID group.
This maximizes the LUN count and meets the size requested.
b. If only two RAID groups are available, use every same-size LUN in each RAID group.
This maximizes the LUN count and meets the size requested.
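Rules 3 and 6 above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the actual AVM implementation; `plan_flash_volume` and its `(size, raid_group)` LUN tuples are hypothetical, and preferring the size bucket with the most LUNs is an assumption made for illustration.

```python
from collections import defaultdict

def plan_flash_volume(luns):
    """Sketch of the Flash-pool rules above (illustrative, not a VNX
    API). luns is a list of (size, raid_group) tuples. Two or more
    same-size LUNs are striped, drawing from at most two RAID groups;
    otherwise the LUNs are concatenated."""
    if not luns:
        return ("none", [])
    by_size = defaultdict(list)
    for size, rg in luns:
        by_size[size].append((size, rg))
    # Assumption for illustration: prefer the size bucket with the most
    # LUNs, which yields the widest possible stripe.
    best = max(by_size.values(), key=len)
    if len(best) >= 2:
        # Rule 6: take LUNs from a maximum of two RAID groups.
        groups = []
        for _, rg in best:
            if rg not in groups:
                groups.append(rg)
        chosen = [lun for lun in best if lun[1] in groups[:2]]
        return ("stripe", chosen)
    # Rule 3: without two same-size LUNs, concatenate instead.
    return ("concat", list(luns))
```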
Figure 6 on page 46 shows the profile algorithm used to select disk volumes by using either
the clarefd_r5, clarefd_r10, cmefd_r5, or cmefd_r10 storage pool.
Figure 6. clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools algorithm
Symmetrix system-defined storage pools algorithm
AVM works differently with Symmetrix storage systems because of the size and uniformity
of the disk volumes involved. Typically, the disk volumes exported from a Symmetrix
storage system are small and uniform in size. The aggregation strategy used by Symmetrix
storage pools is primarily to combine many small disk volumes into larger volumes that
can be used by file systems. AVM attempts to distribute the input/output (I/O) to as many
Symmetrix directors as possible. The Symmetrix storage system can use slicing and striping
to distribute I/O among the physical disks on the storage system. This is less of a concern
for the AVM aggregation strategy.
A Symmetrix storage pool creates a stripe volume across one set of Symmetrix disk volumes,
or creates a metavolume, as necessary to meet the request. The stripe or metavolume is
added to the Symmetrix storage pool. When the administrator asks for a specific number
of gigabytes of space from the Symmetrix storage pool, the requested size of space is allocated
from this system-defined storage pool. AVM adds to and takes from the system-defined
storage pool as required. The stripe size is set in the system-defined profiles. You cannot
modify the stripe size of a system-defined storage pool. The default stripe size for a Symmetrix
storage pool is 256 KB. Multipath file system (MPFS) requires a stripe depth of 32 KB or
greater.
The algorithm that AVM uses looks for a set of eight disk volumes. If the set of eight is not
found, then the algorithm looks for a set of four disk volumes. If the set of four is not found,
then the algorithm looks for a set of two disk volumes. If the set of two disk volumes is not
found, then the algorithm looks for one disk volume. AVM stripes the disk volumes together
if the disk volumes are all of the same size. If the disk volumes are not the same size, AVM
creates a metavolume on top of the disk volumes. AVM then adds the stripe or the
metavolume to the storage pool.
If AVM cannot find any disk volumes, it looks for a metavolume in the storage pool that
has space, takes a slice from that metavolume, and makes a metavolume over that slice.
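The 8/4/2/1 search above can be sketched as follows. This is an illustrative Python sketch only, not VNX code; representing disk volumes as a plain list of sizes is a hypothetical simplification.

```python
def pick_symm_disk_set(free_disk_volumes):
    """Sketch of the 8/4/2/1 search above (illustrative, not a VNX
    API). free_disk_volumes is a list of disk-volume sizes. Returns the
    chosen set plus "stripe" when all members are the same size, or
    "meta" (build a metavolume) when the sizes differ."""
    for count in (8, 4, 2, 1):
        if len(free_disk_volumes) >= count:
            chosen = free_disk_volumes[:count]
            action = "stripe" if len(set(chosen)) == 1 else "meta"
            return chosen, action
    # No free disk volumes: AVM falls back to slicing an existing
    # pool metavolume that still has space.
    return [], None
```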
Figure 7 on page 47 shows the AVM algorithm used to select disk volumes by using a
Symmetrix system-defined storage pool.
Figure 7. Symmetrix storage pool algorithm
Figure 8 on page 48 shows the structure of a Symmetrix storage pool.
Figure 8. Symmetrix storage pool structure
All this system-defined storage pool activity is transparent to users and provides an easy
way to create and manage file systems. The system-defined storage pools do not allow users
much control over how AVM aggregates storage to meet file system needs, but most
users prefer ease-of-use over control.
When users make a request for a new file system that uses the system-defined storage pools,
AVM does the following:
1. Determines if more volumes need to be added to the storage pool. If so, selects and adds
volumes.
2. Selects an existing, available storage pool volume to use for the file system. The volume
might also be sliced to obtain the correct size for the file system request. If the request is
larger than the largest volume, AVM concatenates volumes to create the size required
to meet the request.
3. Places a metavolume on the resulting volume and builds the file system within the
metavolume.
4. Returns the file system information to the user.
All system-defined storage pools have specific, predictable rules for getting disk volumes
into storage pools, provided by their associated profiles.
VNX for block primary pool-based file system algorithm
AVM uses the primary pool-based algorithm as follows:
1. Striping is tried first. If disk volumes cannot be striped, then concatenation is tried.
2. AVM checks for free disk volumes:
If there are no free disk volumes and the slice option is set to no, there is not enough
space available and the request fails.
For file system extension, AVM always tries to expand onto the existing volumes of the file system.
However, if there is not enough space to fulfill the size request on the existing volumes, additional
storage is obtained using the above algorithm, and AVM attempts to match the data service policies
of the first used disk volume of the file system.
All volumes mentioned above, whether a stripe or a concatenation, are sliced by default.
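A minimal sketch of the stripe-first, concatenate-fallback, slice-by-default behavior follows. The default target stripe width of five dVols and the grouping by identical size are simplifications assumed for illustration; the function and its return shape are hypothetical, not a VNX API.

```python
def build_primary_volume(free_dvols, size_needed):
    """Sketch of the primary pool-based flow above (illustrative only,
    not a VNX API). free_dvols is a list of dVol sizes. Striping is
    tried first; otherwise concatenation; the result is sliced down to
    the requested size by default."""
    if not free_dvols:
        return None                       # no free dVols: the request fails
    same = [d for d in free_dvols if d == free_dvols[0]]
    if len(same) >= 2:
        chosen = same[:5]                 # assumed default target stripe width of 5
        capacity = len(chosen) * chosen[0]
        method = "stripe"
    else:
        chosen, capacity, method = list(free_dvols), sum(free_dvols), "concat"
    if capacity < size_needed:
        return None                       # not enough space: the request fails
    # Volumes are sliced by default to match the requested size.
    return {"method": method, "dvols": chosen, "slice_to": size_needed}
```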
Figure 9 on page 49 shows the VNX for block primary pool-based file system algorithm.
Figure 9. VNX for block primary pool-based file systems
VNX for block secondary pool-based file system algorithm
AVM uses the secondary pool-based algorithm as follows:
1. Concatenation will be used. Striping will not be used.
2. Unless requested, slicing will not be used.
3. AVM checks for free disk volumes, and sorts them by thin and thick disk volumes.
4. If there are no free disk volumes and the slice option is set to no, there is not enough
space available and the request fails.
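A minimal sketch of the concatenation-only behavior described above follows. The largest-first ordering is an assumption made for illustration, and the helper is hypothetical, not a VNX API.

```python
def build_secondary_volume(free_dvols, size_needed, slice_requested=False):
    """Sketch of the secondary pool-based flow above (illustrative, not
    a VNX API). Concatenation only, no striping; slicing only when
    explicitly requested."""
    chosen, total = [], 0
    for dvol in sorted(free_dvols, reverse=True):  # assumed largest-first order
        if total >= size_needed:
            break
        chosen.append(dvol)
        total += dvol
    if total < size_needed:
        return None                  # not enough space: the request fails
    # Slicing is used only when explicitly requested (rule 2 above).
    size = size_needed if slice_requested else total
    return {"dvols": chosen, "size": size}
```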
Create the storage pool and add the volumes you want to use manually before creating
the file system.
VNX for file volume management concepts (metavolumes, slice volumes, stripe volumes,
and disk volumes) and the nas_volume, nas_server, nas_slice, and nas_disk commands
RAID technology
VNX VG2
VNX VG8
The gateway system stores data on VNX for block user LUNs or Symmetrix hypervolumes.
If the user LUNs or hypervolumes are not configured correctly on the array, AVM and the
Unisphere for File software cannot be used to manage the storage.
Typically, an EMC Customer Support Representative does the initial setup of disk volumes
on these gateway storage systems.
However, if your VNX gateway system is attached to a VNX for block storage system and
you want to add disk volumes to the configuration, use the procedures that follow:
1. Use the Unisphere for Block software or the VNX for block CLI to create VNX for block
user LUNs.
2. Use either the Unisphere for File software or the VNX for file CLI to make the new user
LUNs available to the VNX for file as disk volumes.
The user LUNs must be created before you create file systems.
To add user LUNs, you must be familiar with the following:
Process of creating RAID groups and user LUNs for the VNX for file volumes.
The documentation for Unisphere for Block and VNX for block CLI describes how to create
RAID groups and user LUNs.
If the disk volumes are configured by EMC experts, go to Create file systems with AVM on
page 70.
Provide storage from a VNX or legacy CLARiiON system to a gateway
system
1. Create RAID groups and LUNs (as needed for VNX for file volumes) by using the
Unisphere software or VNX for block CLI:
Always create the user LUNs in balanced pairs, one owned by SP A and one owned
by SP B. The paired LUNs must be the same size.
FC or SAS disks must be configured as RAID 1/0, RAID 5, or RAID 6. The paired
LUNs do not need to be in the same RAID group but should be of the same RAID
type. RAID groups and storage characteristics on page 33 lists the valid RAID group
and storage array combinations. Gateway models use the same combinations as the
NS-80 (for CX3 storage systems) or the NS-960 (for CX4 storage systems).
SATA disks must be configured as RAID 1/0, RAID 5, or RAID 6. All LUNs in a RAID
group must belong to the same SP. Create pairs by using LUNs from two RAID groups.
RAID groups and storage characteristics on page 33 lists the valid RAID group and
storage array combinations. Gateway models use the same combinations as the NS-80
(for CX3 storage systems) or the NS-960 (for CX4 storage systems).
The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.
Use these settings when creating RAID group user LUNs:
RAID Type: RAID 1/0, RAID 5, or RAID 6 for FC or SAS disks and RAID 1/0, RAID
5, or RAID 6 for SATA disks
Alignment Offset: 0
2. Create a storage group for the gateway system. Using the VNX for block CLI, type the
following command:
naviseccli -h <system> storagegroup -create -gname <groupname>
3. Ensure that you add the LUNs to the gateway system's storage group. Set the HLU to
16 or greater.
Using the VNX for block CLI, type the following command:
naviseccli -h <system> storagegroup -addhlu -gname ~filestorage -hlu
<HLU number> -alu <LUN number>
4. Perform one of these steps to make the new user LUNs available to the VNX for file:
Using the VNX for file CLI, type the following command:
nas_diskmark -mark -all
Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might
cause data loss or unavailability.
Create pool-based provisioning for file storage systems
1. Create storage pools and LUNs as needed for VNX for file volumes.
Use these settings when creating user LUNs for use with mapped pools:
Alignment Offset: 0
2. Add the LUNs to the gateway system's storage group, with an HLU of 16 or greater.
Using the VNX for block CLI, type the following command:
naviseccli -h <system> storagegroup -addhlu -gname ~filestorage -hlu
<HLU number> -alu <LUN number>
3. Use one of these methods to make the new user LUNs available to the VNX for file:
Using the VNX for file CLI, type the following command:
nas_diskmark -mark -all
Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might
cause data loss or unavailability.
Add disk volumes to an integrated system
Configure unused or new disk devices on a VNX for block storage system by using the Disk
Provisioning Wizard for File. This wizard is available only for integrated VNX for file models
(NX4 and NS non-gateway systems excluding NS80), including Fibre Channel-enabled
models, attached to a single VNX for block storage system.
Note: For VNX systems, Advanced Data Service Policy features such as FAST and compression are
supported on pool-based LUNs only. They are not supported on RAID-based LUNs.
To open the Disk Provisioning Wizard for File in the Unisphere software:
1. Select Storage > Storage Configuration > Storage Pools.
2. From the task list, select Wizards > Disk Provisioning Wizard for File.
Note: To use the Disk Provisioning Wizard for File, you must log in to Unisphere by using the global
sysadmin user account or by using a user account which has privileges to manage storage.
An alternative to the Disk Provisioning Wizard for File is available by using the VNX for
file CLI at /nas/sbin/setup_clariion. This alternative is not available for unified VNX systems.
The script performs the following actions:
Provisions the disks on integrated (non-Performance) VNX for block storage systems
when there are unbound disks to configure. This script binds the data LUNs on the xPEs
and DAEs, and makes them accessible to the Data Movers.
Ensures that your RAID groups and LUN settings are appropriate for your VNX for file
server configuration.
The Unisphere for File software supports only the array templates for legacy EMC CLARiiON
CX and CX3 storage systems. CX4 and VNX systems must use the User_Defined mode
with the /nas/sbin/setup_clariion CLI script.
The setup_clariion script allows you to configure VNX for block storage systems on a
shelf-by-shelf basis by using predefined configuration templates. For each enclosure (xPE
or DAE), the script examines your specific hardware configuration and gives you a choice
of appropriate templates. You can mix combinations of RAID configurations on the same
storage system. The script then combines the shelf templates into a custom, User_Defined
array template for each VNX for block system, and then configures your array.
Create file systems with AVM
This section describes the procedures to create a file system by using AVM storage pools,
and also explains how to create file systems by using the automatic file system extension
feature.
You can enable automatic file system extension on new or existing file systems if the file
system has an associated AVM storage pool. When you enable automatic file system
extension, use the nas_fs command options to adjust the HWM value, set a maximum file
size to which the file system can be extended, and enable thin provisioning. Create file
systems with automatic file system extension on page 81 provides more information.
You can create file systems by using storage pools with automatic file system extension
enabled or disabled. Specify the storage system from which to allocate space for the type of
storage pool that is being created.
Choose any of these procedures to create file systems:
VNX for block storage systems display as a prefix of alphabetic characters before a set
of integers, for example, FCNTR074200038-0019.
A user-defined storage pool can be created either by using manual volume management
or by letting AVM create the storage pool with a specified size. If you use manual volume
management, you must first stripe the volumes together and add the resulting volumes
to the storage pool you create. Managing Volumes and File Systems for VNX Manually
describes the steps to create and manage volumes.
You cannot use disk volumes you have reserved for other purposes. For example, you
cannot use any disk volumes reserved for a system-defined storage pool. Controlling
Access to System Objects on VNX contains more information on access control levels.
When creating a user-defined storage pool to reserve disk volumes from a VNX for block
storage system, use disk volumes that are storage-processor balanced and have the same
qualities. Otherwise, AVM cannot find matching pairs, and the number of usable disk
volumes might be more limited than was intended.
To create a file system with a user-defined storage pool:
Create file systems with the automatic file system extension option enabled on page 82
Create a user-defined storage pool by volumes
To create a user-defined storage pool (from which space for the file system is allocated) by
volumes, add volumes to the storage pool and define the storage pool attributes.
Action
To create a user-defined storage pool by volumes, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -description <desc> -volumes
<volume_name>[,<volume_name>,...] -default_slice_flag {y|n}
where:
<name> = name of the storage pool.
<acl> = designates an access control level for the new storage pool. Default value is 0.
<desc> = assigns a comment to the storage pool. Type the comment within quotes.
<volume_name> = names of the volumes to add to the storage pool. Can be a metavolume, slice volume, stripe volume,
or disk volume. Use a comma to separate each volume name.
-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the
storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot be sliced,
and volumes specified cannot be built on a slice.
Example:
To create a user-defined storage pool named marketing with a description, with the disk members d126, d127, d128, and
d129 specified, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for marketing" -volumes
d126,d127,d128,d129 -default_slice_flag y
Output
id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = CLSTD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Create a user-defined storage pool by size
To create a user-defined storage pool (from which space for the file system is allocated) by
size, specify a template pool, size of the pool, minimum stripe size, and number of stripe
members.
Action
To create a user-defined storage pool by size, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -description <desc>
-default_slice_flag {y|n} -size <integer>[M|G|T] -storage <system_name>
-template <system_pool_name> -num_stripe_members <num_stripe_mem>
-stripe_size <num>
where:
<name> = name of the storage pool.
<acl> = designates an access control level for the new storage pool. Default value is 0.
<desc> = assigns a comment to the storage pool. Type the comment within quotes.
-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed
from the storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot
be sliced, and volumes specified cannot be built on a slice.
<integer> = size of the storage pool, an integer between 1 and 1024. Specify the size in GB (default) by typing <integer>G
(for example, 250G), in MB by typing <integer>M (for example, 500M), or in TB by typing <integer>T (for example, 1T).
<system_name> = storage system on which one or more volumes will be created and added to the storage pool.
<system_pool_name> = system pool template used to create the user pool. Required when the -size option is specified.
The user pool will be created by using the profile attributes of the specified system pool template.
<num_stripe_mem> = number of stripe members used to create the user pool. Works only when both the -size and
-template options are also specified. It overrides the number of stripe members attribute of the specified system pool
template.
<num> = stripe size used to create the user pool. Works only when both the -size and -template options are also specified.
It overrides the stripe size attribute of the specified system pool template.
Example:
To create a 20 GB user-defined storage pool that is named marketing with a description by using the clar_r5_performance
pool, and that contains 4 stripe members with a stripe size of 32768 KB, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for marketing"
-default_slice_flag y -size 20G -template clar_r5_performance -num_stripe_members
4 -stripe_size 32768
Output
id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = v213
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = CLSTD
server_visibility = server_2,server_3
template_pool = clar_r5_performance
num_stripe_members = 4
stripe_size = 32768
Create the file system
To create a file system, you must first create a user-defined storage pool. Create a user-defined
storage pool by volumes on page 76 and Create a user-defined storage pool by size on page
76 provide more information.
Use this procedure to create a file system by specifying a user-defined storage pool and an
associated storage system:
1. List the storage system by typing:
$ nas_storage -list
Output:
id acl name serial number
1 0 APM00033900125 APM00033900125
2. Get detailed information of a specific attached storage system in the list by using this
command syntax:
$ nas_storage -info <system_name>
where:
<system_name> = name of the storage system
Example:
To get detailed information of the storage system APM00033900125, type:
$ nas_storage -info APM00033900125
Output:
id = 1
arrayname = APM00033900125
name = APM00033900125
model_type = RACKMOUNT
model_num = 630
db_sync_time = 1073427660 == Sat Jan 6 17:21:00 EST 2007
num_disks = 30
num_devs = 21
num_pdevs = 1
num_storage_grps = 0
num_raid_grps = 10
cache_page_size = 8
wr_cache_mirror = True
low_watermark = 70
high_watermark = 90
unassigned_cache = 0
failed_over = False
captive_storage = True
Active Software
Navisphere = 6.6.0.1.43
ManagementServer = 6.6.0.1.43
Base = 02.06.630.4.001
Storage Processors
SP Identifier = A
signature = 926432
microcode_version = 2.06.630.4.001
serial_num = LKE00033500756
prom_rev = 3.00.00
agent_rev = 6.6.0 (1.43)
phys_memory = 3968
sys_buffer = 749
read_cache = 32
write_cache = 3072
free_memory = 115
raid3_mem_size = 0
failed_over = False
hidden = True
network_name = spa
ip_address = 128.221.252.200
subnet_mask = 255.255.255.0
gateway_address = 128.221.252.100
num_disk_volumes = 11 - root_disk root_ldisk d3 d4 d5 d6 d8
d13 d14 d15 d16
SP Identifier = B
signature = 926493
microcode_version = 2.06.630.4.001
serial_num = LKE00033500508
prom_rev = 3.00.00
agent_rev = 6.6.0 (1.43)
phys_memory = 3968
raid3_mem_size = 0
failed_over = False
hidden = True
network_name = OEM-XOO25IL9VL9
ip_address = 128.221.252.201
subnet_mask = 255.255.255.0
gateway_address = 128.221.252.100
num_disk_volumes = 4 - disk7 d9 d11 d12
Note: This is not a complete output.
3. Create the file system from a user-defined storage pool and designate the storage system
on which you want the file system to reside by using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create <volume_name> pool=<pool>
storage=<system_name>
where:
<fs_name> = name of the file system
<type> = type of file system, such as uxfs (default), mgfs, or rawfs
<volume_name> = name of the volume
<pool> = name of the storage pool
<system_name> = name of the storage system on which the file system resides
Example:
To create the file system ufs1 from a user-defined storage pool and designate the
APM00033900125 storage system on which you want the file system ufs1 to reside, type:
$ nas_fs -name ufs1 -type uxfs -create MTV1 pool=marketing storage=APM00033900125
Output:
id = 2
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = MTV1
pool = marketing
member_of = root_avm_fs_group_2
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication= off
stor_devs = APM00033900125-0111
disks = d6,d8,d11,d12
Create file systems with automatic file system extension
Use the -auto_extend option of the nas_fs command to enable automatic file system extension
on a new file system created with AVM. The option is disabled by default.
Note: Automatic file system extension does not alleviate the need for appropriate planning. Create
the file systems with adequate space to accommodate the estimated usage. Allocating too little space
to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt
to extend the file system. If the Control Station cannot adequately extend the file system to accommodate
the usage quickly enough, the automatic extension fails.
If automatic file system extension is disabled and the file system reaches 90 percent full, a
warning message is written to the sys_log. Any action necessary is at the administrator's
discretion.
Note: You do not need to set the maximum size for a newly created file system when you enable
automatic extension. The default maximum size is 16 TB. With automatic extension enabled, even if
the HWM is not set, the file system automatically extends up to 16 TB, if the storage space is available
in the storage pool.
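The extension trigger described above can be sketched as a simple check. This is an illustrative helper, not Control Station code; comparing usage against the HWM percentage and capping at the 16 TB default maximum are the only behaviors modeled.

```python
DEFAULT_MAX_GB = 16 * 1024  # default maximum file system size (16 TB)

def should_auto_extend(used_gb, size_gb, hwm_pct=90):
    """Sketch of the extension trigger above (illustrative helper, not
    Control Station code): extend once usage reaches the HWM, unless
    the file system is already at the default 16 TB maximum."""
    if size_gb >= DEFAULT_MAX_GB:
        return False
    return used_gb / size_gb * 100 >= hwm_pct
```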
Use this procedure to create a file system by specifying a system-defined storage pool and
a storage system, and enable automatic file system extension.
Action
To create a file system with automatic file system extension enabled, use this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<size> pool=<pool>
storage=<system_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system.
<type> = type of file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G),
in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool from which to allocate space to the file system.
<system_name> = name of the storage system associated with the storage pool.
Example:
To enable automatic file system extension on a new 10 GB file system created by specifying a system-defined storage
pool and a VNX for block storage system, type:
$ nas_fs -name ufs1 -type uxfs -create size=10G pool=clar_r5_performance
storage=APM00042000814 -auto_extend yes
Output
id = 434
name = ufs1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v1634
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,thin=no
deduplication= off
stor_devs = APM00042000814-001D,APM00042000814-001A,
APM00042000814-0019,APM00042000814-0016
disks = d20,d12,d18,d10
Create file systems with the automatic file system extension option
enabled
When you create a file system with automatic extension enabled, you can set the point at
which the file system automatically extends (the HWM) and the maximum size to which it
can grow. You can also enable thin provisioning at the same time that you create or extend
a file system. Enable automatic file system extension and options on page 91 provides
information on modifying the automatic file system extension options.
If you set the slice=no option on the file system, the actual file system size might become
bigger than the size specified for the file system, which would exceed the maximum size.
In this case, you receive a warning, and the automatic extension fails. If you do not specify
the file system slice option (-option slice=yes|no) when you create the file system, it defaults
to the setting of the storage pool. Modify system-defined and user-defined storage pool
attributes on page 109 provides more information.
Note: If the actual file system size is above the HWM when thin provisioning is enabled, the client
sees the actual file system size instead of the specified maximum size.
Enabling automatic file system extension and thin provisioning options does not
automatically reserve the space from the storage pool for that file system. So that the
automatic extension can succeed, administrators must ensure that adequate storage space
exists. If the available storage is less than the maximum size setting, automatic extension
fails. Users receive an error message when the file system becomes full, even though it
appears that there is free space in the file system. The file system must be manually extended.
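The administrator's capacity check described above can be sketched as follows. This is an illustrative helper under the assumption that an extension succeeds only when the pool's free space covers the requested growth within the configured maximum size; it is not VNX code.

```python
def can_auto_extend(pool_free_gb, fs_size_gb, fs_max_size_gb, needed_gb):
    """Sketch of the capacity check above (illustrative helper, not VNX
    code): thin provisioning reserves nothing up front, so extension
    succeeds only if the pool can back the growth within the maximum."""
    if fs_size_gb + needed_gb > fs_max_size_gb:
        return False      # extension would exceed the configured maximum size
    return pool_free_gb >= needed_gb
```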
Use this procedure to simultaneously set the automatic file system extension options when
you are creating the file system:
82 Managing Volumes and File Systems on VNX AVM 7.0
Configuring
1. Create a file system of a specified size, enable automatic file system extension and thin
provisioning, and set the HWM and the maximum file system size simultaneously by
using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<integer>[T|G|M]
pool=<pool> storage=<system_name> -auto_extend {no|yes} -thin {yes|no}
-hwm <50-99>% -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system.
<type> = type of file system.
<integer> = size requested in MB, GB, or TB. The maximum size is 16 TB.
<pool> = name of the storage pool.
<system_name> = attached storage system on which the file system and storage pool reside.
<50-99> = percentage between 50 and 99, at which you want the file system to
automatically extend.
Example:
To create a 10 MB file system of type UxFS from an AVM storage pool, with automatic
extension enabled, and a maximum file system size of 200 MB, HWM of 90 percent, and
thin provisioning enabled, type:
$ nas_fs -name ufs2 -type uxfs -create size=10M pool=clar_r5_performance
-auto_extend yes -thin yes -hwm 90% -max_size 200M
Output:
id = 27
name = ufs2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size=200M,thin=yes
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
Note: When you enable thin provisioning on a new or existing file system, you must also specify
the maximum size to which the file system can automatically extend.
2. Verify the settings for the specific file system after enabling automatic extension by using
this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To verify the settings for file system ufs2 after enabling automatic extension, type:
$ nas_fs -info ufs2
Output:
id = 27
name = ufs2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms =
ro_vdms =
backups = ufs2_snap1,ufs2_snap2
auto_ext = hwm=90%,max_size=200M,thin=yes
deduplication= off
thin_storage = True
tiering_policy= Auto-tier
compressed = False
mirrored = False
ckpts =
stor_devs = APM00042000814-001D,APM00042000814-001A,
APM00042000814-0019,APM00042000814-0016
disks = d20,d12,d18,d10
You can also set the -hwm and -max_size options on each file system with automatic
extension enabled. When enabling thin provisioning on a file system, you must set the
maximum size, but setting the high water mark is optional.
Extend file systems with AVM
Increase the size of a file system nearing its maximum capacity by extending the file system.
You can:
Extend the size of a file system to add space if it has an associated system-defined or
user-defined storage pool. You can also specify the storage system from which to allocate
space. Extend file systems by using storage pools on page 85 provides instructions.
Extend the size of a file system by adding volumes if the file system has an associated
system-defined or user-defined storage pool. Extend file systems by adding volumes to
a storage pool on page 87 provides instructions.
Extend the size of a file system by using a storage pool other than the one used to create
the file system. Extend file systems by using a different storage pool on page 89 provides
instructions.
Extend an existing file system by enabling automatic extension on that file system. Enable
automatic file system extension and options on page 91 provides instructions.
Extend an existing file system by enabling thin provisioning on that file system. Enable
thin provisioning on page 96 provides instructions.
Managing Volumes and File Systems on VNX Manually contains the instructions to extend file
systems manually.
Extend file systems by using storage pools
All file systems created by using the AVM feature have an associated storage pool.
Extend a file system created with either a system-defined storage pool or a user-defined
storage pool by specifying the size and the name of the file system. AVM allocates storage
from the storage pool to the file system. You can also specify the storage system you want
to use. If you do not specify, the last storage system associated with the storage pool is used.
Note: A file system created by using a mapped storage pool can be extended on its existing pool or by
using a compatible mapped storage pool that contains the same disk type.
Use this procedure to extend a file system by size:
1. Check the file system configuration to confirm that the file system has an associated
storage pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Note: If you see a storage pool defined in the output, the file system was created with AVM and
has an associated storage pool.
Example:
To check the file system configuration to confirm that file system ufs1 has an associated
storage pool, type:
$ nas_fs -info ufs1
Output:
id = 27
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = FP1
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
2. Extend the size of the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing
<number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or
in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.
<system_name> = name of the storage system. If you do not specify a storage system, the
default storage system is the one on which the file system resides. If the file system spans
multiple storage systems, the default is any one of the storage systems on which the file
system resides.
Note: The first time you extend the file system without specifying a storage pool, the default storage
pool is the one used to create the file system. If you specify a storage pool that is different from
the one used to create the file system, the next time you extend this file system without specifying
a storage pool, the last pool in the output list is the default.
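The size notation accepts a T, G, or M suffix. As a rough illustration of how the suffixes map to megabytes (to_mb is a hypothetical helper written for this sketch, not part of the VNX CLI):

```shell
# Hypothetical helper: convert the size=<integer>[T|G|M] notation to MB.
# Not a VNX command; binary units (1 GB = 1024 MB) are assumed here.
to_mb() {
  case "$1" in
    *T) echo $(( ${1%T} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 )) ;;
    *M) echo "${1%M}" ;;
  esac
}
to_mb 1T    # 1048576
to_mb 250G  # 256000
to_mb 500M  # 500
```

So, for example, size=250G requests the same amount of space as size=256000M.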
Example:
To extend the size of file system ufs1 by 10 MB, type:
$ nas_fs -xtend ufs1 size=10M pool=clar_r5_performance storage=APM00023700165
Output:
id = 8
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d13,d19,d25,d30,d31,d32,d33
3. Check the size of the file system after extending it to confirm that the size increased by
using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of file system ufs1 after extending it to confirm that the size increased,
type:
$ nas_fs -size ufs1
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Extend file systems by adding volumes to a storage pool
You can extend a file system manually by specifying the volumes to add.
Note: With user-defined storage pools, you can manually create the underlying volumes, including
striping, before adding the volume to the storage pool. Managing Volumes and File Systems on VNX
Manually describes the procedures needed to perform these tasks before creating or extending the file
system.
If you do not specify a storage system when extending the file system, the default storage
system is the one on which the file system resides. If the file system spans multiple storage
systems, the default is any one of the storage systems on which the file system resides.
Use this procedure to extend the file system by adding volumes to the same user-defined
storage pool that was used to create the file system:
1. Check the configuration of the file system to confirm the associated user-defined storage
pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the configuration of file system ufs3 to confirm the associated user-defined
storage pool, type:
$ nas_fs -info ufs3
Output:
id = 27
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
Note: The user-defined storage pool used to create the file system is defined in the output as
pool=marketing.
2. Add volumes to extend the size of a file system by using this command syntax:
$ nas_fs -xtend <fs_name> <volume_name> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system.
<volume_name> = name of the volume to add to the file system.
<pool> = storage pool associated with the file system. It can be user-defined or
system-defined.
<system_name> = name of the storage system on which the file system resides.
Example:
To extend file system ufs3, type:
$ nas_fs -xtend ufs3 v121 pool=marketing storage=APM00023700165
Output:
id = 10
name = ufs3
acl = 0
in_use = False
type = uxfs
volume = v121
pool = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d8,d13,d14
Note: The next time you extend this file system without specifying a storage pool, the last pool in
the output list is the default.
3. Check the size of the file system after extending it to confirm that the size increased by
using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of file system ufs3 after extending it to confirm that the size increased,
type:
$ nas_fs -size ufs3
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Extend file systems by using a different storage pool
You can use more than one storage pool to extend a file system. Ensure that the storage
pools have space allocated from the same storage system to prevent the file system from
spanning more than one storage system.
Note: A file system created by using a mapped storage pool can be extended on its existing pool or by
using a compatible mapped storage pool that contains the same disk type.
Use this procedure to extend the file system by using a storage pool other than the one used
to create the file system:
1. Check the file system configuration to confirm that it has an associated storage pool by
using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the file system configuration to confirm that file system ufs2 has an associated
storage pool, type:
$ nas_fs -info ufs2
Output:
id = 9
name = ufs2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
Note: The storage pool used earlier to create or extend the file system is shown in the output as
associated with this file system.
2. Optionally, extend the file system by using a storage pool other than the one used to
create the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool>
where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing
<number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or
in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.
Example:
To extend file system ufs2 by using a storage pool other than the one used to create the
file system, type:
$ nas_fs -xtend ufs2 size=10M pool=clar_r5_economy
Output:
id = 9
name = ufs2
acl = 0
in_use = False
type = uxfs
volume = v123
pool = clar_r5_performance,clar_r5_economy
member_of = root_avm_fs_group_3,root_avm_fs_group_4
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00033900165-0112
disks = d7,d13,d19,d25
Note: The storage pools used to create and extend the file system now appear in the output. There
is only one storage system from which space for these storage pools is allocated.
3. Check the file system size after extending it to confirm the increase in size by using this
command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of file system ufs2 after extending it to confirm the increase in size,
type:
$ nas_fs -size ufs2
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Enable automatic file system extension and options
You can automatically extend an existing file system created with AVM system-defined or
user-defined storage pools. The file system automatically extends by using space from the
storage system and storage pool with which the file system is associated.
If you set the slice=no option on the file system, the actual file system size might become
larger than the size specified for the file system, exceeding the maximum size. In this case,
you receive a warning and the automatic extension fails. If you do not specify the file system
slice option (-option slice=yes|no) when you create the file system, it defaults to the setting
of the storage pool.
Modify system-defined and user-defined storage pool attributes on page 109 describes the
procedure to modify the default_slice_flag attribute on the storage pool.
Use the -modify option to enable automatic extension on an existing file system. You can
also set the HWM and maximum size.
To enable automatic file system extension and options:
The total used and unused space in the storage pool in megabytes (total_mb).
The total space available from all sources in megabytes that could be added to the storage
pool (potential_mb). For user-defined storage pools, the output for potential_mb is 0
because they must be manually extended and shrunk.
Note: If either non-MB-aligned disk volumes or disk volumes of different sizes are striped together,
truncation of storage might occur. The total amount of space added to a pool might be different than
the total amount taken from potential storage. Total space in the pool includes the truncated space,
but potential storage does not include the truncated space.
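The truncation described in the note can be illustrated with simple arithmetic. The volume sizes below are hypothetical; when unequal volumes are striped together, each member contributes only the size of the smallest member, and the remainder is truncated:

```shell
# Illustrative arithmetic only: striping two unequal disk volumes.
# Each stripe member is truncated to the smallest member's size.
v1=10000   # hypothetical disk volume size in MB
v2=9850    # hypothetical disk volume size in MB
smallest=$v1
if [ "$v2" -lt "$smallest" ]; then smallest=$v2; fi
usable=$(( smallest * 2 ))          # space the stripe actually provides
truncated=$(( v1 + v2 - usable ))   # space lost to truncation
echo "usable=${usable}MB truncated=${truncated}MB"
```

In this sketch the pool gains 19700 MB rather than the 19850 MB taken from potential storage, which matches the note: total pool space includes the truncated space, but potential storage does not.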
In the Unisphere for File software, the potential megabytes in the output represents the total
available storage, including the storage pool. In the VNX for file CLI, the output for
potential_mb does not include the space in the storage pool.
Note: Use the -size -all option to display the size information for all storage pools.
Action
To display the size information for a specific storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance
Output
id = 3
name = clar_r5_performance
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
Action
To display the size information for a specific mapped storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the Pool0 storage pool, type:
$ nas_pool -size Pool0
Output
id = 43
name = Pool0
used_mb = 0
avail_mb = 0
total_mb = 0
potential_mb = 3691
Physical storage usage in Pool Pool0 on APM00101902363
used_mb = 16385
avail_mb = 1632355
total_mb = 1648740
Display size information for Symmetrix storage pools
Use the -size -all option to display the size information for all storage pools.
Action
To display the size information of Symmetrix storage pools, use this command syntax:
$ nas_pool -size <name> -slice y
where:
<name> = name of the storage pool
Example:
To request size information for the Symmetrix symm_std storage pool, type:
$ nas_pool -size symm_std -slice y
Output
id = 5
name = symm_std
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
Note
Use the -slice y option to include any space from sliced volumes in the available result. However, if the default_slice_flag
value is set to no, then sliced volumes do not appear in the output.
The size information for the symm_std storage pool appears in the output. If you have more storage pools, the
output shows the size information for all the storage pools.
avail_mb is the amount of unused available space in the storage pool in megabytes.
total_mb is the total of used and unused space in the storage pool in megabytes.
potential_mb is the potential amount of storage that can be added to the storage pool available from all sources in
megabytes. For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended
and shrunk. In this example, total_mb and potential_mb are the same because the total storage in the storage pool
is equal to the total potential storage available.
If either non-megabyte-aligned disk volumes or disk volumes of different sizes are striped together, truncation of
storage might occur. The total amount of space added to a pool might be different than the total amount taken from
potential storage. Total space in the pool includes the truncated space, but potential storage does not include the
truncated space.
Modify system-defined and user-defined storage pool attributes
System-defined and user-defined storage pools have attributes that control how they manage
the volumes and file systems. Table 7 on page 36 lists the modifiable storage pool attributes,
and their values and descriptions.
You can change the attribute default_slice_flag for system-defined and user-defined storage
pools. The flag indicates whether member volumes can be sliced. If the storage pool has
member volumes built on one or more slices, you cannot set this value to n.
Action
To modify the default_slice_flag for a system-defined or user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To modify a storage pool named marketing and change the default_slice_flag to prevent members of the pool from being
sliced when space is dispensed, type:
$ nas_pool -modify marketing -default_slice_flag n
Output
id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag= False
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members= N/A
stripe_size = N/A
Note
-is_greedy: If this is set to y (greedy), the system-defined storage pool attempts to create
new member volumes before using space from existing member volumes. If this is set
to n (not greedy), the system-defined storage pool consumes all the existing space in the
storage pool before trying to add additional member volumes.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough
free space on the existing volumes that the file system is using. Table 7 on page 36 describes the
is_greedy behavior.
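The is_greedy behavior can be sketched as a simple decision. This is illustrative shell with hypothetical variables, not the actual AVM implementation:

```shell
# Hypothetical sketch of the is_greedy decision; not actual AVM code.
free_mb=500       # unused space on existing member volumes
request_mb=300    # space needed for the new or extended file system
is_greedy=n
if [ "$is_greedy" = "y" ] || [ "$free_mb" -lt "$request_mb" ]; then
  action="create new member volumes"
else
  action="consume existing free space"
fi
echo "$action"
```

With is_greedy set to n, existing free space is consumed first and new member volumes are added only when the existing space is insufficient, which is also why the attribute is ignored on extension when free space runs out.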
The tasks to modify the attributes of a user-defined storage pool are:
-name: Changes the name of the specified user-defined storage pool to the new name.
-acl: Designates an access control level for a user-defined storage pool. The default value
is 0.
-description: Changes the description comment for the user-defined storage pool.
-is_greedy: Identifies which member volumes of a user-defined storage pool are used to
provide space when creating or extending a file system.
Extending a system-defined storage pool by size:
Uses the disk selection algorithms that AVM uses to create system-defined storage pool
members.
Prevents system-defined storage pools from rapidly consuming a large number of disk
volumes.
You can specify the storage system from which to allocate space to the pool. The dynamic
behavior of the system-defined storage pool must be turned off by using the nas_pool
-modify command before extending the pool.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free
space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy
behavior.
On successful completion, the system-defined storage pool extends by at least the specified
size. The storage pool might extend more than the requested size. The behavior is the same
as when the storage pool is extended during a file-system creation.
If a storage system is not specified and the pool has members from a single storage system,
then the default is the existing storage system. If a storage system is not specified and the
pool has members from multiple storage systems, the existing set of storage systems is used
to extend the storage pool.
If a storage system is specified, space is allocated from that system:
The specified pool must have the is_dynamic attribute set to n, or false. Modify
system-defined storage pool attributes on page 110 provides instructions to change the
attribute.
Online Support, locate the applicable Support by Product page, and search for the
error message.
EMC Training and Professional Services
EMC Customer Education courses help you learn how EMC storage products work together
within your environment to maximize your entire infrastructure investment. EMC Customer
Education features online and hands-on training in state-of-the-art labs conveniently located
throughout the world. EMC customer training courses are developed and delivered by EMC
experts. Go to the EMC Online Support website at http://Support.EMC.com for course and
registration information.
EMC Professional Services can help you implement your system efficiently. Consultants
evaluate your business, IT processes, and technology, and recommend ways that you can
leverage your information for the most benefit. From business plan to implementation, you
get the experience and expertise that you need without straining your IT staff or hiring and
training new personnel. Contact your EMC Customer Support Representative for more
information.
Glossary
A
automatic file system extension
Configurable file system feature that automatically extends a file system created or extended
with AVM when the high water mark (HWM) is reached.
See also high water mark.
Automatic Volume Management (AVM)
Feature of VNX for file that creates and manages volumes automatically without manual volume
management by an administrator. AVM organizes volumes into storage pools that can be
allocated to file systems.
See also thin provisioning.
D
disk volume
On a VNX for file, a physical storage unit as exported from the storage system. All other volume
types are created from disk volumes.
See also metavolume, slice volume, stripe volume, and volume.
F
File migration service
Feature for migrating file systems from NFS and CIFS source file servers to the VNX for file.
The online migration is transparent to users once it starts.
file system
Method of cataloging and managing the files and directories on a system.
Fully Automated Storage Tiering (FAST)
Lets you assign different categories of data to different types of storage media within a tiered
pool. Data categories may be based on performance requirements, frequency of use, cost, and
other considerations. The FAST feature retains the most frequently accessed or important data
on fast, high performance (more expensive) drives, and moves the less frequently accessed and
less important data to less-expensive (lower-performance) drives.
H
high water mark (HWM)
Trigger point at which the VNX for file performs one or more actions, such as sending a warning
message, extending a volume, or updating a file system, as directed by the related feature's
software/parameter settings.
L
logical unit number (LUN)
Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the
last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the
term is often used to refer to the logical unit itself.
M
mapped pool
A storage pool that is dynamically created during the normal storage discovery (diskmark)
process for use on the VNX for file. It is a one-to-one mapping with either a VNX storage pool
or a FAST Symmetrix Storage Group. A mapped pool can contain a mix of different types of
LUNs that use any combination of data services (thin, thick, auto-tiering, mirrored, and VNX
compression). However, mapped pools should contain only the same type of LUNs that use
the same data services (all thick, all thin, all the same auto-tiering options, all mirrored or none
mirrored, and all compressed or none compressed) for the best file system performance.
metavolume
On VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes.
Also called a hypervolume or hyper. Every file system must be created on top of a unique
metavolume.
See also disk volume, slice volume, stripe volume, and volume.
S
slice volume
On VNX for file, a logical piece or specified area of a volume used to create smaller, more
manageable units of storage.
See also disk volume, metavolume, stripe volume, and volume.
storage pool
Groups of available disk volumes organized by AVM that are used to allocate available storage
to file systems. They can be created automatically by AVM or manually by the user.
See also Automatic Volume Management (AVM).
stripe volume
Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across
the volume and are addressed in an interlaced manner. Stripe volumes make load balancing
possible.
See also disk volume, metavolume, and slice volume.
system-defined storage pool
Predefined AVM storage pools that are set up to help you easily manage both storage volume
structures and file system provisioning by using AVM.
T
thin provisioning
Configurable VNX for file feature that lets you allocate storage based on long-term projections,
while you dedicate only the file system resources that you currently need. NFS or CIFS clients
and applications see the virtual maximum size of the file system of which only a portion is
physically allocated.
See also Automatic Volume Management.
U
Universal Extended File System (UxFS)
High-performance, VNX for file default file system, based on traditional Berkeley UFS, enhanced
with 64-bit support, metadata logging for high availability, and several performance
enhancements.
user-defined storage pools
User-created storage pools containing volumes that are manually added. User-defined storage
pools provide an appropriate option for users who want control over their storage volume
structures while still using the automated file system provisioning functionality of AVM to
provision file systems from the user-defined storage pools.
V
volume
On VNX for file, a virtual disk into which a file system, database management system, or other
application places data. A volume can be a single disk partition or multiple partitions on one
or more physical drives.
See also disk volume, metavolume, slice volume, and stripe volume.
Index
A
algorithm
automatic file system extension 58
Symmetrix 47
system-defined storage pools 39
VNX for block 42
attributes
storage pool, modify 109, 110, 113
storage pools 36
system-defined storage pools 110
user-defined storage pools 113
automatic file system extension
algorithm 58
and VNX Replicator interoperability
considerations 59
considerations 63
enabling 70
how it works 27
maximum size option 81
maximum size, set 95
options 26
restrictions 12
thin provisioning 96
Automatic Volume Management (AVM)
restrictions 11
storage pool 27
C
cautions 14, 15
spanning storage systems 14
character support, international 15
checkpoint, create for file system 100
clar_r1 storage pool 31
clar_r5_economy storage pool 31
clar_r5_performance storage pool 31
clar_r6 storage pool 31
clarata_archive storage pool 31
clarata_r10 storage pool 32
clarata_r3 storage pool 32
clarata_r6 storage pool 32
clarefd_r10 storage pool 32
clarefd_r5 storage pool 32
clarsas_archive storage pool 32
clarsas_r10 storage pool 32
clarsas_r6 storage pool 32
cm_r1 storage pool 32
cm_r5_economy storage pool 32
cm_r5_performance storage pool 32
cm_r6 storage pool 32
cmata_archive storage pool 32
cmata_r10 storage pool 33
cmata_r3 storage pool 33
cmata_r6 storage pool 33
cmefd_r10 storage pool 33
cmefd_r5 storage pool 33
cmsas_archive storage pool 33
cmsas_r10 storage pool 33
cmsas_r6 storage pool 33
considerations
automatic file system extension 63
interoperability 59
create a file system 70, 72, 74
using system-defined pools 72
using user-defined pools 74
D
data service policy
removing from storage group 15
delete user-defined storage pools 123
details, display 105
display
details 105
display (continued)
size information 106
E
EMC E-Lab Navigator 126
error messages 127
extend file systems
by size 85
by volume 87
with different storage pool 89
extend storage pools
system-defined by size 121
user-defined by size 119
user-defined by volume 118
F
FAST capacity algorithm and striping 16
file system
create checkpoint 100
extend by size 85
extend by volume 87
quotas 15
file system considerations 63
I
international character support 15
K
known problems and limitations 126
L
legacy CLARiiON and deleting thin items 15
M
masking option and moving LUNs 16
messages, error 127
migrating LUNs 16
modify system-defined storage pools 110
P
planning considerations 59
profiles, volume and storage 39
Q
quotas for file system 15
R
RAID group combinations 34
related information 22
restrictions 11, 12, 13, 14, 15
automatic file system extension 12
AVM 11
Symmetrix volumes 11
thin provisioning 13
TimeFinder/FS 15
VNX for block 14
S
storage pools
attributes 48
clar_r1 31
clar_r5_economy 31
clar_r5_performance 31
clar_r6 31
clarata_archive 31
clarata_r10 32
clarata_r3 32
clarata_r6 32
clarefd_r10 32
clarefd_r5 32
clarsas_archive 32
clarsas_r10 32
clarsas_r6 32
cm_r1 32
cm_r5_economy 32
cm_r5_performance 32
cm_r6 32
cmata_archive 32
cmata_r10 33
cmata_r3 33
cmata_r6 33
cmefd_r10 33
cmefd_r5 33
cmsas_archive 33
cmsas_r10 33
cmsas_r6 33
delete user-defined 123
display details 105
display size information 106
explanation 27
extend system-defined by size 121
storage pools (continued)
extend user-defined by size 119
extend user-defined by volume 118
list 104
modify attributes 109
remove volumes from user-defined 122
supported types 30
symm_ata 31
symm_ata_rdf_src 31
symm_ata_rdf_tgt 31
symm_efd 31
symm_std 31
symm_std_rdf_src 31
symm_std_rdf_tgt 31
system-defined algorithms 39
system-defined Symmetrix 47
system-defined VNX for block 40
symm_ata storage pool 31
symm_ata_rdf_src storage pool 31
symm_ata_rdf_tgt storage pool 31
symm_efd storage pool 31
symm_std storage pool 31
symm_std_rdf_src storage pool 31
symm_std_rdf_tgt storage pool 31
Symmetrix and deleting thin items 15
Symmetrix pool, insufficient space 17
system-defined storage pools 39, 72, 85, 87, 110
algorithms 39
system-defined storage pools (continued)
create a file system with 72
extend file systems by size 85
extend file systems by volume 87
T
thin provisioning, out of space message 16
troubleshooting 125
U
Unicode characters 15
upgrade software 63
user-defined storage pools 74, 85, 87, 113, 122
create a file system with 74
extend file systems by size 85
extend file systems by volume 87
modify attributes 113
remove volumes 122
V
VNX for block pool, insufficient space 17
VNX upgrade
automatic file system extension issue 15