
EMC VNX Series
Release 7.0

Managing Volumes and File Systems with VNX AVM

P/N 300-011-806
REV A02

EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Copyright 1998 - 2011 EMC Corporation. All rights reserved.
Published September 2011
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.
For the most up-to-date regulatory document for your product line, go to the Technical
Documentation and Advisories section on EMC Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks used herein are the property of their respective owners.
Contents

Preface.....................................................................................................7

Chapter 1: Introduction...........................................................................9
    Overview................................................................................................10
    System requirements.............................................................................10
    Restrictions.............................................................................................11
        AVM restrictions..........................................................................11
        Automatic file system extension restrictions...........................12
        Thin provisioning restrictions...................................................13
        VNX for block system restrictions............................................14
    Cautions..................................................................................................14
    User interface choices...........................................................................17
    Related information..............................................................................22

Chapter 2: Concepts.............................................................................23
    AVM overview.......................................................................................24
    System-defined storage pools overview............................................24
    Mapped storage pools overview.........................................................25
    User-defined storage pools overview.................................................26
    File system and automatic file system extension overview............26
    AVM storage pool and disk type options..........................................27
        AVM storage pools......................................................................27
        Disk types.....................................................................................27
        System-defined storage pools....................................................30
        RAID groups and storage characteristics.................................33
        User-defined storage pools........................................................35
    Storage pool attributes..........................................................................35
    System-defined storage pool volume and storage profiles.............39
        VNX for block system-defined storage pool algorithms.......40
        VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support...43
        VNX for block system-defined storage pools for Flash support...45
        Symmetrix system-defined storage pools algorithm.............46
        VNX for block primary pool-based file system algorithm....48
        VNX for block secondary pool-based file system algorithm...50
        Symmetrix mapped pool file systems......................................51
    File system and storage pool relationship.........................................53
    Automatic file system extension.........................................................55
    Thin provisioning..................................................................................59
    Planning considerations.......................................................................59

Chapter 3: Configuring.........................................................................65
    Configure disk volumes.......................................................................66
        Provide storage from a VNX or legacy CLARiiON system to a gateway system...67
        Create pool-based provisioning for file storage systems.......68
        Add disk volumes to an integrated system.............................70
    Create file systems with AVM.............................................................70
        Create file systems with system-defined storage pools.........72
        Create file systems with user-defined storage pools..............74
        Create the file system..................................................................78
        Create file systems with automatic file system extension.....81
        Create file systems with the automatic file system extension option enabled...82
    Extend file systems with AVM............................................................84
        Extend file systems by using storage pools.............................85
        Extend file systems by adding volumes to a storage pool....87
        Extend file systems by using a different storage pool...........89
        Enable automatic file system extension and options.............91
        Enable thin provisioning............................................................96
        Enable automatic extension, thin provisioning, and all options simultaneously...98
    Create file system checkpoints with AVM.......................................100

Chapter 4: Managing..........................................................................103
    List existing storage pools..................................................................104
    Display storage pool details...............................................................105
    Display storage pool size information..............................................106
        Display size information for Symmetrix storage pools........108
    Modify system-defined and user-defined storage pool attributes...109
        Modify system-defined storage pool attributes....................110
        Modify user-defined storage pool attributes.........................113
    Extend a user-defined storage pool by volume...............................118
    Extend a user-defined storage pool by size.....................................119
    Extend a system-defined storage pool.............................................120
        Extend a system-defined storage pool by size.......................121
    Remove volumes from storage pools...............................................122
    Delete user-defined storage pools....................................................123
        Delete a user-defined storage pool and its volumes.............124

Chapter 5: Troubleshooting................................................................125
    AVM troubleshooting considerations...............................................126
    EMC E-Lab Interoperability Navigator............................................126
    Known problems and limitations......................................................126
    Error messages.....................................................................................127
    EMC Training and Professional Services.........................................128

Glossary................................................................................................129

Index.....................................................................................................133
Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.
Special notice conventions

EMC uses the following conventions for special notices:

Note: Emphasizes content that is of exceptional importance or interest but does not relate to personal injury or business/data loss.

NOTICE: Identifies content that warns of potential business or data loss.

CAUTION: Indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

WARNING: Indicates a hazardous situation which, if not avoided, could result in death or serious injury.

DANGER: Indicates a hazardous situation which, if not avoided, will result in death or serious injury.
Where to get help

EMC support, product, and licensing information can be obtained as follows:

Product information: For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support website (registration required) at http://Support.EMC.com.

Troubleshooting: Go to the EMC Online Support website. After logging in, locate the applicable Support by Product page.

Technical support: For technical support and service requests, go to EMC Customer Service on the EMC Online Support website. After logging in, locate the applicable Support by Product page, and choose either Live Chat or Create a service request. To open a service request through EMC Online Support, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Note: Do not request a specific support representative unless one has already been assigned to your particular system problem.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications.
Please send your opinion of this document to:
techpubcomments@EMC.com
Chapter 1: Introduction

Topics included are:

◆ Overview on page 10
◆ System requirements on page 10
◆ Restrictions on page 11
◆ Cautions on page 14
◆ User interface choices on page 17
◆ Related information on page 22
Overview

Automatic Volume Management (AVM) is an EMC VNX feature that automates volume creation and management. By using the VNX command options and interfaces that support AVM, system administrators can create and extend file systems without creating and managing the underlying volumes.

The automatic file system extension feature automatically extends file systems created with AVM when the file systems reach their specified high water mark (HWM). Thin provisioning works with automatic file system extension and allows the file system to grow on demand. With thin provisioning, the space presented to the user or application is the maximum size setting, while only a portion of that space is actually allocated to the file system.

This document is part of the VNX documentation set and is intended for use by system administrators responsible for creating and managing volumes and file systems by using AVM.
System requirements

Table 1 on page 10 describes the EMC VNX series software, hardware, network, and storage configurations.

Table 1. System requirements

Software:  VNX series version 7.0
Hardware:  No specific hardware requirements
Network:   No specific network requirements
Storage:   Any VNX-qualified storage system
Restrictions

The restrictions listed in this section are applicable to AVM, automatic file system extension, the thin provisioning feature, and the EMC VNX for block system.
AVM restrictions

The restrictions applicable to AVM are as follows:

◆ Create a file system by using only one storage pool. If you need to extend a file system, extend it by using either the same storage pool or by using another compatible storage pool. Do not extend a file system across storage systems unless it is absolutely necessary.
◆ File systems might reside on multiple disk volumes. Ensure that all disk volumes used by a file system reside on the same storage system for file system creation and extension. This is to protect against storage system and data unavailability.
◆ RAID 3 is supported only with EMC VNX Capacity disk volumes.
◆ When building volumes on a VNX for file attached to an EMC Symmetrix storage system, use regular Symmetrix volumes (also called hypervolumes), not Symmetrix metavolumes.
◆ Use AVM to create the primary EMC TimeFinder/FS (NearCopy or FarCopy) file system, if the storage pool attributes indicate that no sliced volumes are used in that storage pool.
◆ AVM does not support business continuance volumes (BCVs) in a storage pool with other disk types.
◆ AVM storage pools must contain only one disk type. Disk types cannot be mixed. Table 4 on page 28 provides a complete list of disk types. Table 5 on page 31 provides a list of storage pools and the description of the associated disk types.
◆ LUNs that have been added to the file-based storage group are discovered during the normal storage discovery (diskmark) and mapped to their corresponding storage pools on the VNX for file. If a pool is encountered with the same name as an existing user-defined pool or system-defined pool from the same VNX for block system, diskmark will fail. It is possible to have duplicate pool names on different VNX for block systems, but not on the same VNX for block system.
◆ Names of pools mapped from a VNX for block system to a VNX for file cannot be modified.
◆ A user cannot manually delete a mapped pool. Mapped storage pools overview on page 25 provides a description of a mapped storage pool.
◆ For VNX for file, a storage pool cannot contain both mirrored and non-mirrored LUNs. If diskmark discovers both mirrored and non-mirrored LUNs, diskmark will fail. Also, data may be unavailable or lost during failovers.
◆ The VNX for file control volumes (LUNs 0 through 5) must be thick devices and use the same data service policies. Otherwise, the NAS software installation will fail.
Automatic file system extension restrictions

The restrictions applicable to automatic file system extension are as follows:

◆ Automatic file system extension does not work on a Migration File System (MGFS), which is the EMC file system type used while performing data migration from either a Common Internet File System (CIFS) or network file system (NFS) to the VNX system by using VNX File System Migration (also known as CDMS).
◆ Automatic extension is not supported on file systems created with manual volume management. You can enable automatic file system extension on the file system only if it is created or extended by using an AVM storage pool.
◆ Automatic extension is not supported on file systems used with TimeFinder/FS NearCopy or FarCopy.
◆ While automatic file system extension is running, the Control Station blocks all other commands that apply to this file system. When the extension is complete, the Control Station allows the commands to run.
◆ The Control Station must be running and operating properly for automatic file system extension, or any other VNX feature, to work correctly.
◆ Automatic extension cannot be used for any file system that is part of a remote data facility (RDF) configuration. Do not use the nas_fs command with the -auto_extend option for file systems associated with RDF configurations. Doing so generates the error message: Error 4121: operation not supported for file systems of type EMC SRDF.
◆ The options associated with automatic extension can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the automatic file system extension, HWM, or maximum size options.
◆ Enabling automatic file system extension and thin provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. When there is not enough storage space available to extend the file system to the requested size, the file system extends to use all the available storage.
  For example, if automatic extension requires 6 GB but only 3 GB are available, the file system automatically extends to 3 GB. Although the file system was partially extended, an error message appears to indicate that there was not enough storage space available to perform automatic extension. When there is no available storage, automatic extension fails. You must manually extend the file system to recover from this issue.
◆ Automatic file system extension is supported with EMC VNX Replicator. Enable automatic extension only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable automatic extension on the destination file system.
◆ When using automatic extension and thin provisioning, you can create replicated copies of extendible file systems, but to do so, use slice volumes (slice=y).
◆ You cannot create iSCSI thick LUNs on file systems that have automatic extension enabled. You cannot enable automatic extension on a file system if there is a storage mode iSCSI LUN present on the file system. You will receive an error, "Error 2216: <fs_name>: item is currently in use by iSCSI." However, iSCSI virtually provisioned LUNs are supported on file systems with automatic extension enabled.
◆ Automatic extension is not supported on the root file system of a Data Mover or on the root file system of a Virtual Data Mover (VDM).
Thin provisioning restrictions

The restrictions applicable to the thin provisioning feature are as follows:

◆ VNX for file supports thin provisioning on Symmetrix DMX-4 and legacy CLARiiON CX4 and CX5 disk volumes.
◆ The options associated with thin provisioning can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the thin provisioning, HWM, or maximum size options.
◆ Do not use VNX for file thin provisioned objects (iSCSI LUNs or iSCSI file systems) with Symmetrix or VNX for block thin provisioned devices. A single file system should not span virtual and regular Symmetrix or VNX for block volumes. Use only one layer of thin provisioning, either on the Symmetrix or VNX for block storage system, or on the VNX for file, but not on both. If the user attempts to create VNX for file thin provisioned objects with Symmetrix or VNX for block thin provisioned devices, the Data Mover generates an error similar to the following: "VNX for File thin provisioning and VNX for Block or Symmetrix thin provisioning cannot coexist on a file system."
◆ Thin provisioning is supported with VNX Replicator. Enable thin provisioning only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable thin provisioning on the destination file system.
◆ When using automatic file system extension and thin provisioning, you can create replicated copies of extendible file systems, but to do so, use slice volumes (slice=y).
◆ With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the VNX Replicator destination file system while they see the virtually provisioned maximum size of the source file system. Interoperability considerations on page 59 provides more information on using automatic file system extension with VNX Replicator.
◆ Thin provisioning is supported on the primary file system, but not supported with primary file system checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of any EMC SnapSure checkpoint file system.
◆ If a file system is created by using a virtual storage pool, the -thin option of the nas_fs command cannot be enabled. VNX for file thin provisioning and VNX for block thin provisioning cannot coexist on a file system.
VNX for block system restrictions

The restrictions applicable to VNX for block systems are as follows:

◆ Use RAID group-based LUNs instead of pool-based LUNs to create system control LUNs. Pool-based LUNs can be created as thin LUNs or converted to thin LUNs at any time. A thin control LUN could run out of space and lead to a Data Mover panic.
◆ VNX for block mapped pools support only RAID 5, RAID 6, and RAID 1/0:
  • RAID 5 is the default RAID type, with a minimum of three drives (2+1). Use multiples of five drives.
  • RAID 6 has a minimum of four drives (2+2). Use multiples of eight drives.
  • RAID 1/0 has a minimum of two drives (1+1).
◆ EMC Unisphere is required to provision virtual devices (thin and thick LUNs) on the VNX for block system. Any platforms that do not provide Unisphere access cannot use this feature.
◆ You cannot mix mirrored and non-mirrored LUNs in the same VNX for block system pool. You must separate mirrored and non-mirrored LUNs into different storage pools on VNX for block systems. If diskmark discovers both mirrored and non-mirrored LUNs, diskmark will fail.
Cautions

If any of this information is unclear, contact your EMC Customer Support Representative for assistance:

◆ Do not span a file system (including checkpoint file systems) across multiple storage systems. All parts of a file system must use the same disk volume type and be stored on a single storage system. Spanning more than one storage system increases the chance of data loss, data unavailability, or both. One storage system could fail while the other continues, and thus make failover difficult. In this case, the targets might not be consistent. In addition, a spanned file system is subject to any performance and feature set differences between storage systems.
◆ If you plan to set quotas on a file system to control the amount of space that users and groups can consume, turn on quotas immediately after creating the file system. Using Quotas on VNX contains instructions on turning on quotas and general quotas information.
◆ If your user environment requires international character support (that is, support of non-English character sets or Unicode characters), configure the VNX system to support this feature before creating file systems. Using International Character Sets with VNX contains instructions to support and configure international character support on a VNX system.
◆ If you plan to create TimeFinder/FS (local, NearCopy, or FarCopy) snapshots, do not use slice volumes (nas_slice) when creating the production file system (PFS). Instead, use the full portion of the disk presented to the VNX system. Using slice volumes for a PFS slated as the source for snapshots wastes storage space and can result in loss of PFS data.
◆ Automatic file system extension is interrupted during VNX system software upgrades. If automatic extension is enabled, the Control Station continues to capture the HWM events, but the actual file system extension does not start until the VNX system upgrade process completes.
◆ Closely monitor VNX for block pool space that contains pool LUNs to ensure that there is enough space available. Use the nas_pool -size <AVM pool name> command and look for the physical usage information. An alert is generated when a VNX for block pool reaches the user-defined threshold level.
◆ Deleting a thin file system or a thin disk volume does not release any space on a system.
  • To release the space in a thin pool on the Symmetrix storage system, unbind the LUN by using the symconfigure command.
  • To release the space in a thin pool on either a VNX or a legacy CLARiiON system, unbind the LUN by using the nas_disk -delete -perm -unbind command.
◆ Before removing a data service policy from a Fully Automated Storage Tiering (FAST) Symmetrix Storage Group that is already mapped to a VNX for file storage pool and is in use with multiple tiers, to prevent an error from occurring on the VNX for file, you must do one of the following:
  • Configure a single tier policy with the disk type wanted and allow the FAST engine to move the disks. Once the disks are moved to the same tier, remove the data service policy from the Symmetrix Storage Group and run diskmark.
  • Use the Symmetrix nondisruptive LUN migration utility to ensure that every file system is built on top of a single type of disk.
  • Migrate data through NFS or CIFS by using either VNX Replicator, the CLI nas_copy command, file system migration, or a third-party vendor's migration software.
◆ The Flash BCV (BCVE), R1EFD, R2EFD, R1BCVE, or R2BCVE standalone disk types are not supported on a VNX for file. However, a VNX for file supports using a FAST policy that contains a Flash tier as long as the FAST policy contains multiple tiers. When you need to remove a FAST policy that contains a Flash tier from the VNX for file Storage Group, an error will occur if the Flash technology is used in BCV, R1, or R2 devices. The nas_diskmark -mark -all operation cannot set disk types of BCVE, R1EFD, R2EFD, R1BCVE, or R2BCVE. To prevent an error from occurring, do one of the following:
  • Configure a single tier policy by using either FC or ATA disks, and allow the FAST engine to move the Flash disks to the selected type.
  • Use the Symmetrix nondisruptive LUN migration utility to ensure that the file system is built on top of a single type of disk, either FC or SATA.
◆ VNX thin provisioning allows you to specify a value above the maximum supported storage capacity for the system. If an alert message indicates that you are running out of space, or if you reach the system's storage capacity limits and have virtually provisioned resources that are not fully allocated, you may need to do one of the following:
  • Delete unnecessary data.
  • Enable VNX File Deduplication and Compression to try to reduce file system storage usage.
  • Migrate data to a different system that has space.
◆ Closely monitor Symmetrix pool space that contains pool LUNs to ensure that there is enough space available. Use the command /usr/symcli/bin/symcfg list -pool -thin -all to display pool usage.
◆ If the masking option is being used, moving LUNs between Symmetrix Storage Groups can cause file system disruption. If the LUNs need to be moved frequently between FAST Storage Groups for various performance requests, you can create separate FAST Storage Groups and Masking Storage Groups to avoid disruptions. A single LUN can belong to both a FAST Storage Group and a Masking Storage Group.
◆ The Symmetrix FAST capacity algorithm does not consider striping on the file system side. The algorithm may mix different technologies in the same striping volume, which can affect performance until the performance algorithm optimizes it. The initial configuration of the striping volumes is very important to ensure that the performance is maximized even before the initial data move is completed by the FAST engine. For example, a FAST policy contains 50 percent Performance disk volumes and 50 percent Capacity disk volumes, and the storage group has 16 disk volumes. The initial configuration should be 1 striping meta volume with 8 Performance disk volumes and 1 striping meta volume with 8 Capacity disk volumes, instead of 4 Performance disk volumes and 4 Capacity disk volumes in the same striping meta volume. The same point needs to be considered when the FAST policy is changed or devices are added to or removed from the FAST storage group. AVM will try to use the same technology in the striping meta volume.
◆ If you are using Symmetrix or legacy CLARiiON systems, and you need to migrate a LUN that is in a VNX for file storage group, the size of the target LUN must be the same size as the source LUN or data unavailability and data loss may occur. For better performance and improved space usage, ensure that the target LUN is a newly-created LUN with no existing data.
◆ Insufficient space on a Symmetrix pool that contains pool LUNs might result in a Data Mover panic and data unavailability. To avoid this situation, pre-allocate 100 percent of the TDEV when binding it to the pool. If you do not allocate 100 percent, there is the possibility of overallocation. Closely monitor the pool usage.
◆ Insufficient space on a VNX for block system pool that contains thin LUNs might result in a Data Mover panic and data unavailability. You cannot pre-allocate space on a VNX for file storage pool. Closely monitor the pool usage to avoid running out of space.
◆ You can use FAST thin LUNs to configure the SnapSure checkpoint SavVol. However, insufficient space in a storage pool might result in a Data Mover panic or data unavailability. Closely monitor the pool usage to avoid running out of space.
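As a concrete illustration of the monitoring advice above, the following Control Station commands check remaining capacity. This is a minimal sketch: the pool name clarsas_archive is only an illustrative placeholder, and the output fields vary by release.

$ nas_pool -size clarsas_archive
$ /usr/symcli/bin/symcfg list -pool -thin -all

The first command reports the pool's used and available capacity (on pools containing pool LUNs, look for the physical usage information). The second, on Symmetrix-attached systems, displays thin pool usage.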
User interface choices

The system offers flexibility in managing networked storage that is based on your support environment and interface preferences. This document describes how to use AVM by using the command line interface (CLI). You can also perform many of these tasks by using one of the system's management applications:

◆ EMC Unisphere software
◆ Celerra Monitor
◆ Microsoft Management Console (MMC) snap-ins
◆ Active Directory Users and Computers (ADUC) extensions

The Unisphere software online help contains additional information about managing your system.

Installing Management Applications on VNX for File includes instructions on launching the Unisphere software, and on installing the MMC snap-ins and the ADUC extensions.

The VNX Release Notes contain additional, late-breaking information about system management applications.

Table 2 on page 17 identifies the storage pool tasks that you can perform in each interface, and the command syntax or the path to the Unisphere software page to use to perform the task. Unless otherwise noted in the task, the operations apply to user-defined and system-defined storage pools. The VNX Command Line Interface Reference for File contains more information on the commands described in Table 2 on page 17.
Table 2. Storage pool tasks supported by user interface

Task: Create a new user-defined storage pool by volumes.
Note: This task applies to user-defined storage pools only.
Control Station CLI:
  nas_pool -create -name <name> -volumes <volumes>
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Create.

Task: Create a new user-defined storage pool by size.
Note: This task applies to user-defined storage pools only.
Control Station CLI:
  nas_pool -create -name <name> -size <integer>[M|G|T]
  -template <system_pool_name> -num_stripe_members <num>
  -stripe_size <num>
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Create.

Task: Create a new user-defined storage pool with the is_greedy attribute.
Specifying n (default) tells the system to use space from the user-defined storage pool's existing member volumes in the order that the volumes were added to the pool to create a new file system or extend an existing file system.
Specifying y tells the system to use space from the least-used member volume of the user-defined storage pool to create a new file system. When there is more than one least-used member volume available, AVM selects the member volume that contains the most disk volumes. For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If there are two or more member volumes that have the same number of disk volumes, AVM selects the one with the lowest ID.
Note: This task applies to user-defined storage pools only.
Control Station CLI:
  nas_pool -create -name <name> -is_greedy [y|n]
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Create.

Task: List existing storage pools.
Control Station CLI:
  nas_pool -list
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File.

Task: Display storage pool details.
Note: When you perform this operation, the total_potential_mb option represents the total available storage, including the storage pool.
Control Station CLI:
  nas_pool -info <name>
Note: When you perform this operation, the total_potential_mb option does not include the space in the storage pool in the output.
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Properties.

Task: Display storage pool size information.
Control Station CLI:
  nas_pool -size <name>
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and view the Storage Capacity and Storage Used (%) columns.

Task: Specify whether AVM uses slice volumes or entire unused disk volumes from the storage pool to create or extend a file system.
Control Station CLI:
  nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Select or clear Slice Pool Volumes by Default? as required.

Task: Specify whether AVM extends the storage pool automatically with unused disk volumes whenever the pool needs more space.
Note: This task applies to system-defined storage pools only.
Control Station CLI:
  nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Select or clear Automatic Extension Enabled as required.

Task: Modify a system-defined storage pool with the is_greedy attribute.
Specifying y tells AVM to allocate new, unused disk volumes to the system-defined storage pool when creating or extending a file system, even if there is available space in the pool.
Specifying n tells AVM to allocate all available system-defined storage pool space to create or extend a file system before adding volumes to the pool.
When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
Note: This task applies to system-defined storage pools only.
Control Station CLI:
  nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Select or clear Obtain Unused Disk Volumes as required.

Task: Modify a user-defined storage pool with the is_greedy attribute.
Specifying n (default) tells the system to use space from the user-defined storage pool's existing member volumes in the order that the volumes were added to the pool to create a new file system.
Specifying y tells the system to use space from the least-used member volume in the user-defined storage pool to create a new file system. When there is more than one least-used member volume available, AVM selects the member volume that contains the most disk volumes. For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If there are two or more member volumes that have the same number of disk volumes, AVM selects the one with the lowest ID.
When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
Note: This task applies to user-defined storage pools only.
Control Station CLI:
  nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Select or clear Obtain Unused Disk Volumes as required.

Task: Add volumes to a user-defined storage pool.
Note: This task applies to user-defined storage pools only.
Control Station CLI:
  nas_pool -xtend {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to extend, and click Extend. Select one or more volumes to add to the pool.

Task: Extend a storage pool by size and specify a storage system from which to allocate storage.
Note: This task applies to system-defined storage pools only when the is_dynamic attribute for the storage pool is set to n.
Control Station CLI:
  nas_pool -xtend {<name>|id=<id>} -size <integer>[M|G|T] -storage <system_name>
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to extend, and click Extend. Select the Storage System to be used to extend the file system, and type the size requested in MB, GB, or TB.
Note: The drop-down list shows all the available storage systems. The volumes shown are only those created on the storage system that is highlighted.

Task: Remove volumes from a storage pool.
Control Station CLI:
  nas_pool -shrink {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...] [-deep]
The -deep setting is optional, and is used to recursively remove all members.
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to shrink, and click Shrink. Select one or more volumes that are not in use to be removed from the pool.

Task: Delete a storage pool.
Note: This task applies to user-defined storage pools only.
Control Station CLI:
  nas_pool -delete {<name>|id=<id>} [-deep]
The -deep setting is optional, and is used to recursively remove all members.
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to delete, and click Delete.

Task: Change the name of a storage pool.
Note: This task applies to user-defined storage pools only.
Control Station CLI:
  nas_pool -modify {<name>|id=<id>} -name <name>
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Type the new name in the Name text box.

Task: Create a file system with automatic file system extension enabled.
Control Station CLI:
  $ nas_fs -name <name> -type <type> -create pool=<pool>
  storage=<system_name> {size=<integer>[T|G|M]}
  -auto_extend {no|yes}
Unisphere software: Select Storage > Storage Configuration > Storage Pools for File, and click Create. Select Automatic Extension Enabled.
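For example, the last row of Table 2 can be combined into a single command line to create a file system that extends automatically. This is a minimal sketch built from the syntax above; the file system name fs01, the type uxfs, the pool name clar_r5_performance, and the size are illustrative placeholders, and <system_name> must be replaced with the name of the attached storage system:

$ nas_fs -name fs01 -type uxfs -create pool=clar_r5_performance storage=<system_name> size=100G -auto_extend yes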
Related information

Specific information related to the features and functionality described in this guide are included in:

◆ VNX Command Line Interface Reference for File
◆ Parameters Guide for VNX for File
◆ Configuring NDMP Backups to Disk on VNX
◆ Controlling Access to System Objects on VNX
◆ Managing Volumes and File Systems for VNX Manually
◆ Online VNX man pages

EMC VNX documentation on the EMC Online Support website

The complete set of EMC VNX series customer publications is available on the EMC Online Support website. To search for technical documentation, go to http://Support.EMC.com. After logging in to the website, click the VNX Support by Product page to locate information for the specific feature required.

VNX wizards

Unisphere software provides wizards for performing setup and configuration tasks. The Unisphere online help provides more details on the wizards.
Chapter 2: Concepts

Topics included are:

◆ AVM overview on page 24
◆ System-defined storage pools overview on page 24
◆ Mapped storage pools overview on page 25
◆ User-defined storage pools overview on page 26
◆ File system and automatic file system extension overview on page 26
◆ AVM storage pool and disk type options on page 27
◆ Storage pool attributes on page 35
◆ System-defined storage pool volume and storage profiles on page 39
◆ File system and storage pool relationship on page 53
◆ Automatic file system extension on page 55
◆ Thin provisioning on page 59
◆ Planning considerations on page 59
AVM overview

The AVM feature automatically creates and manages file system storage. AVM is storage-system independent and supports existing requirements for automatic storage allocation (SnapSure, SRDF, and IP replication).

You can configure file systems created with AVM to automatically extend. The automatic extension feature enables you to configure a file system so that it extends automatically, without system administrator intervention, to support file system operations. Automatic extension causes the file system to extend when it reaches the specified usage point, the HWM, as described in Automatic file system extension on page 55. You set the size for the file system you create, and also the maximum size to which you want the file system to grow. The thin provisioning option lets you present the maximum size of the file system to the user or application, of which only a portion is actually allocated. Thin provisioning allows the file system to slowly grow on demand as the data is written.

Note: Enabling the thin provisioning option with automatic extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, then automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free storage space in the file system.

File systems support the following FAST data service policies:

◆ For VNX for block systems: thin LUNs and thick LUNs, compression, auto-tiering, and mirroring (EMC MirrorView or RecoverPoint).
◆ For Symmetrix systems: thin LUNs and thick LUNs, auto-tiering, and R1, R2, or BCV disk volumes.

To create file systems, use one or more types of AVM storage pools:

◆ System-defined storage pools
◆ User-defined storage pools
System-defined storage pools overview

System-defined storage pools are predefined and available with the VNX system. You cannot create or delete these predefined storage pools. You can modify some of the attributes of the system-defined storage pools, but this is unnecessary.

AVM system-defined storage pools do not preclude the use of user-defined storage pools or manual volume and file system management, but instead give system administrators a simple volume and file system management tool. With command options and interfaces that support AVM, you can use system-defined storage pools to create and extend file systems without manually creating and managing stripe volumes, slice volumes, or metavolumes. If your applications do not require precise placement of file systems on particular disks or on particular locations on specific disks, using AVM is an efficient way for you to create file systems.

Flash drives behave differently than Performance or Capacity drives. AVM uses different logic to configure file systems on Flash drives. To configure Flash drives for maximum performance, AVM may select more disk volumes than are needed to satisfy the requested capacity. While the individual disk volumes are no longer available for manual volume management, the unused Flash drive space is still available for creating additional file systems or extending existing file systems. VNX for block system-defined storage pools for Flash support on page 45 contains additional information about using Flash drives.

AVM system-defined storage pools are adequate for most high availability and performance considerations. Each system-defined storage pool manages the details of allocating storage to file systems. When you create a file system by using AVM system-defined storage pools, storage is automatically allocated from the pool to the new file system. After the storage is allocated from that pool, the storage pool can dynamically grow and shrink to meet the file system needs.
Mapped storage pools overview

A mapped pool is a storage pool that is dynamically created during the normal storage discovery (diskmark) process for use on the VNX for file. It is a one-to-one mapping with either a VNX storage pool or a FAST Symmetrix Storage Group. A mapped pool can contain a mix of different types of LUNs that use any combination of data services:

◆ thin
◆ thick
◆ auto-tiering
◆ mirrored
◆ VNX compression

However, ensure that the mapped pool contains only the same type of LUNs that use the same data services for the best file system performance:

◆ all thick
◆ all thin
◆ all the same auto-tiering options
◆ all mirrored or none mirrored
◆ all compressed or none compressed

If a mapped pool is not in use and no LUNs exist in the file-based storage group that corresponds to the pool, the pool will be deleted automatically during diskmark.

VNX for block data services can be configured at the LUN level. When creating a file system with mapped pools, the default slice option is set to no to help prevent inconsistent data services on the file system.
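To see which mapped pools diskmark created, you can list the pools and then display the details of one of them, using the commands from Table 2. This is a minimal sketch; the pool name is an illustrative placeholder:

$ nas_pool -list
$ nas_pool -info <mapped_pool_name>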
User-defined storage pools overview

User-defined storage pools allow you to create containers or pools of storage, filled with manually created volumes. When the applications require precise placement of file systems on particular disks or locations on specific disks, use AVM user-defined storage pools for more control. User-defined storage pools also allow you to reserve disk volumes so that the system-defined storage pools cannot use them.

User-defined storage pools provide a better option for those who want more control over their storage allocation while still using the more automated management tool. User-defined storage pools are not as automated as the system-defined storage pools. You must specify some attributes of the storage pool and the storage system from which the space is allocated to create file systems. While somewhat less involved than creating volumes and file systems manually, using these storage pools requires more manual involvement on your part than the system-defined storage pools. When you create a file system by using a user-defined storage pool, you must:

1. Create the storage pool.
2. Choose and add volumes to it either by manually selecting and building the volume structure or by auto-selection.
3. Extend it with new volumes when required.
4. Remove volumes you no longer require in the storage pool.

Auto-selection is performed by choosing a minimum size and a system pool which describes the disk attributes. With auto-selection, whole disk volumes are taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. Auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. System-defined storage pool volume and storage profiles on page 39 describes the AVM algorithms used.
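The four steps above map directly onto the nas_pool command options listed in Table 2. The sketch below shows both ways of adding volumes in step 2: manual selection with -volumes, and auto-selection by size against a system pool template. The pool names, volume names, and sizes are illustrative placeholders.

Steps 1 and 2, creating the pool from manually selected volumes:
$ nas_pool -create -name engpool -volumes d10,d11,d12,d13

Alternative step 2, auto-selecting whole disk volumes by size:
$ nas_pool -create -name engpool2 -size 500G -template clar_r5_performance

Step 3, extending the pool with new volumes:
$ nas_pool -xtend engpool -volumes d14,d15

Step 4, removing a volume that is no longer required:
$ nas_pool -shrink engpool -volumes d15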
File system and automatic file system extension overview

You can create or extend file systems with AVM storage pools and configure the file system to automatically extend as needed. You can do one of the following:

◆ Enable automatic extension on a file system when it is created.
◆ Enable and disable it at any later time by modifying the file system.

The options that work with automatic file system extension are as follows:

◆ HWM
◆ Maximum size
◆ Thin provisioning

The HWM and maximum size are described in Automatic file system extension on page 55. Thin provisioning is described in Thin provisioning on page 59.

The default supported maximum size for any file system is 16 TB.

With automatic extension, the maximum size is the size to which the file system could grow, up to the supported 16 TB. Setting the maximum size is optional with automatic extension, but mandatory with thin provisioning. With thin provisioning enabled, users and applications see the maximum size, while only a portion of that size is actually allocated to the file system. Automatic extension allows the file system to grow as needed without system administrator intervention, and meet system operations requirements continuously, without interruptions.
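Expressed with the nas_fs command, a thin, automatically extending file system is created in one step, as in the sketch below. The -auto_extend and -thin options appear elsewhere in this document; the -hwm and -max_size option names are assumed from this release's command set, so verify them against the VNX Command Line Interface Reference for File. The names and sizes are illustrative placeholders: the file system starts at 10 GB, extends when usage crosses the 90 percent HWM, and presents a 100 GB maximum size to clients.

$ nas_fs -name fs02 -type uxfs -create pool=clar_r5_performance storage=<system_name> size=10G -auto_extend yes -thin yes -hwm 90% -max_size 100G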
AVM storage pool and disk type options

AVM provides a range of options for managing volumes. The VNX system can choose the configuration and placement of the file systems by using system-defined storage pools, or you can create a user-defined storage pool and define its attributes.

This section contains the following:

◆ AVM storage pools on page 27
◆ Disk types on page 27
◆ System-defined storage pools on page 30
◆ RAID groups and storage characteristics on page 33
◆ User-defined storage pools on page 35

AVM storage pools

An AVM storage pool is a container or pool of volumes. Table 3 on page 27 lists the major difference between system-defined and user-defined storage pools.

Table 3. System-defined and user-defined storage pool difference

Functionality: Ability to grow and shrink
  System-defined storage pools: Automatic, but the dynamic behavior can be disabled.
  User-defined storage pools: Manual only. Administrators must manage the volume configuration, addition, and removal of storage from these storage pools.

Chapter 4 provides more detailed information.

Disk types

A storage pool must contain volumes from only one disk type.
Table 4 on page 28 lists the available disk types associated with the storage pools and the
disk type descriptions.
Table 4. Disk types

CLSTD: Standard VNX for block disk volumes.
CLATA: VNX for block Capacity disk volumes.
CLSAS: VNX for block Serial Attached SCSI (SAS) disk volumes.
CLEFD: VNX for block Performance and SATA II Flash drive disk volumes.
CMATA: VNX for block Capacity disk volumes for use with EMC MirrorView/Synchronous.
CMSTD: Standard VNX for block disk volumes for use with MirrorView/Synchronous.
CMEFD: VNX for block CLEFD disk volumes that are used with MirrorView/Synchronous.
CMSAS: VNX for block SAS disk volumes that are used with MirrorView/Synchronous.
STD: Standard Symmetrix disk volumes, typically RAID 1 configuration.
R1STD: Symmetrix Performance disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2STD: Standard Symmetrix disk volume that is a mirror of another standard Symmetrix disk volume over RDF links.
EFD: High performance Symmetrix disk volumes built on Flash drives, typically RAID 5 configuration.
ATA: Standard Symmetrix disk volumes built on Capacity drives, typically RAID 1 configuration.
R1ATA: Symmetrix Capacity disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2ATA: Symmetrix Capacity disk volumes, set up as target for mirrored storage that uses SRDF functionality.
Performance: VNX for block Performance disk volumes that correspond to VNX for block pool-based LUNs.
Capacity: VNX for block Capacity disk volumes that correspond to VNX for block pool-based LUNs.
Extreme_performance: VNX for block Flash disk volumes that correspond to VNX for block pool-based LUNs.
Mixed: For VNX for block, a mixture of VNX for block Performance, Capacity, or Flash disk volumes that correspond to VNX for block pool-based LUNs. For Symmetrix, a mixture of Symmetrix Flash, Performance, or Capacity disk volumes that correspond to devices in FAST Storage Groups.
Mirrored_mixed: For VNX for block, a mixture of VNX for block Performance, Capacity, or Flash disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_performance: For VNX for block, Performance disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_capacity: For VNX for block, Capacity disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_extreme_performance: For VNX for block, Flash disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
BCV: Business continuance volume (BCV) for use by TimeFinder/FS operations.
BCVA: BCV, built from Capacity disks, for use by TimeFinder/FS operations.
R1BCA: BCV, built from Capacity disks, that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCA: BCV, built from Capacity disks, that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
R1BCV: BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCV: BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
BCVMixed: BCV, built from a mixture of Symmetrix Flash, Performance, or Capacity disk volumes, and used by TimeFinder/FS operations.
R1Mixed: A mixture of Symmetrix Flash, Performance, or Capacity disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2Mixed: Mixed BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
R1BCVMixed: Mixed BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCVMixed: Mixed BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
System-defined storage pools

Choosing system-defined storage pools to build the file system is an efficient way to manage volumes and file systems. They are associated with the type of attached storage system you have. This means that:

◆ VNX for block storage pools are available for attached VNX for block storage systems.
◆ Symmetrix storage pools are available for attached Symmetrix storage systems.

System-defined storage pools are dynamic by default. The AVM feature adds and removes volumes automatically from the storage pool as needed. Table 5 on page 31 lists the system-defined storage pools supported on the VNX for file. RAID groups and storage characteristics on page 33 contains additional information about RAID group combinations for system-defined storage pools.

Note: A storage pool can include disk volumes of only one type.
Table 5. System-defined storage pools

symm_std: Designed for high performance and availability at medium cost. This storage pool uses STD disk volumes (typically RAID 1).

symm_ata: Designed for high performance and availability at low cost. This storage pool uses ATA disk volumes (typically RAID 1).

symm_std_rdf_src: Designed for high performance and availability at medium cost, specifically for storage that will be mirrored to a remote VNX for file that uses SRDF, or to a local VNX for file that uses TimeFinder/FS. Using SRDF/S with VNX for Disaster Recovery and Using TimeFinder/FS, NearCopy, and FarCopy on VNX for File provide more information about the SRDF feature.

symm_std_rdf_tgt: Designed for high performance and availability at medium cost, specifically as a mirror of a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R2STD disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.

symm_ata_rdf_src: Designed for archival performance and availability at low cost, specifically for storage mirrored to a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R1ATA disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.

symm_ata_rdf_tgt: Designed for archival performance and availability at low cost, specifically as a mirror of a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R2ATA disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.

symm_efd: Designed for very high performance and availability at high cost. This storage pool uses Flash disk volumes (typically RAID 5).

clar_r1: Designed for high performance and availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 1 mirrored-pair disk groups.

clar_r6: Designed for high availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 6 disk groups.

clar_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 4+1 RAID 5 disk groups.

clar_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 8+1 RAID 5 disk groups.

clarata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLATA disk drives in a RAID 5 configuration.

clarata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses LCFC, SATA II, and CLATA disk drives in a RAID 3 configuration.

clarata_r6: Designed for high availability at low cost. This storage pool uses CLATA disk volumes created from RAID 6 disk groups.

clarata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLATA disk volumes in a RAID 1/0 configuration.

clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses VNX Serial Attached SCSI (SAS) disk volumes created from RAID 5 disk groups.

clarsas_r6: Designed for high availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 6 disk groups.

clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLSAS disk volumes in a RAID 1/0 configuration.

clarefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CLEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups.

clarefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLEFD disk volumes in a RAID 1/0 configuration.

cm_r1: Designed for high performance and availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 1 mirrored-pair disk groups for use with MirrorView/Synchronous.

cm_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 4+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cm_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cm_r6: Designed for high availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CMATA disk drives in a RAID 5 configuration for use with MirrorView/Synchronous.

cmata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses CMATA disk drives in a RAID 3 configuration for use with MirrorView/Synchronous.

cmata_r6: Designed for high availability at low cost. This storage pool uses CMATA disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMATA disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

cmsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CMSAS disk volumes created from RAID 5 disk groups for use with MirrorView/Synchronous.

cmsas_r6: Designed for high availability at low cost. This storage pool uses CMSAS disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMSAS disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

cmefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CMEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cmefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMEFD disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
RAID groups and storage characteristics
The following table correlates the storage array to the RAID group combinations for system-defined storage pools.

Table 6. RAID group combinations

NX4 SAS or SATA
  RAID 5: 2+1, 3+1, 4+1, 5+1
  RAID 6: 4+2
  RAID 1: 1+1 RAID 1/0

NS20 / NS40 / NS80 FC
  RAID 5: 4+1, 8+1
  RAID 6: 4+2, 6+2, 12+2
  RAID 1: 1+1 RAID 1

NS20 / NS40 / NS80 ATA
  RAID 5: 4+1, 6+1, 8+1
  RAID 6: 4+2, 6+2, 12+2
  RAID 1: Not supported

NS-120 / NS-480 / NS-960 FC
  RAID 5: 4+1, 8+1
  RAID 6: 4+2, 6+2, 12+2
  RAID 1: 1+1 RAID 1/0

NS-120 / NS-480 / NS-960 ATA
  RAID 5: 4+1, 6+1, 8+1
  RAID 6: 4+2, 6+2, 12+2
  RAID 1: 1+1 RAID 1/0

NS-120 / NS-480 / NS-960 EFD
  RAID 5: 4+1, 8+1
  RAID 6: Not supported
  RAID 1: 1+1 RAID 1/0

VNX SAS
  RAID 5: 3+1, 4+1, 6+1, 8+1
  RAID 6: 4+2, 6+2
  RAID 1: 1+1 RAID 1/0

VNX NL SAS
  RAID 5: Not supported
  RAID 6: 4+2, 6+2
  RAID 1: Not supported
User-defined storage pools
For some customer environments, more user control is required than the system-defined storage pools offer. One way for administrators to have more control is to create their own storage pools and define the attributes of each storage pool.

AVM user-defined storage pools allow you to have more control over how the storage is allocated to file systems. Administrators can create a storage pool and add volumes to it, either by manually selecting and building the volume structure, or by auto-selection, extending the storage pool with new volumes when required, and removing volumes that are no longer required in the storage pool.

Auto-selection is performed by choosing a minimum size and a system pool that describes the disk attributes. With auto-selection, whole disk volumes are taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. Auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. When extending a user-defined storage pool, AVM references the last pool member's volume structure and makes a best effort to keep the underlying volume structures consistent. System-defined storage pool volume and storage profiles on page 39 contains additional information.

While user-defined storage pools have attributes similar to system-defined storage pools, user-defined storage pools are not dynamic. They require administrators to explicitly add and remove volumes manually.

If you define the storage pool, you must also explicitly add and remove storage from the storage pool and define the attributes for that storage pool. Use the nas_pool command to do the following:
- List, create, delete, extend, shrink, and view storage pools.
- Modify the attributes of storage pools.
Create file systems with AVM on page 70 and Chapter 4 provide more information.
Understanding how AVM storage pools work enables you to determine whether
system-defined storage pools, user-defined storage pools, or both, are appropriate for the
environment. It is also important to understand the ways in which you can modify the
storage-pool behavior to suit your file system requirements. Modify system-defined and
user-defined storage pool attributes on page 109 provides a list of all the attributes and the
procedures to modify them.
Storage pool attributes
System-defined and user-defined storage pools have attributes that control how they create volumes and file systems. Table 7 on page 36 lists the storage pool attributes, their values, whether an attribute is modifiable and for which storage pools, and a description of the attribute. The system-defined storage pools are shipped with the VNX system. They are designed to optimize performance based on the hardware configuration. Each of the system-defined storage pools has associated profiles that define the kind of storage used, and how new storage is added to, or deleted from, the storage pool.
Table 7. Storage pool attributes

name
  Values: Quoted string
  Modifiable: Yes (user-defined storage pools)
  Description: Unique name. If a name is not specified during creation, one is automatically generated.

description
  Values: Quoted string
  Modifiable: Yes (user-defined storage pools)
  Description: A text description. Default is a blank string.

acl
  Values: Integer. For example, 0.
  Modifiable: Yes (user-defined storage pools)
  Description: Access control level. Controlling Access to System Objects on VNX contains instructions to manage access control levels.

default_slice_flag
  Values: "y" | "n"
  Modifiable: Yes (system-defined and user-defined storage pools)
  Description: Indicates whether AVM can slice member volumes to meet the file system request. A y entry tells AVM to create a slice of exactly the correct size from one or more member volumes. An n entry gives the primary or source file system exclusive access to one or more member volumes.
  Note: If using TimeFinder or automatic file system extension, this attribute should be set to n. You cannot restore file systems built with sliced volumes to a previous state by using TimeFinder/FS.

is_dynamic
  Values: "y" | "n"
  Modifiable: Yes (system-defined storage pools)
  Description: Indicates whether this storage pool is allowed to automatically add or remove member volumes. The default value is n.
  Note: This attribute is applicable only if volume_profile is not blank.

is_greedy
  Values: "y" | "n"
  Modifiable: Yes (system-defined and user-defined storage pools)
  Description: Indicates whether a storage pool is greedy. This option works differently depending on whether you are using a system-defined storage pool or a user-defined storage pool.
  System-defined storage pools: When a storage pool receives a request for space, a greedy storage pool attempts to create a new member volume before searching for free space in existing member volumes. The attribute value for this storage pool is y. A storage pool that is not greedy uses all available space in the storage pool before creating a new member volume. The attribute value for this storage pool is n.
  Note: When extending a file system, AVM searches for free space on the existing volumes that the file system is currently using and ignores the is_greedy attribute value. If there is not enough free space available, AVM first uses the available space of the existing volumes of the file system, and then uses the is_greedy attribute value to determine where to look for the remaining space.
  User-defined storage pools: If set to n (default), the system uses space from the user-defined storage pool's existing member volumes, in the order that the volumes were added to the pool, to create a new file system or extend an existing file system. If set to y, the system uses space from the least-used member volume in the user-defined storage pool to create a new file system. When there is more than one least-used member volume available, AVM selects the member volume that contains the most disk volumes. For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If there are two or more member volumes that have the same number of disk volumes, AVM selects the one with the lowest ID.
  Note: This attribute is applicable only if volume_profile is not blank.
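To make the greedy selection order concrete, here is a minimal Python sketch of the documented tie-breaking rules. It is illustrative only, not product code, and the field names (id, used_pct, disk_volume_count) are invented for the example:

    def pick_member(members):
        # Least-used member volume first; ties broken by the most disk
        # volumes, then by the lowest member ID.
        return min(members, key=lambda m: (m["used_pct"],
                                           -m["disk_volume_count"],
                                           m["id"]))

    members = [
        {"id": 12, "used_pct": 10, "disk_volume_count": 4},
        {"id": 7,  "used_pct": 10, "disk_volume_count": 8},
    ]
    print(pick_member(members)["id"])  # prints 7: tie on usage, more disk volumes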
The system-defined storage pools are designed for use with the Symmetrix and VNX for block storage systems. The structure of volumes created by AVM might differ greatly depending on the type of storage system that is used by the various storage pools. This difference allows AVM to exploit the architecture of current and future block storage devices that are attached to the VNX for file.

Figure 1 on page 39 shows how the different storage pools are associated with the disk volumes for each attached storage-system type. The nas_disk -list command lists the disk volumes, which represent the LUNs exported to the VNX for file from the attached storage system.

Note: Any given disk volume must be a member of only one storage pool.
Figure 1. AVM system-defined storage pools (diagram showing disk volumes from an attached Symmetrix storage system and a VNX for block storage system populating AVM storage pools such as symm_std, symm_std_rdf_src, clar_r1, clar_r5_performance, clar_r5_economy, clarata_archive, clarata_r3, cmata_archive, cmata_r3, and cmata_r6)
System-defined storage pool volume and storage profiles
Volume profiles are the set of rules and parameters that define how new storage is added to a system-defined storage pool. A volume profile defines a standard method of building a large section of storage from a set of disk volumes. This large section of storage can be added to a storage pool that might contain similar large sections of storage. The system-defined storage pool is responsible for satisfying requests for any amount of storage.

Users cannot create or delete system-defined storage pools and their associated profiles. However, users can list, view, and extend the system-defined storage pools, and also modify storage pool attributes.

Volume profiles have an attribute named storage_profile. A volume profile's storage profile defines the rules and attributes that are used to aggregate a number of disk volumes (listed by the nas_disk -list command) into a volume that can be added to a system-defined storage pool. A volume profile uses its storage profile to determine the set of disk volumes to select (or to match existing VNX disk volumes), where a given disk volume might match the rules and attributes of a storage profile.
The following sections explain how these profiles help system-defined storage pools aggregate the disk volumes into storage pool members, place the members into storage pools, and then build file systems for each storage-system type:
- VNX for block system-defined storage pool algorithms on page 40
- VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support on page 43
- VNX for block system-defined storage pools for Flash support on page 45
- Symmetrix system-defined storage pools algorithm on page 46
- VNX for block primary pool-based file system algorithm on page 48
- VNX for block secondary pool-based file system algorithm on page 50
- Symmetrix mapped pool file systems on page 51
When using the system-defined storage pools without modifications by using the Unisphere software or the VNX CLI, this activity is transparent to users.
VNX for block system-defined storage pool algorithms
When you create a file system that requires new storage, AVM attempts to create the most optimal stripe volume for a VNX for block storage system. System-defined storage pools for VNX for block storage systems work with LUNs of a specific type, for example, 4+1 RAID 5 LUNs for the clar_r5_performance storage pool.

VNX for block integrated models use storage templates to create the LUNs that the VNX for file recognizes as disk volumes. VNX for block storage templates are a combination of template definition files and scripts that create RAID groups and bind LUNs on VNX for block storage systems. You see only the scripts, not the templates. These storage templates are invoked by using the VNX for block root-only setup script or by using the Unisphere software.

Disk volumes exported from a VNX for block storage system are relatively large. A VNX for block system also has two storage processors (SPs). Most VNX for block storage templates create two LUNs per RAID group, one owned by SP A and the other by SP B. Only the VNX for block RAID 3 storage templates create both LUNs owned by one of the SPs.
If no disk volumes are found when a request for space is made, AVM considers the storage pool attributes, and initiates the next step based on these settings:
- The is_greedy setting indicates whether the system-defined storage pool must add a new member volume to meet the request, or whether it must use all the available space in the storage pool before adding a new member volume. AVM then checks the is_dynamic setting.
  Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
- The is_dynamic setting indicates whether the storage pool can dynamically grow and shrink:
  - If set to yes, AVM is allowed to automatically add a member volume to meet the request.
  - If set to no, and a member volume must be added to meet the request, the user must manually add the member volume to the storage pool.
- The flag that requests a file system slice indicates whether the file system can be built on a slice volume from a member volume.
- The default_slice_flag setting indicates whether AVM can slice storage pool member volumes to meet the request.
Most of the system-defined storage pools for VNX for block storage systems first search for four same-size disk volumes from different buses, different SPs, and different RAID groups. The absolute criteria that the volumes must meet are as follows:
- A disk volume cannot exceed 14 TB.
- A disk volume must match the type specified in the storage profile of the storage pool.
- Disk volumes must be of the same size.
- No two disk volumes can come from the same RAID group.
- Disk volumes must be on a single storage system.
If found, AVM stripes the LUNs together and inserts the stripe into the storage pool.

If AVM cannot find four disk volumes that are bus-balanced, it looks for four same-size disk volumes that are SP-balanced from different RAID groups. If not found, AVM then searches for four same-size disk volumes from different RAID groups.

Next, if AVM has been unable to satisfy these requirements, it looks for three same-size disk volumes that are SP-balanced from different RAID groups, and so on, until the only option left is for AVM to use one disk volume. The criteria that the one disk volume must meet are as follows:
- The disk volume cannot exceed 14 TB.
- The disk volume must match the type specified in the storage profile of the storage pool.
- If multiple volumes match the first two criteria, the disk volume must be from the least-used RAID group.
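The search order described above can be summarized in a short sketch. The following Python fragment is illustrative only, not product code; find_candidates is a hypothetical helper that would apply the absolute criteria listed above and return matching same-size disk volumes:

    SEARCH_ORDER = [
        (4, "bus-balanced"), (4, "sp-balanced"), (4, "different-raid-groups"),
        (3, "sp-balanced"), (3, "different-raid-groups"),
        (2, "sp-balanced"), (2, "different-raid-groups"),
        (1, "least-used-raid-group"),
    ]

    def select_disk_volumes(find_candidates):
        # Walk the documented preference ladder from the best case (four
        # same-size disk volumes balanced across buses) down to one volume.
        for count, balance in SEARCH_ORDER:
            volumes = find_candidates(count, balance)
            if volumes:
                return volumes  # AVM stripes these LUNs together
        return None  # no free disk volumes: AVM considers the pool attributes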
Figure 2 on page 42 shows the algorithm used to select disk volumes to add to a pool member in an AVM VNX for block system-defined storage pool, which is either clar_r1, clar_r5_performance, or clar_r5_economy.
Figure 2. clar_r1, clar_r5_performance, and clar_r5_economy storage pools algorithm (flowchart; "least used" is defined by the number of disk volumes used in a RAID group divided by the number of disk volumes visible in that RAID group)
Figure 3 on page 42 shows the structure of a clar_r5_performance storage pool. The volumes in the storage pool are balanced between SP A and SP B.
Figure 3. clar_r5_performance storage pool structure (diagram showing two stripe volumes, each built from 4+1 RAID 5 disk volumes, with LUN ownership balanced between storage processor A and storage processor B)
VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support
The three VNX for block system-defined storage pools that provide support for the SATA environment are clarata_archive (RAID 5), clarata_r3 (RAID 3), and clarata_r10 (RAID 1/0).

The clarata_r3 storage pool follows the basic VNX for block algorithm explained in System-defined storage pool volume and storage profiles on page 39, but uses only one disk volume and does not allow striping of volumes. One of the applications for this pool is backup to disk. Users can manage the RAID 3 disk volumes manually in a user-defined storage pool. However, using the system-defined storage pool clarata_r3 helps users maximize the benefit from AVM disk selection algorithms. The clarata_r3 storage pool supports only VNX for block Capacity drives, not Performance drives.

The criteria that the one disk volume must meet are as follows:
- The disk volume cannot exceed 14 TB.
- The disk volume must match the type specified in the storage profile of the storage pool.
- If multiple volumes match the first two criteria, the disk volume must be from the least-used RAID group.

Figure 4 on page 44 shows the storage pool clarata_r3 algorithm.
Figure 4. clarata_r3 storage pool algorithm
The storage pools clarata_archive and clarata_r10 differ from the basic VNX for block algorithm. These storage pools use two disk volumes or a single disk volume, and all Capacity drives are the same.

Figure 5 on page 45 shows the profile algorithm used to select disk volumes by using either the clarata_archive or clarata_r10 storage pool.
Figure 5. clarata_archive and clarata_r10 storage pools algorithm
VNX for block system-defined storage pools for Flash support
The VNX for file provides the clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools for Flash drive support on the VNX for block storage system. AVM uses the same disk selection algorithm and volume structure for each Flash pool. However, the algorithm differs from the standard VNX for block algorithm explained in System-defined storage pool volume and storage profiles on page 39, and is outlined next. The algorithm adheres to EMC best practices to achieve the overall best performance and use of Flash drives. Users can also manually manage Flash drives in user-defined pools.

The AVM algorithm used for disk selection and volume structure for all Flash system-defined pools is as follows:
1. The LUN creation process is responsible for storage processor balancing. By default, run the setup_clariion command on integrated systems to set up storage processor balancing.
2. Use a default stripe width of 256 KB (provided in the profile). The stripe member count in the profile is ignored and should be left at 1.
3. When two or more LUNs of the same size are available, always stripe LUNs. Otherwise, concatenate LUNs.
4. No RAID group balancing or RAID group usage is considered.
5. No order is applied to the LUNs being striped together, except that all LUNs from the same RAID group in the stripe will be next to each other. For example, storage processor balanced order is not applied.
6. Use a maximum of two RAID groups from which to take LUNs:
   a. If only one RAID group is available, use every same-size LUN in the RAID group. This maximizes the LUN count and meets the size requested.
   b. If only two RAID groups are available, use every same-size LUN in each RAID group. This maximizes the LUN count and meets the size requested.
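Rules 3 and 6 can be illustrated with a minimal Python sketch. It is not product code; the optimum stripe element count expression (#dVols divided by round(#RGs/2)) is taken from Figure 6, and the function name and arguments are invented for the example:

    def flash_layout(same_size_luns, num_raid_groups):
        # Rule 3: stripe only when two or more same-size LUNs are available.
        if len(same_size_luns) >= 2:
            # Figure 6: optimum stripe element count = #dVols / round(#RGs / 2),
            # with at most two RAID groups contributing LUNs (rule 6).
            osec = len(same_size_luns) // max(1, round(num_raid_groups / 2))
            return ("stripe", same_size_luns[:osec])  # 256 KB stripe width (rule 2)
        return ("concatenate", same_size_luns)

    print(flash_layout(["d10", "d11", "d12", "d13"], 2))  # ('stripe', four LUNs)
    print(flash_layout(["d10"], 1))                       # ('concatenate', ['d10'])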
Figure 6 on page 46 shows the profile algorithm used to select disk volumes by using either
the clarefd_r5, clarefd_r10, cmefd_r5, or cmefd_r10 storage pool.
Figure 6. clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools algorithm (flowchart; Flash disk volumes are sorted into size buckets, candidate stripes are constructed to best fit the optimum stripe element count, #dVols/round(#RGs/2), and stripes are sliced or concatenated until the requested file system size is met)
Symmetrix system-defined storage pools algorithm
AVM works differently with Symmetrix storage systems because of the size and uniformity of the disk volumes involved. Typically, the disk volumes exported from a Symmetrix storage system are small and uniform in size. The aggregation strategy used by Symmetrix storage pools is primarily to combine many small disk volumes into larger volumes that can be used by file systems. AVM attempts to distribute the input/output (I/O) to as many Symmetrix directors as possible. The Symmetrix storage system can use slicing and striping to distribute I/O among the physical disks on the storage system, so this is less of a concern for the AVM aggregation strategy.

A Symmetrix storage pool creates a stripe volume across one set of Symmetrix disk volumes, or creates a metavolume, as necessary to meet the request. The stripe or metavolume is added to the Symmetrix storage pool. When the administrator asks for a specific number of gigabytes of space from the Symmetrix storage pool, the requested size of space is allocated from this system-defined storage pool. AVM adds to and takes from the system-defined storage pool as required. The stripe size is set in the system-defined profiles. You cannot modify the stripe size of a system-defined storage pool. The default stripe size for a Symmetrix storage pool is 256 KB. Multipath file system (MPFS) requires a stripe depth of 32 KB or greater.

The algorithm that AVM uses looks for a set of eight disk volumes. If a set of eight is not found, the algorithm looks for a set of four disk volumes. If a set of four is not found, the algorithm looks for a set of two disk volumes. If a set of two is not found, the algorithm looks for one disk volume. AVM stripes the disk volumes together if the disk volumes are all of the same size. If the disk volumes are not the same size, AVM creates a metavolume on top of the disk volumes. AVM then adds the stripe or the metavolume to the storage pool.

If AVM cannot find any disk volumes, it looks for a metavolume in the storage pool that has space, takes a slice from that metavolume, and makes a metavolume over that slice.
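The 8/4/2/1 search and the stripe-versus-metavolume decision can be sketched briefly in Python. This is illustrative only, not product code; find_set is a hypothetical helper that returns a set of free disk volumes of the requested count, each represented here as a dict with a size key:

    def symm_aggregate(find_set):
        for count in (8, 4, 2, 1):
            dvols = find_set(count)
            if dvols:
                # Stripe if all members are the same size; otherwise build
                # a metavolume on top of the disk volumes.
                same_size = len({d["size"] for d in dvols}) == 1
                kind = "stripe" if same_size else "metavolume"
                return (kind, dvols)  # the result is added to the storage pool
        # No free disk volumes: AVM instead slices an existing metavolume
        # in the pool that has space remaining.
        return None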
Figure 7 on page 47 shows the AVM algorithm used to select disk volumes by using a
Symmetrix system-defined storage pool.
Figure 7. Symmetrix storage pool algorithm
Figure 8 on page 48 shows the structure of a Symmetrix storage pool.
Figure 8. Symmetrix storage pool structure (diagram showing two stripe volumes built from Symmetrix STD disk volumes d3 through d10)
All this system-defined storage pool activity is transparent to users and provides an easy way to create and manage file systems. The system-defined storage pools do not allow users to have much control over how AVM aggregates storage to meet file system needs, but most users prefer ease-of-use over control.

When users make a request for a new file system that uses the system-defined storage pools, AVM does the following:
1. Determines if more volumes need to be added to the storage pool. If so, selects and adds volumes.
2. Selects an existing, available storage pool volume to use for the file system. The volume might also be sliced to obtain the correct size for the file system request. If the request is larger than the largest volume, AVM concatenates volumes to create the size required to meet the request.
3. Places a metavolume on the resulting volume and builds the file system within the metavolume.
4. Returns the file system information to the user.
All system-defined storage pools have specific, predictable rules for getting disk volumes
into storage pools, provided by their associated profiles.
VNX for block primary pool-based file system algorithm
AVM uses the primary pool-based algorithm as follows:
1. Striping is tried first. If disk volumes cannot be striped, then concatenation is tried.
2. AVM checks for free disk volumes:
   - If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.
   - If there are free disk volumes:
     a. AVM sorts them into thin and thick disk volumes.
     b. AVM sorts the thin and thick disk volumes into size groups.
3. AVM first checks for thick disk volumes that satisfy the target number of disk volumes to stripe. The default target is 5.
4. AVM tries to stripe five disk volumes together, with the same size, the same data service policies, and in an SP-balanced manner. If five disk volumes cannot be found, AVM tries four, then three, and then two.
5. AVM selects SP-balanced disk volumes before selecting disk volumes with the same data service policies.
6. If no thick disk volumes are found, AVM then checks for thin disk volumes that satisfy the target number.
7. If the space needed is not found, AVM uses the VNX for block secondary pool-based file system algorithm on page 50 to look for the space.

Note:
- For file system extension, AVM always tries to expand onto the existing volumes of the file system. However, if there is not enough space to fulfill the size request on the existing volumes, additional storage is obtained using the above algorithm, and AVM attempts to match the data service policies of the first used disk volume of the file system.
- All volumes mentioned above, whether a stripe or a concatenation, are sliced by default.
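Steps 3 through 6 amount to a thick-first search with a shrinking stripe target. The following Python fragment is a minimal sketch of that loop, not product code; find is a hypothetical matcher that would enforce the same-size, SP-balanced, and data-service conditions:

    def primary_pool_select(find, target=5):
        for kind in ("thick", "thin"):          # thick disk volumes first
            for count in range(target, 1, -1):  # try 5, then 4, 3, 2
                dvols = find(kind, count)
                if dvols:
                    return ("stripe", dvols)
        # Nothing found: fall through to the secondary pool-based algorithm.
        return None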
Figure 9 on page 49 shows the VNX for block primary pool-based file system algorithm.
Figure 9. VNX for block primary pool-based file systems
VNX for block secondary pool-based file system algorithm
AVM uses the secondary pool-based algorithm as follows:
1. Concatenation will be used. Striping will not be used.
2. Unless requested, slicing will not be used.
3. AVM checks for free disk volumes, and sorts them into thin and thick disk volumes:
   - If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.
   - If there are free disk volumes:
     a. AVM first checks for thick disk volumes that satisfy the size request (equal to or greater than the file system size).
     b. If not found, AVM then checks for thin disk volumes that satisfy the size request.
     c. If still not found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.
4. If one disk volume satisfies the size request exactly, AVM takes the selected disk volume and uses the whole disk to build the file system.
5. If a larger disk volume is found that is a better fit than any set of smaller disks, AVM uses the larger disk volume.
6. If multiple disk volumes satisfy the size request, AVM sorts the disk volumes from smallest to largest, and then sorts them into alternating SP A and SP B lists. Starting with the first disk volume, AVM searches through a list for matching data services until the size request is met. If the size request is not met, AVM searches again but ignores the data services.

Note: Mapped pools are treated as standard AVM pools, not as user-defined pools, except that mapped pools are always dynamic and the is_greedy option is ignored.
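The best-fit search in steps 3 through 6 can be sketched in Python. This is illustrative only, not product code, and it simplifies the documented behavior: it omits the alternating SP A/SP B ordering and the data-service matching pass. Each disk volume is represented as a dict with a size key:

    def secondary_pool_select(thick, thin, size_needed):
        # Search thick disk volumes first, then thin, then both combined.
        for bucket in (thick, thin, thick + thin):
            # An exact fit is used whole.
            exact = next((d for d in bucket if d["size"] == size_needed), None)
            if exact:
                return [exact]
            # A single larger disk volume can be a better fit than a set of
            # smaller ones; prefer the smallest such volume.
            larger = [d for d in bucket if d["size"] > size_needed]
            if larger:
                return [min(larger, key=lambda d: d["size"])]
            # Otherwise concatenate smaller disk volumes, smallest first,
            # until the size request is met.
            chosen, total = [], 0
            for d in sorted(bucket, key=lambda d: d["size"]):
                chosen.append(d)
                total += d["size"]
                if total >= size_needed:
                    return chosen
        return None  # not enough space: the request fails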
Figure 10 on page 51 shows the VNX for block secondary pool-based file system algorithm.
Figure 10. VNX for block secondary pool-based file systems
Symmetrix mapped pool file systems
AVM builds a Symmetrix mapped pool file system as follows:
1. Unless requested, slicing will not be used.
2. AVM checks for free disk volumes, and sorts them into thin and thick disk volumes for the purpose of striping together the same type of disk volumes:
   - If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.
   - If there are free disk volumes:
     a. AVM first checks for a set of eight disk volumes.
     b. If a set of eight is not found, AVM then looks for a set of four disk volumes.
     c. If a set of four is not found, AVM then looks for a set of two disk volumes.
     d. If a set of two is not found, AVM finally looks for one disk volume.
3. When free disk volumes are found:
   a. AVM first checks for thick disk volumes that satisfy the size request, which can be equal to or greater than the file system size. If thick disk volumes are available, AVM first tries to stripe the thick disk volumes that have the same disk type. Otherwise, AVM stripes together thick disk volumes that have different disk types.
   b. If thick disks are not found, AVM then checks for thin disk volumes that satisfy the size request. If thin disk volumes are available, AVM first tries to stripe the thin disk volumes that have the same disk type, where "same" means the single disk type of the pool in which it resides. Otherwise, AVM stripes together thin disk volumes that have different disk types.
   c. If thin disks are not found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.
4. If neither thick nor thin disk volumes satisfy the size request, AVM then checks whether striping of one same disk type will satisfy the size request, ignoring whether the disk volumes are thick or thin.
5. If still no matches are found, AVM checks whether slicing was requested:
   a. If slicing was requested, AVM checks whether any stripes exist that satisfy the size request. If yes, AVM slices an existing stripe.
   b. If slicing was not requested, AVM checks whether any free disk volumes can be concatenated to satisfy the size request. If yes, AVM concatenates disk volumes, matching data services if possible, and builds the file system.
6. If still no matches are found, there is not enough space available and the request fails.

Note: Mapped pools are treated as standard AVM pools, not as user-defined pools, except that mapped pools are always dynamic and the is_greedy option is ignored.
Figure 11 on page 53 shows the Symmetrix mapped pool algorithm.
Figure 11. Symmetrix mapped pool file systems
File system and storage pool relationship
When you create a file system that uses a system-defined storage pool, AVM consumes disk volumes either by adding new members to the pool, or by using existing pool members.

To create a file system by using a user-defined storage pool, do one of the following:
- Create the storage pool and add the volumes you want to use manually before creating the file system.
- Let AVM create the user pool by size.

Deleting a file system associated with either a system-defined or user-defined storage pool returns the unused space to the storage pool, but the storage pool might continue to reserve that space for future file system requests. Figure 12 on page 54 shows two file systems built from an AVM storage pool.
Figure 12. File systems built by AVM (diagram showing two file systems, FS1 and FS2, each built on a metavolume over a slice of the storage pool's member volumes)
As Figure 13 on page 54 shows, if FS2 is deleted, the storage used for that file system is
returned to the storage pool. AVM continues to reserve it, as well as any other member
volumes that are available in the storage pool, for a future request. This practice is true of
system-defined and user-defined storage pools.
Figure 13. FS2 deletion returns storage to the storage pool
If FS1 is also deleted, the storage that was used for the file systems is no longer required.
A system-defined storage pool removes the volumes from the storage pool and returns the
disk volumes to the storage system for use with other features or storage pools. You can
change the attributes of a system-defined storage pool so that it is not dynamic, and will
not grow and shrink dynamically. By making this change, you increase your direct
involvement in managing the volume structure of the storage pool, including adding and
removing volumes.
A user-defined storage pool does not have any capability to add and remove volumes. To
use volumes contained in a user-defined storage pool for another purpose, you must remove
the volumes. Remove volumes from storage pools on page 122 provides more information.
Otherwise, the user-defined storage pool continues to reserve the space for use by that pool.
Figure 14 on page 55 shows that the storage pool container continues to exist after the file
systems are deleted, and AVM continues to reserve the volumes for future requests of that
storage pool.
Figure 14. FS1 deletion leaves storage pool container with volumes
If you have modified the attributes that control the dynamic behavior of a system-defined storage pool, use the procedure in Remove volumes from storage pools on page 122 to remove volumes from the system-defined storage pool.

To reuse the volumes of a user-defined storage pool for other purposes, remove the volumes or delete the storage pool.
Automatic file system extension
Automatic file system extension works only when an AVM storage pool is associated with
a file system. You can enable or disable automatic extension when you create a file system
or modify the file system properties later.
Create file systems with AVM on page 70 provides the procedure to create file systems with
AVM system-defined or user-defined storage pools and enable automatic extension on a
newly created file system.
Enable automatic file system extension and options on page 91 provides the procedure to
modify an existing file system and enable automatic extension.
You can set the HWM and maximum size for automatic file system extension. The Control
Station might attempt to extend the file system several times, depending on these settings.
HWM
The HWM identifies the threshold for initiating automatic file system extension. The HWM value must be between 50 percent and 99 percent. The default HWM is 90 percent of the file system size.

Automatic extension guarantees that the file system usage is at least 3 percent below the HWM. Figure 15 on page 58 contains the algorithm for how the calculation is performed. For example, a 100 GB file system reaches its 80 percent HWM at 80 GB. The file system then automatically extends to 110 GB and is now at 72.73 percent usage (80 GB), which is well below the 80 percent HWM for the 110 GB file system.
- If automatic extension is disabled, when the file system reaches the 90 percent (internal) HWM, an event notification is sent. You must then manually extend the file system. Ignoring the notification could cause data loss.
- If automatic extension is enabled, when the file system reaches the HWM, an automatic extension event notification is sent to the sys_log and the file system automatically extends without any administrative action. Calculating the automatic extension size depends on the extend_size value and the current file system size:

  extend_size = polling_interval * io_rate * 100 / (100 - HWM)

  where:
  - polling_interval: default is 10 seconds
  - io_rate: default is 200 MB/s
  - HWM: value is set per file system

  If a file system is smaller than the extend_size value, it extends by its own size when it reaches the HWM. If a file system is larger than the extend_size value, it extends by 5 percent of its size or by the extend_size, whichever is larger, when it reaches the HWM.
Examples
The following examples use file system sizes of 100 GB and 500 GB, and HWM values of 80 percent, 85 percent, 90 percent, and 95 percent:

Example 1: 100 GB file system, 85 percent HWM
extend_size = (10*200*100)/(100-85)
Result = 13.3 GB
13.3 GB is greater than 5 GB (which is 5 percent of 100 GB). Therefore, the file system is extended by 13.3 GB.

Example 2: 100 GB file system, 90 percent HWM
extend_size = (10*200*100)/(100-90)
Result = 20 GB
20 GB is greater than 5 GB (which is 5 percent of 100 GB). Therefore, the file system is extended by 20 GB.

Example 3: 500 GB file system, 90 percent HWM
extend_size = (10*200*100)/(100-90)
Result = 20 GB
20 GB is less than 25 GB (which is 5 percent of 500 GB). Therefore, the file system is extended by 25 GB.

Example 4: 500 GB file system, 95 percent HWM
extend_size = (10*200*100)/(100-95)
Result = 40 GB
40 GB is greater than 25 GB (which is 5 percent of 500 GB). Therefore, the file system is extended by 40 GB.

Example 5: 500 GB file system, 80 percent HWM
extend_size = (10*200*100)/(100-80)
Result = 10 GB
A 10 GB extension would leave the file system at 78.4 percent used (400/510 * 100), which is above the (HWM - 3) limit of 77 percent. Therefore, the file system is instead extended to 400 * 100/77, or approximately 519.5 GB, a single 19.5 GB extension.
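These rules can be checked with a small Python sketch. It is illustrative only, not product code; it implements the documented formula and the 5 percent rule (treating 1 GB as 1000 MB, matching the examples), but it does not model the additional "3 percent below HWM" adjustment shown in Example 5:

    def extension_size_gb(fs_size_gb, hwm_percent, polling_interval=10, io_rate=200):
        # extend_size = polling_interval * io_rate * 100 / (100 - HWM), in MB.
        extend_size_gb = (polling_interval * io_rate * 100.0
                          / (100 - hwm_percent)) / 1000.0
        if fs_size_gb < extend_size_gb:
            return fs_size_gb                        # extend by the file system size
        return max(extend_size_gb, 0.05 * fs_size_gb)  # 5 percent rule

    print(round(extension_size_gb(100, 85), 1))  # 13.3  (Example 1)
    print(round(extension_size_gb(100, 90), 1))  # 20.0  (Example 2)
    print(round(extension_size_gb(500, 90), 1))  # 25.0  (Example 3)
    print(round(extension_size_gb(500, 95), 1))  # 40.0  (Example 4)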
Maximum size
The default maximum size for any file system is 16 TB. The maximum size for automatic file system extension is from 3 MB up to 16 TB. If thin provisioning is enabled and the selected storage pool is a traditional RAID group (non-virtual VNX for block thin) storage pool, the maximum size is required. Otherwise, this field is optional. The extension size also depends on there being additional space in the storage pool associated with the file system.
Automatic file extension conditions
The conditions for automatically extending a file system are as follows:
- If the file system size reaches the specified maximum size, the file system cannot extend beyond that size, and the automatic extension operation is rejected.
- If the available space is less than the extend size, the file system extends by the maximum available space.
- If only the HWM is set with automatic extension enabled, the file system automatically extends when that HWM is reached, if there is space available and the file system size is less than 16 TB.
- If only the maximum size is specified with automatic extension enabled, the file system automatically extends when the default HWM of 90 percent is reached, provided the file system has space available and the maximum size has not been reached.
- If the file system reaches or exceeds the set maximum size, automatic extension is rejected.
- If the HWM or maximum file size is not set, but either automatic extension or thin provisioning is enabled, the file system's HWM and maximum size are set to the default values of 90 percent and 16 TB, respectively.
Calculating the size of an automatic file system extension
During each automatic file system extension, fs_extend_handler, located on the Control Station (/nas/sbin/fs_extend_handler), calculates the extension size by using the algorithm shown in Figure 15 on page 58.

Figure 15. Calculating the size of an automatic file system extension
Thin provisioning
The thin provisioning option allows you to allocate storage capacity based on anticipated
needs, while you dedicate only the resources you currently need. Combining automatic file
system extension and thin provisioning lets you grow the file system gradually as needed.
When thin provisioning is enabled and a virtual storage pool is not being used, the virtual
maximum file system size is reported to NFS and CIFS clients. If a virtual storage pool is
being used, the actual file system size is reported to NFS and CIFS clients.
Note: Enabling thin provisioning with automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system.
Planning considerations
This section covers important volume and file system planning information and guidelines,
interoperability considerations, storage pool characteristics, and upgrade considerations
that you need to know when implementing AVM and automatic file system extension.
Review these topics:
- File system management and the nas_fs command
- The EMC SnapSure feature (checkpoints) and the fs_ckpt command
- VNX for file volume management concepts (metavolumes, slice volumes, stripe volumes, and disk volumes) and the nas_volume, nas_server, nas_slice, and nas_disk commands
- RAID technology
- Symmetrix storage systems
- VNX for block storage systems
Interoperability considerations
When using automatic file system extension with replication, consider these guidelines:
- Enable automatic extension and thin provisioning only on the source file system. The destination file system is synchronized with the source and extends automatically. When the source file system reaches its HWM, the destination file system automatically extends first, and then the source file system automatically extends.
- Do one of the following:
  - Set up the source replication file system with automatic extension enabled, as explained in Create file systems with automatic file system extension on page 81.
  - Modify an existing source file system to automatically extend by using the procedure Enable automatic file system extension and options on page 91.
- If the extension of the destination file system succeeds but the extension of the source file system fails, the automatic extension operation stops functioning. You receive an error message that indicates the failure is due to the limitation of available disk space on the source side. Manually extend the source file system to make the source and destination file systems the same size by using the nas_fs -xtend <fs_name> -option src_only command. Using VNX Replicator provides more detailed information on correcting the failure.

Other interoperability considerations are:
- The automatic extension and thin provisioning configuration is not moved over to the destination file system during replication failover. If you intend to reverse the replication, and the destination file system becomes the source, you must enable automatic extension on the new source file system.
- With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the VNX Replicator destination file system, and the clients see the virtually provisioned maximum size on the source file system. Table 8 on page 60 describes this client view.
Table 8. Client view of VNX Replicator source and destination file systems

Clients see:
- Destination file system: Actual size
- Source file system without thin provisioning: Actual size
- Source file system with thin provisioning: Maximum size

Using VNX Replicator contains more information on using automatic file system extension with VNX Replicator.
AVM storage pool considerations
Consider these AVM storage pool characteristics:
- System-defined storage pools have a set of rules that govern how the system manages storage. User-defined storage pools have attributes that you define for each storage pool.
- All system-defined storage pools (virtual and non-virtual) are dynamic. They acquire and release disk volumes as needed. Administrators can modify the attribute to disable this dynamic behavior.
- User-defined storage pools are not dynamic. They require administrators to explicitly add and remove volumes manually. You are allowed to choose disk volume storage from only one of the attached storage systems when creating a user-defined storage pool.
- Striping never occurs above the storage-pool level.
- The system-defined VNX for block storage pools attempt to use all free disk volumes before maximizing use of the partially used volumes. This behavior is considered to be a greedy attribute. You can modify the attributes that control this greedy behavior in system-defined storage pools. Modify system-defined storage pool attributes on page 110 describes the procedure.
  Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
  Another option is to create user-defined storage pools to group disk volumes to keep system-defined storage pools from using them. Create file systems with user-defined storage pools on page 74 provides more information on creating user-defined storage pools. You can create a storage pool to reserve disk volumes, but never create file systems from that storage pool. You can move the disk volumes out of the reserving user-defined storage pool if you need to use them for file system creation or other purposes.
• The system-defined Symmetrix storage pools maximize the use of disk volumes acquired by the storage pool before consuming more. This behavior is considered to be a "not greedy" attribute.
• When creating a user-defined storage pool, the default is a "not greedy" behavior. The system uses space from the user-defined storage pool's existing volume members, in the order that the volumes were added to the pool, to create a new file system or extend an existing file system.
• If a greedy attribute is set when creating a user-defined storage pool, the pool uses space from the least-used member volumes to create a new file system. When more than one least-used member volume is available, AVM selects the member volume that contains the most disk volumes.
  For example, if one member volume contains four disk volumes and another member volume contains eight disk volumes, AVM selects the one with eight disk volumes. If two or more member volumes have the same number of disk volumes, AVM selects the one with the lowest ID.
• AVM does not perform the storage system operations necessary to create new disk volumes; it consumes only existing disk volumes. You might need to add LUNs to your storage system and configure new disk volumes, especially if you create user-defined storage pools.
• A file system might use many or all of the disk volumes that are members of a system-defined storage pool.
• You can use only one type of disk volume in a user-defined storage pool. For example, if you create a storage pool and then add a disk volume based on Capacity drives to the pool, add only other Capacity-based disk volumes to the pool to extend it. See the sketch at the end of this list.
• SnapSure checkpoint SavVols might use the same disk volumes as the file system of which the checkpoints are made.
• By default, a checkpoint SavVol is sliced so that a SavVol auto-extension will not use space unnecessarily.
• AVM does not add members to the storage pool if the amount of space requested is more than the sum of the unused and available disk volumes, but less than or equal to the available space in an existing system-defined storage pool.
• Some AVM system-defined storage pools designed for use with VNX for block storage systems acquire pairs of storage-processor balanced disk volumes with the same RAID type, disk count, and size. When reserving disk volumes from a VNX for block storage system, it is important to reserve them in similar pairs. Otherwise, AVM might not find matching pairs, and the number of usable disk volumes might be more limited than was intended.
• To guarantee consistent file system performance, configure a storage pool on the VNX for block system that uses the same data services as the AVM pool to which it maps on the VNX for file.
• Because of the minimum storage requirement restriction for a VNX for block system's storage pool, if you must create a heterogeneous pool that uses multiple data services to satisfy different use cases, do the following:
  1. Use a heterogeneous system-defined AVM pool to create user-defined pools that group disk volumes with matching data service policies.
  2. Create file systems from the user-defined pools.
  For example, for one use case you might need to create both a regular file system and an archive file system.
• The system allows you to control the data service configuration at the file system level. By default, disk volumes are not sliced unless you explicitly select that setting at file system creation time. By not slicing a disk volume, the system guarantees that a file system will not share disks with other file systems. There is a 1:n relationship between the file system and the disk volumes, where n is greater than or equal to 1.
• You can go to the VNX for block or Symmetrix storage system and modify the data service policies of the set of LUNs underneath the same file system to change the data policy of the file system. This option may cause the file system that is created to exceed the specified storage capacity because the file system size will be disk volume-aligned.
• Choose the LUN size on the VNX for block or Symmetrix system storage pool carefully. The pool-based LUN overhead is approximately 2 percent of the file system capacity plus 3 GB, for both a Direct Logical Unit (DLU) and a fully provisioned Thin LUN (TLU). A worked example follows this list.
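As a worked example of the overhead figure above (an illustrative calculation, not a measured value): a 1 TB file system would carry roughly 0.02 × 1024 GB + 3 GB, or about 23.5 GB, of pool-based LUN overhead, so size the underlying LUNs with at least that much headroom.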
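As referenced earlier in this list, you can grow or drain a reserving user-defined storage pool with the nas_pool command. A minimal sketch follows; the pool and disk volume names are illustrative, and the exact option syntax should be confirmed in the VNX Command Line Interface Reference for File. To add disk volumes of the same type to the reserving pool, type:

$ nas_pool -xtend reserved_pool -volumes d130,d131

To release a disk volume for file system creation or other purposes, type:

$ nas_pool -shrink reserved_pool -volumes d128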
Create file systems with AVM on page 70 provides more information on creating file
systems by using the different pool types.
Related information on page 22 provides a list of related documentation.
Upgrading VNX for file software
When you upgrade to VNX for file version 7.0 software, all system-defined storage pools
are available.
The system-defined storage pools for the currently attached storage systems with available
space appear in the output when you list storage pools, even if AVM is not used on the
system. If you have not used AVM in the past, these storage pools are containers and do
not consume storage until you create a file system by using AVM.
If you have used AVM in the past, in addition to the system-defined storage pools, any
user-defined storage pools also appear when you list the storage pools.
Note: Automatic file system extension is interrupted during software upgrades. If automatic file
system extension is enabled, the Control Station continues to capture HWM events. However,
actual file system extension does not start until the upgrade process completes.
File system and automatic file system extension considerations
Before implementing AVM, consider your environment, most important file systems,
file system sizes, and expected growth. Follow these general guidelines when planning
to use AVM in your environment:
• Create the most important and most-used file systems first. AVM system-defined storage pools use free disk volumes to create a new file system. For example, suppose there are 40 disk volumes on the storage system. AVM takes eight disk volumes, creates stripe1, slice1, and metavolume1, and then creates the file system ufs1:
  • Assuming the default behavior of the system-defined storage pool, AVM uses eight more disk volumes, creates stripe2, and builds file system ufs2, even though there is still space available in stripe1.
  • File systems ufs1 and ufs2 are on different sets of disk volumes and do not share any LUNs, for more efficient access. See the sketch at the end of this list.
• If you plan to create user-defined storage pools, consider LUN selection and striping, and do your own disk volume aggregation before putting the volumes into the storage pool. This ensures that the file systems are not built on a single LUN. Disk volume aggregation is a manual process for user-defined storage pools.
• For file systems with sequential I/O, two LUNs per file system are generally sufficient. If you use AVM for file systems with sequential I/O, consider modifying the attribute of the storage pool to restrict slicing.
• If you want to control the data service configuration at the file system level but still use automatic extension and thin provisioning, do one of the following:
  • Create a VNX for block or Symmetrix storage pool with thin LUNs, and then create file systems from that pool.
  • Set the slice option to Yes if you want to enable file system auto extension.
• Automatic file system extension does not alleviate the need for appropriate planning. Create the file systems with adequate space to accommodate the estimated usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot extend the file system quickly enough to accommodate the usage, the automatic extension fails. Known problems and limitations on page 126 provides more information on how to identify and recover from this issue.
Note: When planning file system size and usage, consider setting the HWM, so that the free space
above the HWM setting is a certain percentage above the largest average file for that file system.
• Use of AVM with a single-enclosure VNX for block storage system could limit performance because AVM does not stripe between or across RAID group 0 and other RAID groups. This is the only case where striping across 4+1 RAID 5 and 8+1 RAID 5 is suggested.
• If you want to set a stripe size that is different from the default stripe size for system-defined storage pools, create a user-defined storage pool. Create file systems with user-defined storage pools on page 74 provides more information.
• Take disk contention into account when creating a user-defined pool.
• If you have disk volumes to reserve so that the system-defined storage pools do not use them, consider creating a user-defined storage pool and adding those specific volumes to it.
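A minimal sketch of the first guideline in this list (the pool name and sizes are illustrative, and the grep filter is ordinary shell usage on the Control Station, not part of the VNX CLI):

$ nas_fs -name ufs1 -create size=100G pool=clar_r5_performance
$ nas_fs -name ufs2 -create size=100G pool=clar_r5_performance
$ nas_fs -info ufs1 | grep disks
$ nas_fs -info ufs2 | grep disks

With the default system-defined pool behavior, the disks lines for ufs1 and ufs2 should list disjoint sets of disk volumes.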
Chapter 3: Configuring

The tasks to configure volumes and file systems with AVM are as follows:
• Configure disk volumes on page 66
• Create file systems with AVM on page 70
• Extend file systems with AVM on page 84
• Create file system checkpoints with AVM on page 100
Configure disk volumes
System network servers that are gateway network-attached storage (NAS) systems and that
connect to Symmetrix and VNX for block storage systems are as follows:
• VNX VG2
• VNX VG8
The gateway system stores data on VNX for block user LUNs or Symmetrix hypervolumes.
If the user LUNs or hypervolumes are not configured correctly on the array, AVM and the
Unisphere for File software cannot be used to manage the storage.
Typically, an EMC Customer Support Representative does the initial setup of disk volumes
on these gateway storage systems.
However, if your VNX gateway system is attached to a VNX for block storage system and
you want to add disk volumes to the configuration, use the procedures that follow:
1. Use the Unisphere for Block software or the VNX for block CLI to create VNX for block
user LUNs.
2. Use either the Unisphere for File software or the VNX for file CLI to make the new user
LUNs available to the VNX for file as disk volumes.
The user LUNs must be created before you create file systems.
To add user LUNs, you must be familiar with the following:
• Unisphere for Block software or the VNX for block CLI.
• The process of creating RAID groups and user LUNs for the VNX for file volumes.
The documentation for Unisphere for Block and VNX for block CLI describes how to create
RAID groups and user LUNs.
If the disk volumes are configured by EMC experts, go to Create file systems with AVM on
page 70.
Provide storage from a VNX or legacy CLARiiON system to a gateway system
1. Create RAID groups and LUNs (as needed for VNX for file volumes) by using the Unisphere software or VNX for block CLI:
   • Always create the user LUNs in balanced pairs, one owned by SP A and one owned by SP B. The paired LUNs must be the same size.
   • FC or SAS disks must be configured as RAID 1/0, RAID 5, or RAID 6. The paired LUNs do not need to be in the same RAID group but should be of the same RAID type. RAID groups and storage characteristics on page 33 lists the valid RAID group and storage array combinations. Gateway models use the same combinations as the NS-80 (for CX3 storage systems) or the NS-960 (for CX4 storage systems).
   • SATA disks must be configured as RAID 1/0, RAID 5, or RAID 6. All LUNs in a RAID group must belong to the same SP. Create pairs by using LUNs from two RAID groups. RAID groups and storage characteristics on page 33 lists the valid RAID group and storage array combinations. Gateway models use the same combinations as the NS-80 (for CX3 storage systems) or the NS-960 (for CX4 storage systems).
   • The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.
   Use these settings when creating RAID group user LUNs:
   • RAID Type: RAID 1/0, RAID 5, or RAID 6 for FC or SAS disks, and RAID 1/0, RAID 5, or RAID 6 for SATA disks
   • LUN ID: Select the first available value
   • Rebuild Priority: ASAP
   • Verify Priority: ASAP
   • Enable Read Cache: Selected
   • Enable Write Cache: Selected
   • Enable Auto Assign: Cleared (off)
   • Number of LUNs to Bind: 2
   • Alignment Offset: 0
   • LUN size: Must not exceed 14 TB

   Note: If you create 4+1 RAID 3 LUNs, the Number of LUNs to Bind value should be 1.
2. Create a storage group to which to add the LUNs for the gateway system.
   • Using the Unisphere software:
     a. Select Hosts > Storage Groups.
     b. Click Create.
   • Using the VNX for block CLI, type the following command:
     naviseccli -h <system> storagegroup -create -gname <groupname>
3. Ensure that you add the LUNs to the gateway system's storage group. Set the HLU to 16 or greater.
   • Using the Unisphere software:
     a. Select Hosts > Storage Groups.
     b. In Storage Group Name, select the storage group that you created in step 2.
     c. Click Connect LUNs.
     d. Click the LUNs tab.
     e. Expand SP A and SP B.
     f. Select the LUNs to add and click Add.
   • Using the VNX for block CLI, type the following command:
     naviseccli -h <system> storagegroup -addhlu -gname ~filestorage -hlu <HLU number> -alu <LUN number>
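   For example (the system address and LUN numbers are illustrative only), to present LUN 25 to the ~filestorage storage group as HLU 16, type:
     naviseccli -h 192.168.1.50 storagegroup -addhlu -gname ~filestorage -hlu 16 -alu 25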
4. Perform one of these steps to make the new user LUNs available to the VNX for file:
   • Using the Unisphere for File software:
     a. Select Storage > Storage Configuration > File Systems.
     b. From the task list, select File Storage > Rescan Storage Systems.
   • Using the VNX for file CLI, type the following command:
     nas_diskmark -mark -all

   Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might cause data loss or unavailability.
Create pool-based provisioning for file storage systems
1. Create storage pools and LUNs as needed for VNX for file volumes.
   Use these settings when creating user LUNs for use with mapped pools:
   • LUN ID: Use the default
   • LUN Name: Use the default or supply a name
   • Number of LUNs to create: 2
   • Enable Auto Assign: Cleared (Off)
   • Alignment Offset: 0
   • LUN Size: Must not exceed 16 TB
2. Ensure that you add the LUNs to the file system's storage group. Set the HLU to 16 or greater.
   • Using the Unisphere software:
     a. Select Hosts > Storage Groups.
     b. In Storage Group Name, select ~filestorage.
     c. Click Connect LUNs.
     d. Click LUNs.
     e. Expand SP A and SP B.
     f. Select the LUNs you want to add and click Add.
   • Using the VNX for block CLI, type the following command:
     naviseccli -h <system> storagegroup -addhlu -gname ~filestorage -hlu <HLU number> -alu <LUN number>
3. Use one of these methods to make the new user LUNs available to the VNX for file:
   • Using the Unisphere software:
     a. Select Storage > Storage Configuration > File Systems.
     b. From the task list, select File Storage > Rescan Storage Systems.
   • Using the VNX for file CLI, type the following command:
     nas_diskmark -mark -all

   Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might cause data loss or unavailability.
Add disk volumes to an integrated system
Configure unused or new disk devices on a VNX for block storage system by using the Disk
Provisioning Wizard for File. This wizard is available only for integrated VNX for file models
(NX4 and NS non-gateway systems excluding NS80), including Fibre Channel-enabled
models, attached to a single VNX for block storage system.
Note: For VNX systems, Advanced Data Service Policy features such as FAST and compression are
supported on pool-based LUNs only. They are not supported on RAID-based LUNs.
To open the Disk Provisioning Wizard for File in the Unisphere software:
1. Select Storage > Storage Configuration > Storage Pools.
2. From the task list, select Wizards > Disk Provisioning Wizard for File.
Note: To use the Disk Provisioning Wizard for File, you must log in to Unisphere by using the global
sysadmin user account or by using a user account which has privileges to manage storage.
An alternative to the Disk Provisioning Wizard for File is available by using the VNX for
file CLI at /nas/sbin/setup_clariion. This alternative is not available for unified VNX systems.
The script performs the following actions:
• Provisions the disks on integrated (non-Performance) VNX for block storage systems when there are unbound disks to configure. This script binds the data LUNs on the xPEs and DAEs, and makes them accessible to the Data Movers.
• Ensures that your RAID groups and LUN settings are appropriate for your VNX for file server configuration.
The Unisphere for File software supports only the array templates for legacy EMC CLARiiON
CX and CX3 storage systems. CX4 and VNX systems must use the User_Defined mode
with the /nas/sbin/setup_clariion CLI script.
The setup_clariion script allows you to configure VNX for block storage systems on a
shelf-by-shelf basis by using predefined configuration templates. For each enclosure (xPE
or DAE), the script examines your specific hardware configuration and gives you a choice
of appropriate templates. You can mix combinations of RAID configurations on the same
storage system. The script then combines the shelf templates into a custom, User_Defined
array template for each VNX for block system, and then configures your array.
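A minimal sketch of invoking the script (the root prompt is shown because the script lives in /nas/sbin; the required privilege level is an assumption here):

# /nas/sbin/setup_clariion

The script then examines each enclosure and prompts for a choice among the appropriate templates, as described above.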
Create file systems with AVM
This section describes the procedures to create a file system by using AVM storage pools,
and also explains how to create file systems by using the automatic file system extension
feature.
You can enable automatic file system extension on new or existing file systems if the file
system has an associated AVM storage pool. When you enable automatic file system
extension, use the nas_fs command options to adjust the HWM value, set a maximum file
size to which the file system can be extended, and enable thin provisioning. Create file
systems with automatic file system extension on page 81 provides more information.
You can create file systems by using storage pools with automatic file system extension
enabled or disabled. Specify the storage system from which to allocate space for the type of
storage pool that is being created.
Choose any of these procedures to create file systems:
• Create file systems with system-defined storage pools on page 72
  Allows you to create file systems without having to also create the underlying volume structure.
• Create file systems with user-defined storage pools on page 74
  Allows more administrative control of the underlying volumes and placement of the file system. Use these user-defined storage pools to prevent the system-defined storage pools from using certain volumes.
• Create file systems with automatic file system extension on page 81
  Allows you to create a file system that automatically extends when it reaches a certain threshold by using space from either a system-defined or a user-defined storage pool.
Create file systems with system-defined storage pools
When you create a file system by using the system-defined storage pools, it is not necessary
to create volumes before setting up the file system. AVM allocates space to the file system
from the specified storage pool on the storage system associated with that storage pool.
AVM automatically creates any required volumes when it creates the file system. This process
ensures that the file system and its extensions are created from the same type of storage,
with the same cost, performance, and availability characteristics.
The storage system name appears either as alphabetic characters followed by a set of integers, or as a set of integers only:
• VNX for block storage systems display as a prefix of alphabetic characters before a set of integers, for example, FCNTR074200038-0019.
• Symmetrix storage systems display as a set of integers, for example, 002804000190-003C.
To create a file system with system-defined storage pools:
1. Obtain the list of available system-defined storage pools and mapped storage pools by
typing:
$ nas_pool -list
Output:
id in_use acl name storage_system
3 n 0 clar_r5_performance FCNTR074200038
40 y 0 TP1 FCNTR074200038
41 y 0 FP1 FCNTR074200038
2. Display the size of a specific storage pool by using this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size of the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance
Output:
id = 3
name = clar_r5_performance
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
Note: To display the size of all storage pools, use the -all option instead of the <name> option.
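For example, to display the sizes of all storage pools at once, type:
$ nas_pool -size -all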
3. Obtain the system name of an attached Symmetrix storage system by typing:
$ nas_storage -list
Output:
id acl name serial number
1 0 000183501491 000183501491
4. Obtain information of a specific Symmetrix storage system in the list by using this
command syntax:
$ nas_storage -info <system_name>
where:
<system_name> = name of the storage system
Example:
To obtain information about the Symmetrix storage system 000183501491, type:
$ nas_storage -info 000183501491
Output:
type num slot ident stat scsi vols ports p0_stat p1_stat p2_stat p3_stat
R1 1 1 RA-1A Off NA 0 1 Off NA NA NA
DA 2 2 DA-2A On WIDE 25 2 On Off NA NA
DA 3 3 DA-3A On WIDE 25 2 On Off NA NA
SA 5 5 SA-5A On ULTRA 0 2 On On NA NA
SA 12 12 SA-12A On ULTRA 0 2 Off On NA NA
DA 14 14 DA-14A On WIDE 27 2 On Off NA NA
DA 15 15 DA-15A On WIDE 26 2 On Off NA NA
R1 16 16 RA-16A On NA 0 1 On NA NA NA
R2 17 1 RA-1B Off NA 0 1 Off NA NA NA
DA 18 2 DA-2B On WIDE 26 2 On Off NA NA
DA 19 3 DA-3B On WIDE 27 2 On Off NA NA
SA 21 5 SA-5B On ULTRA 0 2 On On NA NA
SA 28 13 SA-12B On ULTRA 0 2 On On NA NA
DA 30 14 DA-14B On WIDE 25 2 On Off NA NA
DA 31 15 DA-15B On WIDE 25 2 On Off NA NA
R2 32 16 RA-16B On NA 0 1 On NA NA NA
5. Create a file system by size with a system-defined storage pool by using this command
syntax:
$ nas_fs -name <fs_name> -create size=<size> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing
<number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or
in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.
<system_name> = name of the storage system from which space for the file system is
allocated.
Example:
To create a file system ufs1 of size 10 GB with a system-defined storage pool, type:
$ nas_fs -name ufs1 -create size=10G pool=symm_std storage=00018350149
Note: To mirror the file system with SRDF, you must specify the symm_std_rdf_src storage pool.
This directs AVM to allocate space from volumes configured when installing for remote mirroring
by using SRDF. Using SRDF/S with VNX for Disaster Recovery contains more information.
Output:
id = 1
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = avm1
pool = symm_std
member_of =
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication= off
stor_devs = 00018350149
disks = d20,d12,d18,d10
Note: The VNX Command Line Interface Reference for File contains information on the options available
for creating a file system with the nas_fs command.
Create file systems with user-defined storage pools
The AVM system-defined storage pools are available for use with the VNX for file. If you
require more manual control than the system-defined storage pools allow, create a
user-defined storage pool and then create the file system by using that pool.
Note: Create a user-defined storage pool and define its attributes to reserve disk volumes so that your
system-defined storage pools cannot use them.
Before you begin
Prerequisites include:
• A user-defined storage pool can be created either by using manual volume management or by letting AVM create the storage pool with a specified size. If you use manual volume management, you must first stripe the volumes together and add the resulting volumes to the storage pool you create. Managing Volumes and File Systems for VNX Manually describes the steps to create and manage volumes.
• You cannot use disk volumes you have reserved for other purposes. For example, you cannot use any disk volumes reserved for a system-defined storage pool. Controlling Access to System Objects on VNX contains more information on access control levels.
• AVM system-defined storage pools designed for use with VNX for block storage systems acquire pairs of disk volumes that are storage-processor balanced and use the same RAID type, disk count, and size. Modify system-defined and user-defined storage pool attributes on page 109 provides more information.
• When creating a user-defined storage pool to reserve disk volumes from a VNX for block storage system, use disk volumes that are storage-processor balanced and have the same qualities. Otherwise, AVM cannot find matching pairs, and the number of usable disk volumes might be more limited than was intended.
To create a file system with a user-defined storage pool:
• Create a user-defined storage pool by volumes on page 76
• Create a user-defined storage pool by size on page 76
• Create the file system on page 78
• Create file systems with automatic file system extension on page 81
• Create file systems with the automatic file system extension option enabled on page 82
Create a user-defined storage pool by volumes
To create a user-defined storage pool (from which space for the file system is allocated) by
volumes, add volumes to the storage pool and define the storage pool attributes.
Action
To create a user-defined storage pool by volumes, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -description <desc> -volumes
<volume_name>[,<volume_name>,...] -default_slice_flag {y|n}
where:
<name> = name of the storage pool.
<acl> = designates an access control level for the new storage pool. Default value is 0.
<desc> = assigns a comment to the storage pool. Type the comment within quotes.
<volume_name> = names of the volumes to add to the storage pool. Can be a metavolume, slice volume, stripe volume,
or disk volume. Use a comma to separate each volume name.
-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the
storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot be sliced,
and volumes specified cannot be built on a slice.
Example:
To create a user-defined storage pool named marketing with a description, with the disk members d126, d127, d128, and
d129 specified, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for marketing" -volumes
d126,d127,d128,d129 -default_slice_flag y
Output
id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = CLSTD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Create a user-defined storage pool by size
To create a user-defined storage pool (from which space for the file system is allocated) by
size, specify a template pool, size of the pool, minimum stripe size, and number of stripe
members.
Action
To create a user-defined storage pool by size, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -description <desc>
-default_slice_flag {y|n} -size <integer>[M|G|T] -storage <system_name>
-template <system_pool_name> -num_stripe_members <num_stripe_mem>
-stripe_size <num>
where:
<name> = name of the storage pool.
<acl> = designates an access control level for the new storage pool. Default value is 0.
<desc> = assigns a comment to the storage pool. Type the comment within quotes.
-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed
from the storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot
be sliced, and volumes specified cannot be built on a slice.
<integer> = size of the storage pool, an integer between 1 and 1024. Specify the size in GB (default) by typing <integer>G
(for example, 250G), in MB by typing <integer>M (for example, 500M), or in TB by typing <integer>T (for example, 1T).
<system_name> = storage system on which one or more volumes will be created and added to the storage pool.
<system_pool_name> = system pool template used to create the user pool. Required when the -size option is specified.
The user pool will be created by using the profile attributes of the specified system pool template.
<num_stripe_mem> = number of stripe members used to create the user pool. Works only when both the -size and
-template options are also specified. It overrides the number of stripe members attribute of the specified system pool
template.
<num> = stripe size used to create the user pool. Works only when both the -size and -template options are also specified.
It overrides the stripe size attribute of the specified system pool template.
Example:
To create a 20 GB user-defined storage pool that is named marketing with a description by using the clar_r5_performance
pool, and that contains 4 stripe members with a stripe size of 32768 KB, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for marketing"
-default_slice_flag y -size 20G -template clar_r5_performance -num_stripe_members
4 -stripe_size 32768
Output
id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = v213
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = CLSTD
server_visibility = server_2,server_3
template_pool = clar_r5_performance
num_stripe_members = 4
stripe_size = 32768
Create the file system
To create a file system, you must first create a user-defined storage pool. Create a user-defined
storage pool by volumes on page 76 and Create a user-defined storage pool by size on page
76 provide more information.
Use this procedure to create a file system by specifying a user-defined storage pool and an
associated storage system:
1. List the storage system by typing:
$ nas_storage -list
Output:
id acl name serial number
1 0 APM00033900125 APM00033900125
2. Get detailed information of a specific attached storage system in the list by using this
command syntax:
$ nas_storage -info <system_name>
where:
<system_name> = name of the storage system
Example:
To get detailed information of the storage system APM00033900125, type:
$ nas_storage -info APM00033900125
Output:
id = 1
arrayname = APM00033900125
name = APM00033900125
model_type = RACKMOUNT
model_num = 630
db_sync_time = 1073427660 == Sat Jan 6 17:21:00 EST 2007
num_disks = 30
num_devs = 21
num_pdevs = 1
num_storage_grps = 0
num_raid_grps = 10
cache_page_size = 8
wr_cache_mirror = True
low_watermark = 70
high_watermark = 90
unassigned_cache = 0
failed_over = False
captive_storage = True
Active Software
Navisphere = 6.6.0.1.43
ManagementServer = 6.6.0.1.43
Base = 02.06.630.4.001
Storage Processors
SP Identifier = A
signature = 926432
microcode_version = 2.06.630.4.001
serial_num = LKE00033500756
prom_rev = 3.00.00
agent_rev = 6.6.0 (1.43)
phys_memory = 3968
sys_buffer = 749
read_cache = 32
write_cache = 3072
free_memory = 115
raid3_mem_size = 0
failed_over = False
hidden = True
network_name = spa
ip_address = 128.221.252.200
subnet_mask = 255.255.255.0
gateway_address = 128.221.252.100
num_disk_volumes = 11 - root_disk root_ldisk d3 d4 d5 d6 d8
d13 d14 d15 d16
SP Identifier = B
signature = 926493
microcode_version = 2.06.630.4.001
serial_num = LKE00033500508
prom_rev = 3.00.00
agent_rev = 6.6.0 (1.43)
phys_memory = 3968
raid3_mem_size = 0
failed_over = False
hidden = True
network_name = OEM-XOO25IL9VL9
ip_address = 128.221.252.201
subnet_mask = 255.255.255.0
gateway_address = 128.221.252.100
num_disk_volumes = 4 - disk7 d9 d11 d12
Note: This is not a complete output.
3. Create the file system from a user-defined storage pool and designate the storage system
on which you want the file system to reside by using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create <volume_name> pool=<pool>
storage=<system_name>
where:
<fs_name> = name of the file system
<type> = type of file system, such as uxfs (default), mgfs, or rawfs
<volume_name> = name of the volume
<pool> = name of the storage pool
<system_name> = name of the storage system on which the file system resides
Example:
To create the file system ufs1 from a user-defined storage pool and designate the
APM00033900125 storage system on which you want the file system ufs1 to reside, type:
$ nas_fs -name ufs1 -type uxfs -create MTV1 pool=marketing storage=APM00033900125
Output:
id = 2
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = MTV1
pool = marketing
member_of = root_avm_fs_group_2
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication= off
stor_devs = APM00033900125-0111
disks = d6,d8,d11,d12
Create file systems with automatic file system extension
Use the -auto_extend option of the nas_fs command to enable automatic file system extension
on a new file system created with AVM. The option is disabled by default.
Note: Automatic file system extension does not alleviate the need for appropriate planning. Create
the file systems with adequate space to accommodate the estimated usage. Allocating too little space
to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt
to extend the file system. If the Control Station cannot adequately extend the file system to accommodate
the usage quickly enough, the automatic extension fails.
If automatic file system extension is disabled and the file system reaches 90 percent full, a
warning message is written to the sys_log. Any action necessary is at the administrator's
discretion.
Note: You do not need to set the maximum size for a newly created file system when you enable
automatic extension. The default maximum size is 16 TB. With automatic extension enabled, even if
the HWM is not set, the file system automatically extends up to 16 TB, if the storage space is available
in the storage pool.
Use this procedure to create a file system by specifying a system-defined storage pool and
a storage system, and enable automatic file system extension.
Action
To create a file system with automatic file system extension enabled, use this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<size> pool=<pool>
storage=<system_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system.
<type> = type of file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G),
in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool from which to allocate space to the file system.
<system_name> = name of the storage system associated with the storage pool.
Example:
To enable automatic file system extension on a new 10 GB file system created by specifying a system-defined storage
pool and a VNX for block storage system, type:
$ nas_fs -name ufs1 -type uxfs -create size=10G pool=clar_r5_performance
storage=APM00042000814 -auto_extend yes
Output
id = 434
name = ufs1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v1634
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,thin=no
deduplication= off
stor_devs = APM00042000814-001D,APM00042000814-001A,
APM00042000814-0019,APM00042000814-0016
disks = d20,d12,d18,d10
Create file systems with the automatic file system extension option enabled
When you create a file system with automatic extension enabled, you can set the point at
which the file system automatically extends (the HWM) and the maximum size to which it
can grow. You can also enable thin provisioning at the same time that you create or extend
a file system. Enable automatic file system extension and options on page 91 provides
information on modifying the automatic file system extension options.
If you set the slice=no option on the file system, the actual file system size might become
bigger than the size specified for the file system, which would exceed the maximum size.
In this case, you receive a warning, and the automatic extension fails. If you do not specify
the file system slice option (-option slice=yes|no) when you create the file system, it defaults
to the setting of the storage pool. Modify system-defined and user-defined storage pool
attributes on page 109 provides more information.
Note: If the actual file system size is above the HWM when thin provisioning is enabled, the client
sees the actual file system size instead of the specified maximum size.
Enabling automatic file system extension and thin provisioning options does not
automatically reserve the space from the storage pool for that file system. So that the
automatic extension can succeed, administrators must ensure that adequate storage space
exists. If the available storage is less than the maximum size setting, automatic extension
fails. Users receive an error message when the file system becomes full, even though it
appears that there is free space in the file system. The file system must be manually extended.
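One way to gauge beforehand whether automatic extension can succeed (a minimal sketch; the pool name is illustrative) is to check the pool capacity and compare it with the planned -max_size value:

$ nas_pool -size clar_r5_performance

If the avail_mb and potential_mb values in the output sum to less than the intended maximum size, automatic extension is likely to fail.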
Use this procedure to simultaneously set the automatic file system extension options when
you are creating the file system:
1. Create a file system of a specified size, enable automatic file system extension and thin
provisioning, and set the HWM and the maximum file system size simultaneously by
using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<integer>[T|G|M]
pool=<pool> storage=<system_name> -auto_extend {no|yes} -thin {yes|no}
-hwm <50-99>% -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system.
<type> = type of file system.
<integer> = size requested in MB, GB, or TB. The maximum size is 16 TB.
<pool> = name of the storage pool.
<system_name> = attached storage system on which the file system and storage pool reside.
<50-99> = percentage between 50 and 99, at which you want the file system to
automatically extend.
Example:
To create a 10 MB file system of type UxFS from an AVM storage pool, with automatic
extension enabled, and a maximum file system size of 200 MB, HWM of 90 percent, and
thin provisioning enabled, type:
$ nas_fs -name ufs2 -type uxfs -create size=10M pool=clar_r5_performance
-auto_extend yes -thin yes -hwm 90% -max_size 200M
Output:
id = 27
name = ufs2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size=200M,thin=yes
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
Note: When you enable thin provisioning on a new or existing file system, you must also specify
the maximum size to which the file system can automatically extend.
2. Verify the settings for the specific file system after enabling automatic extension by using
this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To verify the settings for file system ufs2 after enabling automatic extension, type:
$ nas_fs -info ufs2
Output:
id = 27
name = ufs2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms =
ro_vdms =
backups = ufs2_snap1,ufs2_snap2
auto_ext = hwm=90%,max_size=200M,thin=yes
deduplication= off
thin_storage = True
tiering_policy= Auto-tier
compressed = False
mirrored = False
ckpts =
stor_devs = APM00042000814-001D,APM00042000814-001A,
APM00042000814-0019,APM00042000814-0016
disks = d20,d12,d18,d10
You can also set the -hwm and -max_size options on each file system with automatic extension
enabled. When enabling thin provisioning on a file system, you must set the maximum size,
but setting the high water mark is optional.
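For example (illustrative values; ufs3 is the file system from the preceding examples), to enable automatic extension and thin provisioning and set both options in one command, you might type:

$ nas_fs -modify ufs3 -auto_extend yes -thin yes -hwm 85% -max_size 1T

Enable automatic extension, thin provisioning, and all options simultaneously on page 98 documents the supported combinations in detail.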
Extend file systems with AVM
Increase the size of a file system nearing its maximum capacity by extending the file system.
You can:
• Extend the size of a file system to add space if it has an associated system-defined or user-defined storage pool. You can also specify the storage system from which to allocate space. Extend file systems by using storage pools on page 85 provides instructions.
• Extend the size of a file system by adding volumes if the file system has an associated system-defined or user-defined storage pool. Extend file systems by adding volumes to a storage pool on page 87 provides instructions.
• Extend the size of a file system by using a storage pool other than the one used to create the file system. Extend file systems by using a different storage pool on page 89 provides instructions.
• Extend an existing file system by enabling automatic extension on that file system. Enable automatic file system extension and options on page 91 provides instructions.
• Extend an existing file system by enabling thin provisioning on that file system. Enable thin provisioning on page 96 provides instructions.
Managing Volumes and File Systems on VNX Manually contains the instructions to extend file
systems manually.
Extend file systems by using storage pools
All file systems created by using the AVM feature have an associated storage pool.
Extend a file system created with either a system-defined storage pool or a user-defined
storage pool by specifying the size and the name of the file system. AVM allocates storage
from the storage pool to the file system. You can also specify the storage system you want
to use. If you do not specify, the last storage system associated with the storage pool is used.
Note: A file system created by using a mapped storage pool can be extended on its existing pool or by
using a compatible mapped storage pool that contains the same disk type.
Use this procedure to extend a file system by size:
1. Check the file system configuration to confirm that the file system has an associated
storage pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Note: If you see a storage pool defined in the output, the file system was created with AVM and
has an associated storage pool.
Example:
To check the file system configuration to confirm that file system ufs1 has an associated
storage pool, type:
$ nas_fs -info ufs1
Output:
id = 27
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = FP1
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
2. Extend the size of the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing
<number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or
in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.
<system_name> = name of the storage system. If you do not specify a storage system, the
default storage system is the one on which the file system resides. If the file system spans
multiple storage systems, the default is any one of the storage systems on which the file
system resides.
Note: The first time you extend the file system without specifying a storage pool, the default storage
pool is the one used to create the file system. If you specify a storage pool that is different from
the one used to create the file system, the next time you extend this file system without specifying
a storage pool, the last pool in the output list is the default.
Example:
To extend the size of file system ufs1 by 10 MB, type:
$ nas_fs -xtend ufs1 size=10M pool=clar_r5_performance storage=APM00023700165
Output:
id = 8
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d13,d19,d25,d30,d31,d32,d33
3. Check the size of the file system after extending it to confirm that the size increased by
using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of file system ufs1 after extending it to confirm that the size increased,
type:
$ nas_fs -size ufs1
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Extend file systems by adding volumes to a storage pool
You can extend a file system manually by specifying the volumes to add.
Note: With user-defined storage pools, you can manually create the underlying volumes, including
striping, before adding the volume to the storage pool. Managing Volumes and File Systems on VNX
Manually describes the procedures needed to perform these tasks before creating or extending the file
system.
If you do not specify a storage system when extending the file system, the default storage
system is the one on which the file system resides. If the file system spans multiple storage
systems, the default is any one of the storage systems on which the file system resides.
Use this procedure to extend the file system by adding volumes to the same user-defined
storage pool that was used to create the file system:
1. Check the configuration of the file system to confirm the associated user-defined storage
pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the configuration of file system ufs3 to confirm the associated user-defined
storage pool, type:
$ nas_fs -info ufs3
Output:
id = 27
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v104
pool = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
Note: The user-defined storage pool used to create the file system is defined in the output as
pool=marketing.
2. Add volumes to extend the size of a file system by using this command syntax:
$ nas_fs -xtend <fs_name> <volume_name> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system.
<volume_name> = name of the volume to add to the file system.
<pool> = storage pool associated with the file system. It can be user-defined or
system-defined.
<system_name> = name of the storage system on which the file system resides.
Example:
To extend file system ufs3, type:
$ nas_fs -xtend ufs3 v121 pool=marketing storage=APM00023700165
Output:
id = 10
name = ufs3
acl = 0
in_use = False
type = uxfs
volume = v121
pool = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d8,d13,d14
Note: The next time you extend this file system without specifying a storage pool, the last pool in
the output list is the default.
3. Check the size of the file system after extending it to confirm that the size increased by
using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of file system ufs3 after extending it to confirm that the size increased,
type:
$ nas_fs -size ufs3
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Extend file systems by using a different storage pool
You can use more than one storage pool to extend a file system. Ensure that the storage
pools have space allocated from the same storage system to prevent the file system from
spanning more than one storage system.
Note: A file system created by using a mapped storage pool can be extended on its existing pool or by
using a compatible mapped storage pool that contains the same disk type.
Use this procedure to extend the file system by using a storage pool other than the one used
to create the file system:
1. Check the file system configuration to confirm that it has an associated storage pool by
using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the file system configuration to confirm that file system ufs2 has an associated
storage pool, type:
$ nas_fs -info ufs2
Output:
id = 9
name = ufs2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored = False
ckpts =
Note: The storage pool used earlier to create or extend the file system is shown in the output as
associated with this file system.
2. Optionally, extend the file system by using a storage pool other than the one used to
create the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool>
where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing
<number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or
in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.
Example:
To extend file system ufs2 by using a storage pool other than the one used to create the
file system, type:
$ nas_fs -xtend ufs2 size=10M pool=clar_r5_economy
Output:
id = 9
name = ufs2
acl = 0
in_use = False
type = uxfs
volume = v123
pool = clar_r5_performance,clar_r5_economy
member_of = root_avm_fs_group_3,root_avm_fs_group_4
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00033900165-0112
disks = d7,d13,d19,d25
Note: The storage pools used to create and extend the file system now appear in the output. There
is only one storage system from which space for these storage pools is allocated.
3. Check the file system size after extending it to confirm the increase in size by using this
command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of file system ufs2 after extending it to confirm the increase in size,
type:
$ nas_fs -size ufs2
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Enable automatic file system extension and options
You can automatically extend an existing file system created with AVM system-defined or
user-defined storage pools. The file system automatically extends by using space from the
storage system and storage pool with which the file system is associated.
If you set the slice=no option on the file system, the actual file system size might become
bigger than the size specified for the file system, which would exceed the maximum size.
In this case, you receive a warning, and the automatic extension fails. If you do not specify
the file system slice option (-option slice=yes|no) when you create the file system, it defaults
to the setting of the storage pool.
Modify system-defined and user-defined storage pool attributes on page 109 describes the
procedure to modify the default_slice_flag attribute on the storage pool.
Use the -modify option to enable automatic extension on an existing file system. You can
also set the HWM and maximum size.
To enable automatic file system extension and options:
• Enable automatic file system extension on page 92
• Set the HWM on page 94
• Set the maximum file system size on page 95
You can also enable thin provisioning at the same time that you create or extend a file system.
Enable thin provisioning on page 96 describes the procedure to enable thin provisioning
on an existing file system.
Enable automatic extension, thin provisioning, and all options simultaneously on page 98
describes the procedure to simultaneously enable automatic extension, thin provisioning,
and all options on an existing file system.
Enable automatic file system extension
If the HWM or maximum size is not set, and if there is space available, the file system
automatically extends up to the default maximum size of 16 TB when the file system reaches
the default HWM of 90 percent.
An error message appears if you try to enable automatic extension on a file system that was
created manually.
Note: The HWM is 90 percent by default when you enable automatic file system extension.
Action
To enable automatic extension on an existing file system, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system
Example:
To enable automatic extension on the existing file system ufs3, type:
$ nas_fs -modify ufs3 -auto_extend yes
Output
id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,thin=no
stor_devs = APM00042000818-001F,APM00042000818-001D
APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
Set the HWM
With automatic file system extension enabled on an existing file system, use the -hwm option
to set a threshold. To specify a threshold, type an integer between 50 and 99 percent. The
default is 90 percent.
If the HWM or maximum size is not set, the file system automatically extends up to the
default maximum size of 16 TB when the file system reaches the default HWM of 90 percent,
if the space is available. The value for the maximum size, if specified, has an upper limit of
16 TB.
Action
To set the HWM on an existing file system, with automatic file system extension enabled, use this command syntax:
$ nas_fs -modify <fs_name> -hwm <50-99>%
where:
<fs_name> = name of the file system
<50-99> = an integer representing the file system usage point at which you want it to automatically extend
Example:
To set the HWM to 85 percent on the existing file system ufs3, with automatic extension already enabled, type:
$ nas_fs -modify ufs3 -hwm 85%
Output
id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,thin=no
stor_devs = APM00042000818-001F,APM00042000818-001D,
APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
Set the maximum file system size
Use the -max_size option to specify a maximum size to which a file system can grow. To
specify the maximum size, type an integer and specify T for TB, G for GB (default), or M for
MB.
To convert gigabytes to megabytes, multiply the number of gigabytes by 1024. To convert
terabytes to gigabytes, multiply the number of terabytes by 1024. For example, to convert
450 gigabytes to megabytes, 450 x 1024 = 460800 MB.
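A quick way to verify these conversions at the Control Station shell is ordinary shell
arithmetic; these are illustrative shell commands, not VNX commands:
$ echo $((450 * 1024))   # 450 GB expressed in MB: 460800
$ echo $((16 * 1024))    # 16 TB expressed in GB: 16384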
When you enable automatic file system extension, the file system automatically extends up
to the default maximum size of 16 TB. Set the HWM at which you want the file system to
automatically extend. If the HWM is not set, the file system automatically extends up to 16
TB when the file system reaches the default HWM of 90 percent, if the space is available.
Action
To set the maximum file system size with automatic file system extension already enabled, use this command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<integer> = maximum size requested in MB, GB, or TB
Example:
To set the maximum file system size on the existing file system, type:
$ nas_fs -modify ufs3 -max_size 16T
Output
id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,max_size=16769024M,thin=no
stor_devs = APM00042000818-001F,APM00042000818-001D,
APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
Enable thin provisioning
You can also enable thin provisioning at the same time that you create or extend a file system.
Use the -thin option to enable thin provisioning. You must also specify the maximum size
to which the file system should automatically extend. An error message appears if you
attempt to enable thin provisioning and do not set the maximum size. Set the maximum file
system size on page 95 describes the procedure to set the maximum file system size.
The upper limit for the maximum size is 16 TB. The maximum size you set is the file system
size that is presented to users, if the maximum size is larger than the actual file system size.
Note: Enabling the automatic file system extension and thin provisioning options does not automatically
reserve the space from the storage pool for that file system. Administrators must ensure that adequate
storage space exists, so that the automatic extension operation can succeed. If the available storage is
less than the maximum size setting, automatic extension fails. Users receive an error message when
the file system becomes full, even though it appears that there is free space in the file system. The file
system must be manually extended.
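As a minimal sketch of that check, assuming the example pool name used elsewhere in this
chapter, you might compare the pool's free and potential space against the planned maximum
size before enabling thin provisioning:
$ nas_pool -size clar_r5_performance
# Compare avail_mb plus potential_mb against the planned -max_size value;
# if the combined space is smaller, automatic extension can fail when the
# file system fills.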
Enable thin provisioning on the source file system when the feature is used in a replication
situation. With thin provisioning enabled, NFS, CIFS, and FTP clients see the virtually
provisioned maximum size of the Replicator source file system but the actual size of the
Replicator destination file system. Interoperability considerations on page
59 provides additional information.
Action
To enable thin provisioning with automatic extension enabled on the file system, use this command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M] -thin {yes|no}
where:
<fs_name> = name of the file system
<integer> = size requested in MB, GB, or TB
Example:
To enable thin provisioning on the existing file system ufs3, type:
$ nas_fs -modify ufs3 -max_size 16T -thin yes
Output
id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,max_size=16769024M,thin=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,
APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
Enable automatic extension, thin provisioning, and all options
simultaneously
Note: An error message appears if you try to enable automatic file system extension on a file system
that was created without using a storage pool.
Action
To simultaneously enable automatic file system extension and thin provisioning on an existing file system, and to set the
HWM and the maximum size, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes} -thin {yes|no}
-hwm <50-99>% -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<50-99> = an integer that represents the file system usage point at which you want it to automatically extend
<integer> = size requested in MB, GB, or TB
Example:
To modify a UxFS to enable automatic extension and thin provisioning, and to set a maximum file system size of 16 TB
with an HWM of 90 percent, type:
$ nas_fs -modify ufs4 -auto_extend yes -thin yes -hwm 90% -max_size 16T
Output
id = 29
name = ufs4
acl = 0
in_use = False
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size=16769024M,thin=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,
APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
Verify the maximum size of the file system
Automatic file system extension fails when the file system reaches the maximum size.
Action
To force an extension to determine whether the maximum size has been reached, use this command syntax:
$ nas_fs -xtend <fs_name> size=<size>
where:
<fs_name> = name of the file system
<size> = size to extend the file system by, in GB, MB, or TB
Example:
To force an extension to determine whether the maximum size has been reached, type:
$ nas_fs -xtend ufs1 size=4M
Output
id = 759
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v2459
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_4
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size=16769024M (reached),thin=yes
stor_devs = APM00041700549-0018
disks = d10
disk=d10 stor_dev=APM00041700549-0018 addr=c16t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c32t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c0t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c48t1l8 server=server_4
Create file system checkpoints with AVM
Use either AVM system-defined or user-defined storage pools to create file system
checkpoints. Specify the storage system where the file system checkpoint should reside.
Use this procedure to create the checkpoint by specifying a storage pool and storage system:
Note: You can specify the storage pool for the checkpoint SavVol only when there are no existing
checkpoints of the PFS.
1. Obtain the list of available storage systems by typing:
$ nas_storage -list
Note: To obtain more detailed information on the storage system and associated names, use the
-info option instead.
2. Create the checkpoint by using this command syntax:
$ fs_ckpt <fs_name> -name <name> -Create [size=<integer>[T|G|M|%]] pool=<pool>
storage=<system_name>
where:
<fs_name> = name of the file system for which you want to create a checkpoint.
<name> = name of the checkpoint.
<integer> = amount of space to allocate to the checkpoint. Type the size in TB, GB, or
MB, or as a percentage.
<pool> = name of the storage pool.
<system_name> = storage system on which the file system checkpoint resides.
Note: Thin provisioning is not supported with checkpoints. NFS, CIFS, and FTP clients cannot see
the virtually provisioned maximum size of a SnapSure checkpoint file system.
Example:
To create the checkpoint ckpt1, type:
$ fs_ckpt ufs1 -name ckpt1 -Create size=10G pool=clar_r5_performance
storage=APM00023700165
Output:
id = 1
name = ckpt1
acl = 0
in_use = False
type = uxfs
volume = V126
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d8
Chapter 4: Managing
The tasks to manage AVM storage pools are:
• List existing storage pools on page 104
• Display storage pool details on page 105
• Display storage pool size information on page 106
• Modify system-defined and user-defined storage pool attributes on page 109
• Extend a user-defined storage pool by volume on page 118
• Extend a user-defined storage pool by size on page 119
• Extend a system-defined storage pool on page 120
• Remove volumes from storage pools on page 122
• Delete user-defined storage pools on page 123
List existing storage pools
When the existing storage pools are listed, all system-defined storage pools and user-defined
storage pools appear in the output, regardless of whether they are in use.
Action
To list all existing system-defined and user-defined storage pools, type:
$ nas_pool -list
Output
id in_use acl name storage_system
3 n 0 clar_r5_performance FCNTR074200038
40 y 0 TP1 FCNTR074200038
41 y 0 FP1 FCNTR074200038
Display storage pool details
Action
To display detailed information for a storage pool, use this command syntax:
$ nas_pool -info <name>
where:
<name> = name of the storage pool
Example:
To display detailed information for the storage pool FP1, type:
$ nas_pool -info FP1
Output
id = 40
name = FP1
description = Mapped Pool on FCNTR074200038
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = False
thin = Mixed
tiering_policy = Auto-tier
compressed = False
mirrored = False
disk_type = Mixed
volume_profile = FP1_vp
is_dynamic = True
is_greedy = N/A
Display storage pool size information
Information about the size of the storage pool appears in the output. If there is more than
one storage pool, the output shows the size information for all the storage pools.
The size information includes:
• The total used space in the storage pool in megabytes (used_mb).
• The total unused space in the storage pool in megabytes (avail_mb).
• The total used and unused space in the storage pool in megabytes (total_mb).
• The total space available from all sources in megabytes that could be added to the storage
pool (potential_mb). For user-defined storage pools, the output for potential_mb is 0
because they must be manually extended and shrunk.
Note: If either non-MB-aligned disk volumes or disk volumes of different sizes are striped together,
truncation of storage might occur. The total amount of space added to a pool might differ from
the total amount taken from potential storage. Total space in the pool includes the truncated space,
but potential storage does not.
In the Unisphere for File software, the potential megabytes value in the output represents
the total available storage, including the storage pool. In the VNX for file CLI, the output
for potential_mb does not include the space in the storage pool.
Note: Use the -size -all option to display the size information for all storage pools.
Action
To display the size information for a specific storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance
Output
id = 3
name = clar_r5_performance
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
Action
To display the size information for a specific mapped storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the Pool0 storage pool, type:
$ nas_pool -size Pool0
Output
id = 43
name = Pool0
used_mb = 0
avail_mb = 0
total_mb = 0
potential_mb = 3691
Physical storage usage in Pool Pool0 on APM00101902363
used_mb = 16385
avail_mb = 1632355
total_mb = 1648740
Display size information for Symmetrix storage pools
Use the -size -all option to display the size information for all storage pools.
Action
To display the size information of Symmetrix storage pools, use this command syntax:
$ nas_pool -size <name> -slice y
where:
<name> = name of the storage pool
Example:
To request size information for the Symmetrix symm_std storage pool, type:
$ nas_pool -size symm_std -slice y
Output
id = 5
name = symm_std
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
Note
• Use the -slice y option to include any space from sliced volumes in the available result. However, if the default_slice_flag
value is set to no, then sliced volumes do not appear in the output.
• The size information for the system-defined storage pool named symm_std appears in the output. If you have more
storage pools, the output shows the size information for all the storage pools.
• used_mb is the used space in the specified storage pool in megabytes.
• avail_mb is the amount of unused available space in the storage pool in megabytes.
• total_mb is the total of used and unused space in the storage pool in megabytes.
• potential_mb is the potential amount of storage that can be added to the storage pool, available from all sources, in
megabytes. For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended
and shrunk. In this example, total_mb and potential_mb are the same because the total storage in the storage pool
is equal to the total potential storage available.
• If either non-MB-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage
might occur. The total amount of space added to a pool might differ from the total amount taken from potential
storage. Total space in the pool includes the truncated space, but potential storage does not.
Modify system-defined and user-defined storage pool attributes
System-defined and user-defined storage pools have attributes that control how they manage
the volumes and file systems. Table 7 on page 36 lists the modifiable storage pool attributes,
and their values and descriptions.
You can change the attribute default_slice_flag for system-defined and user-defined storage
pools. The flag indicates whether member volumes can be sliced. If the storage pool has
member volumes built on one or more slices, you cannot set this value to n.
Action
To modify the default_slice_flag for a system-defined or user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To modify a storage pool named marketing and change the default_slice_flag to prevent members of the pool from being
sliced when space is dispensed, type:
$ nas_pool -modify marketing -default_slice_flag n
Output
id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag= False
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members= N/A
stripe_size = N/A
Note
• When the default_slice_flag is set to y, it appears as True in the output.
• If using automatic file system extension, the default_slice_flag should be set to n.
Modify system-defined storage pool attributes
The system-defined storage pool attributes that can be modified are:
• -is_dynamic: Indicates whether the system-defined storage pool is allowed to
automatically add or remove member volumes.
• -is_greedy: If this is set to y (greedy), the system-defined storage pool attempts to create
new member volumes before using space from existing member volumes. If this is set
to n (not greedy), the system-defined storage pool consumes all the existing space in the
storage pool before trying to add additional member volumes.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough
free space on the existing volumes that the file system is using. Table 7 on page 36 describes the
is_greedy behavior.
The tasks to modify the attributes of a system-defined storage pool are:
• Modify the -is_greedy attribute of a system-defined storage pool on page 111
• Modify the -is_dynamic attribute of a system-defined storage pool on page 112
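Before modifying either attribute, you can confirm the current settings with nas_pool -info;
the is_dynamic and is_greedy fields appear in the output, as in the display examples earlier
in this chapter. The pool name here is illustrative:
$ nas_pool -info clar_r5_performance
# Review the is_dynamic and is_greedy lines in the output before
# deciding which attribute to change.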
Modify the -is_greedy attribute of a system-defined storage pool
Action
To modify the -is_greedy attribute of a specific system-defined storage pool to allow the storage pool to use new volumes
rather than existing volumes, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To change the -is_greedy attribute to false for the storage pool named clar_r5_performance, type:
$ nas_pool -modify clar_r5_performance -is_greedy n
Output
id = 3
name = clar_r5_performance
description = CLARiiON RAID5 4plus1
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = False
thin = False
volume_profile = clar_r5_performance_vp
is_dynamic = True
is_greedy = False
num_stripe_members = 4
stripe_size = 32768
Note
The n entered in the example appears as False for the is_greedy attribute in the output.
Modify the -is_dynamic attribute of a system-defined storage pool
Action
To modify the -is_dynamic attribute of a specific system-defined storage pool to not allow the storage pool to add or remove
new members, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To change the -is_dynamic attribute to false for the storage pool named clar_r5_performance, so that the storage pool
cannot add or remove members, type:
$ nas_pool -modify clar_r5_performance -is_dynamic n
Output
id = 3
name = clar_r5_performance
description = CLARiiON RAID5 4plus1
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = False
thin = False
volume_profile = clar_r5_performance_vp
is_dynamic = False
is_greedy = False
num_stripe_members = 4
stripe_size = 32768
Note
The n entered in the example appears as False for the is_dynamic attribute in the output.
Modify user-defined storage pool attributes
The user-defined storage pool attributes that can be modified are:
• -name: Changes the name of the specified user-defined storage pool to the new name.
• -acl: Designates an access control level for a user-defined storage pool. The default value
is 0.
• -description: Changes the description comment for the user-defined storage pool.
• -is_greedy: Identifies which member volumes of a user-defined storage pool are used to
provide space when creating or extending a file system.
The tasks to modify the attributes of a user-defined storage pool are:
• Modify the name of a user-defined storage pool on page 114
• Modify the access control of a user-defined storage pool on page 115
• Modify the description of a user-defined storage pool on page 116
• Modify the -is_greedy attribute of a user-defined storage pool on page 117
Modify the name of a user-defined storage pool
Action
To modify the name of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify <name> -name <new_name>
where:
<name> = old name of the storage pool
<new_name> = new name of the storage pool
Example:
To change the name of the storage pool named marketing to purchasing, type:
$ nas_pool -modify marketing -name purchasing
Output
id = 5
name = purchasing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Note
The name change to purchasing appears in the output. The description does not change unless the administrator changes
it.
Modify the access control of a user-defined storage pool
Controlling Access to System Objects on VNX contains instructions to manage access control
levels.
Note: The access control level change to 1000 appears in the output. The description does not change
unless the administrator modifies it.
Action
To modify the access control level for a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -acl <acl>
where:
<name> = name of the storage pool.
<id> = ID of the storage pool.
<acl> = designates an access control level for the new storage pool. The default value is 0.
Example:
To change the access control level for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -acl 1000
Output
id = 5
name = purchasing
description = storage pool for marketing
acl = 1000
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Modify the description of a user-defined storage pool
Action
To modify the description of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -description <description>
where:
<name> = name of the storage pool.
<id> = ID of the storage pool.
<description> = descriptive comment about the pool or its purpose. Type the comment within quotes.
Example:
To change the descriptive comment for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -description "storage pool for purchasing"
Output
id = 5
name = purchasing
description = storage pool for purchasing
acl = 1000
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Modify the -is_greedy attribute of a user-defined storage pool
Action
To modify the -is_greedy attribute of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To change the -is_greedy attribute for the user-defined storage pool named user_pool, type:
$ nas_pool -modify user_pool -is_greedy y
Output
id = 58
name = user_pool
description =
acl = 0
in_use = False
clients =
members = d21,d22,d23,d24
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = CLSTD
server_visibility = server_2
is_greedy = True
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Extend a user-defined storage pool by volume
You can add a slice volume, a metavolume, a disk volume, or a stripe volume to a
user-defined storage pool.
Action
To extend an existing user-defined storage pool by volumes, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} [-storage <system_name>] -volumes [<volume_name>,...]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<system_name> = name of the storage system, used to differentiate pools when the same pool name is used in multiple
storage systems
<volume_name> = names of the volumes separated by commas
Example:
To extend the volumes for the storage pool named engineering, with volumes d130, d131, d132, and d133, type:
$ nas_pool -xtend engineering -volumes d130,d131,d132,d133
Output
id = 6
name = engineering
description =
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Note
The original volumes (d126, d127, d128, and d129) appear in the output, followed by the volumes added in the example.
Extend a user-defined storage pool by size
Action
To extend the volumes for an existing user-defined storage pool by size, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} -size <integer>[M|G|T] [-storage <system_name>]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<integer> = size requested in MB (default), GB, or TB
<system_name> = storage system on which one or more volumes will be created, to be added to the storage pool
Example:
To extend the volumes for the storage pool named engineering, by a size of 1 GB, type:
$ nas_pool -xtend engineering -size 1G
Output
id = 6
name = engineering
description =
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Extend a system-defined storage pool
You can specify a size by which AVM extends a system-defined pool, and you can turn off
the dynamic behavior of the system pool to prevent it from consuming additional disk
volumes. Doing so:
• Uses the disk selection algorithms that AVM uses to create system-defined storage pool
members.
• Prevents system-defined storage pools from rapidly consuming a large number of disk
volumes.
You can specify the storage system from which to allocate space to the pool. The dynamic
behavior of the system-defined storage pool must be turned off by using the nas_pool
-modify command before extending the pool.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free
space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy
behavior.
On successful completion, the system-defined storage pool extends by at least the specified
size; it might extend by more than the requested size. The behavior is the same as when the
storage pool is extended during file system creation.
If a storage system is not specified and the pool has members from a single storage system,
then the default is the existing storage system. If a storage system is not specified and the
pool has members from multiple storage systems, the existing set of storage systems is used
to extend the storage pool.
If a storage system is specified, space is allocated from that system:
• The specified pool must be a system-defined pool.
• The specified pool must have the is_dynamic attribute set to n, or false. Modify
system-defined storage pool attributes on page 110 provides instructions to change the
attribute.
• There must be enough disk volumes to satisfy the size requested.
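Taken together, these prerequisites suggest a two-step sequence. This is a sketch only; the
pool and storage-system names are the examples used elsewhere in this document:
$ nas_pool -modify clar_r5_performance -is_dynamic n
# With the dynamic behavior off, extend the pool by size from a
# specific storage system.
$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00023700165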
Extend a system-defined storage pool by size
Action
To extend a system-defined storage pool by size and specify a storage system from which to allocate space, use this
command syntax:
$ nas_pool -xtend {<name>|id=<id>} -size <integer> -storage <system_name>
where:
<name> = name of the system-defined storage pool.
<id> = ID of the storage pool.
<integer> = size requested in MB or GB. The default size unit is MB.
<system_name> = name of the storage system from which to allocate the storage.
Example:
To extend the system-defined clar_r5_performance storage pool by size and designate the storage system from which to
allocate space, type:
$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00023700165-0011
Output
id = 3
name = clar_r5_performance
description = CLARiiON RAID5 4plus1
acl = 0
in_use = False
clients =
members = v216
default_slice_flag = False
is_user_defined = False
thin = False
disk_type = CLSTD
server_visibility = server_2,server_3
volume_profile = clar_r5_performance_vp
is_dynamic = False
is_greedy = False
num_stripe_members = 4
stripe_size = 32768
Remove volumes from storage pools
Action
To remove volumes from a system-defined or user-defined storage pool, use this command syntax:
$ nas_pool -shrink {<name>|id=<id>} [-storage <system_name>]
-volumes [<volume_name>,...]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<system_name> = name of the storage system, used to differentiate pools when the same pool name is used in multiple
storage systems
<volume_name> = names of the volumes separated by commas
Example:
To remove volumes d130 and d133 from the storage pool named marketing, type:
$ nas_pool -shrink marketing -volumes d130,d133
Output
id = 5
name = marketing
description = storage pool for marketing
acl = 1000
in_use = False
clients =
members = d126,d127,d128,d129,d131,d132
default_slice_flag = True
is_user_defined = True
thin = False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Delete user-defined storage pools
You can delete only a user-defined storage pool that is not in use. You must remove all
storage pool member volumes before deleting a user-defined storage pool. The delete action
removes the member volumes from the specified storage pool and deletes the storage pool
itself; it does not delete the volumes. System-defined storage pools cannot be deleted.
Action
To delete a user-defined storage pool, use this command syntax:
$ nas_pool -delete <name>
where:
<name> = name of the storage pool
Example:
To delete the user-defined storage pool named sales, type:
$ nas_pool -delete sales
Output
id = 7
name = sales
description =
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = True
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Delete a user-defined storage pool and its volumes
The -deep option deletes the storage pool and also recursively deletes each member of the
storage pool unless it is in use or is a disk volume.
Action
To delete a user-defined storage pool and the volumes in it, use this command syntax:
$ nas_pool -delete {<name>|id=<id>} [-deep]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To delete the storage pool named sales and its member volumes, type:
$ nas_pool -delete sales -deep
Output
id = 7
name = sales
description =
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = True
thin = False
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A
Chapter 5: Troubleshooting
As part of an effort to continuously improve and enhance the performance
and capabilities of its product lines, EMC periodically releases new versions
of its hardware and software. Therefore, some functions described in this
document may not be supported by all versions of the software or
hardware currently in use. For the most up-to-date information on product
features, refer to your product release notes.
If a product does not function properly or does not function as described
in this document, contact your EMC Customer Support Representative.
Problem Resolution Roadmap for VNX contains additional information about
using the EMC Online Support website and resolving problems.
Topics included are:
• AVM troubleshooting considerations on page 126
• EMC E-Lab Interoperability Navigator on page 126
• Known problems and limitations on page 126
• Error messages on page 127
• EMC Training and Professional Services on page 128
AVM troubleshooting considerations
Consider these steps when troubleshooting AVM:
• Obtain all files and subdirectories in /nas/log/ and /nas/volume/ from the Control Station
before reporting problems, which helps to diagnose the problem faster. Additionally,
save any files in /nas/tasks when problems are seen from the Unisphere for File software.
• The support material script collects information related to the Unisphere for File software
and APL.
• Set the environment variable NAS_REPLICATE_DEBUG=1 to log additional information
in /nas/log/nas_log.al.tran.
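For example, from the Control Station shell (bash syntax assumed):
$ export NAS_REPLICATE_DEBUG=1
# Subsequent replication operations log additional detail to
# /nas/log/nas_log.al.tran.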
EMC E-Lab Interoperability Navigator
The EMC E-Lab Interoperability Navigator is a searchable, web-based application that
provides access to EMC interoperability support matrices. It is available on the EMC Online
Support website at http://Support.EMC.com. After logging in, locate the applicable Support
by Product page, find Tools, and click E-Lab Interoperability Navigator.
Known problems and limitations
Table 9 on page 126 describes known problems that might occur when using AVM and
automatic file system extension and presents workarounds.
Table 9. Known problems and workarounds

Known problem: AVM system-defined storage pools and checkpoint extensions recognize
temporary disks as available disks.
Symptom: Temporary disks might be used by AVM system-defined storage pools or
checkpoint extension.
Workaround: Place the newly marked disks in a user-defined storage pool. This protects
them from being used by system-defined storage pools (and manual volume management).

Known problem: In an NFS environment, the write activity to the file system starts
immediately when a file changes. When the file system reaches the HWM, it begins to
automatically extend but might not finish before the Control Station issues a file system
full error. This causes an automatic extension failure. In a CIFS environment, the
CIFS/Windows Microsoft client does Persistent Block Reservation (PBR) to reserve the
space before the writes begin. As a result, the file system full error occurs before the HWM
is reached and before automatic extension is initiated.
Symptom: An error message indicating the failure of automatic extension start, and a full
file system.
Workaround: Alleviate this timing issue by lowering the HWM on a file system to ensure
automatic extension can accommodate normal file system activity. Set the HWM to allow
enough free space in the file system to accommodate write operations to the largest average
file in that file system. For example, if you have a file system that is 100 GB, and the largest
average file in that file system is 20 GB, set the HWM for automatic extension to 70%.
Changes made to the 20 GB file might cause the file system to reach the HWM, or 70 GB.
There is 30 GB of space left in the file system to handle the file changes, and to initiate and
complete automatic extension without failure.
Error messages
All event, alert, and status messages provide detailed information and recommended actions
to help you troubleshoot the situation.
To view message details, use any of these methods:
• Unisphere software: Right-click an event, alert, or status message and select to view
Event Details, Alert Details, or Status Details.
• CLI: Type nas_message -info <MessageID>, where <MessageID> is the message
identification number.
• Celerra Error Messages Guide: Use this guide to locate information about messages that
are in the earlier-release message format.
• EMC Online Support website: Use the text from the error message's brief description or
the message's ID to search the Knowledgebase on the EMC Online Support website.
After logging in to EMC Online Support, locate the applicable Support by Product page,
and search for the error message.
EMC Training and Professional Services
EMC Customer Education courses help you learn how EMC storage products work together
within your environment to maximize your entire infrastructure investment. EMC Customer
Education features online and hands-on training in state-of-the-art labs conveniently located
throughout the world. EMC customer training courses are developed and delivered by EMC
experts. Go to the EMC Online Support website at http://Support.EMC.com for course and
registration information.
EMC Professional Services can help you implement your system efficiently. Consultants
evaluate your business, IT processes, and technology, and recommend ways that you can
leverage your information for the most benefit. From business plan to implementation, you
get the experience and expertise that you need without straining your IT staff or hiring and
training new personnel. Contact your EMC Customer Support Representative for more
information.
Glossary
A
automatic file system extension
Configurable file system feature that automatically extends a file system created or extended
with AVM when the high water mark (HWM) is reached.
See also high water mark.
Automatic Volume Management (AVM)
Feature of VNX for file that creates and manages volumes automatically without manual volume
management by an administrator. AVM organizes volumes into storage pools that can be
allocated to file systems.
See also thin provisioning.
D
disk volume
On a VNX for file, a physical storage unit as exported from the storage system. All other volume
types are created from disk volumes.
See also metavolume, slice volume, stripe volume, and volume.
F
File migration service
Feature for migrating file systems from NFS and CIFS source file servers to the VNX for file.
The online migration is transparent to users once it starts.
file system
Method of cataloging and managing the files and directories on a system.
Fully Automated Storage Tiering (FAST)
Lets you assign different categories of data to different types of storage media within a tiered
pool. Data categories may be based on performance requirements, frequency of use, cost, and
other considerations. The FAST feature retains the most frequently accessed or important data
Managing Volumes and File Systems on VNX AVM 7.0 129
on fast, high performance (more expensive) drives, and moves the less frequently accessed and
less important data to less-expensive (lower-performance) drives.
H
high water mark (HWM)
Trigger point at which the VNX for file performs one or more actions, such as sending a warning
message, extending a volume, or updating a file system, as directed by the related feature's
software/parameter settings.
L
logical unit number (LUN)
Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the
last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the
term is often used to refer to the logical unit itself.
M
mapped pool
A storage pool that is dynamically created during the normal storage discovery (diskmark)
process for use on the VNX for file. It is a one-to-one mapping with either a VNX storage pool
or a FAST Symmetrix Storage Group. A mapped pool can contain a mix of different types of
LUNs that use any combination of data services (thin, thick, auto-tiering, mirrored, and VNX
compression). However, mapped pools should contain only the same type of LUNs that use
the same data services (all thick, all thin, all the same auto-tiering options, all mirrored or none
mirrored, and all compressed or none compressed) for the best file system performance.
metavolume
On VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes.
Also called a hypervolume or hyper. Every file system must be created on top of a unique
metavolume.
See also disk volume, slice volume, stripe volume, and volume.
S
slice volume
On VNX for file, a logical piece or specified area of a volume used to create smaller, more
manageable units of storage.
See also disk volume, metavolume, stripe volume, and volume.
storage pool
Groups of available disk volumes organized by AVM that are used to allocate available storage
to file systems. They can be created automatically by AVM or manually by the user.
See also Automatic Volume Management (AVM).
stripe volume
Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across
the volume and are addressed in an interlaced manner. Stripe volumes make load balancing
possible.
See also disk volume, metavolume, and slice volume.
system-defined storage pool
Predefined AVM storage pools that are set up to help you easily manage both storage volume
structures and file system provisioning by using AVM.
T
thin provisioning
Configurable VNX for file feature that lets you allocate storage based on long-term projections,
while you dedicate only the file system resources that you currently need. NFS or CIFS clients
and applications see the virtual maximum size of the file system of which only a portion is
physically allocated.
See also Automatic Volume Management.
U
Universal Extended File System (UxFS)
High-performance, VNX for file default file system, based on traditional Berkeley UFS, enhanced
with 64-bit support, metadata logging for high availability, and several performance
enhancements.
user-defined storage pools
User-created storage pools containing volumes that are manually added. User-defined storage
pools provide an appropriate option for users who want control over their storage volume
structures while still using the automated file system provisioning functionality of AVM to
provision file systems from the user-defined storage pools.
V
volume
On VNX for file, a virtual disk into which a file system, database management system, or other
application places data. A volume can be a single disk partition or multiple partitions on one
or more physical drives.
See also disk volume, metavolume, slice volume, and stripe volume.
Index
A
algorithm
automatic file system extension 58
Symmetrix 47
system-defined storage pools 39
VNX for block 42
attributes
storage pool, modify 109, 110, 113
storage pools 36
system-defined storage pools 110
user-defined storage pools 113
automatic file system extension
algorithm 58
and VNX Replicator interoperability
considerations 59
considerations 63
enabling 70
how it works 27
maximum size option 81
maximum size, set 95
options 26
restrictions 12
thin provisioning 96
Automatic Volume Management (AVM)
restrictions 11
storage pool 27
C
cautions 14, 15
spanning storage systems 14
character support, international 15
checkpoint, create for file system 100
clar_r1 storage pool 31
clar_r5_economy storage pool 31
clar_r5_performance storage pool 31
clar_r6 storage pool 31
clarata_archive storage pool 31
clarata_r10 storage pool 32
clarata_r3 storage pool 32
clarata_r6 storage pool 32
clarefd_r10 storage pool 32
clarefd_r5 storage pool 32
clarsas_archive storage pool 32
clarsas_r10 storage pool 32
clarsas_r6 storage pool 32
cm_r1 storage pool 32
cm_r5_economy storage pool 32
cm_r5_performance storage pool 32
cm_r6 storage pool 32
cmata_archive storage pool 32
cmata_r10 storage pool 33
cmata_r3 storage pool 33
cmata_r6 storage pool 33
cmefd_r10 storage pool 33
cmefd_r5 storage pool 33
cmsas_archive storage pool 33
cmsas_r10 storage pool 33
cmsas_r6 storage pool 33
considerations
automatic file system extension 63
interoperability 59
create a file system 70, 72, 74
using system-defined pools 72
using user-defined pools 74
D
data service policy
removing from storage group 15
delete user-defined storage pools 123
details, display 105
display
details 105
size information 106
E
EMC E-Lab Navigator 126
error messages 127
extend file systems
by size 85
by volume 87
with different storage pool 89
extend storage pools
system-defined by size 121
user-defined by size 119
user-defined by volume 118
F
FAST capacity algorithm and striping 16
file system
create checkpoint 100
extend by size 85
extend by volume 87
quotas 15
file system considerations 63
I
international character support 15
K
known problems and limitations 126
L
legacy CLARiiON and deleting thin items 15
M
masking option and moving LUNs 16
messages, error 127
migrating LUNs 16
modify system-defined storage pools 110
P
planning considerations 59
profiles, volume and storage 39
Q
quotas for file system 15
R
RAID group combinations 34
related information 22
restrictions 11, 12, 13, 14, 15
automatic file system extension 12
AVM 11
Symmetrix volumes 11
thin provisioning 13
TimeFinder/FS 15
VNX for block 14
S
storage pools
attributes 48
clar_r1 31
clar_r5_economy 31
clar_r5_performance 31
clar_r6 31
clarata_archive 31
clarata_r10 32
clarata_r3 32
clarata_r6 32
clarefd_r10 32
clarefd_r5 32
clarsas_archive 32
clarsas_r10 32
clarsas_r6 32
cm_r1 32
cm_r5_economy 32
cm_r5_performance 32
cm_r6 32
cmata_archive 32
cmata_r10 33
cmata_r3 33
cmata_r6 33
cmefd_r10 33
cmefd_r5 33
cmsas_archive 33
cmsas_r10 33
cmsas_r6 33
delete user-defined 123
display details 105
display size information 106
explanation 27
extend system-defined by size 121
extend user-defined by size 119
extend user-defined by volume 118
list 104
modify attributes 109
remove volumes from user-defined 122
supported types 30
symm_ata 31
symm_ata_rdf_src 31
symm_ata_rdf_tgt 31
symm_efd 31
symm_std 31
symm_std_rdf_src 31
symm_std_rdf_tgt 31
system-defined algorithms 39
system-defined Symmetrix 47
system-defined VNX for block 40
symm_ata storage pool 31
symm_ata_rdf_src storage pool 31
symm_ata_rdf_tgt storage pool 31
symm_efd storage pool 31
symm_std storage pool 31
symm_std_rdf_src storage pool 31
symm_std_rdf_tgt storage pool 31
Symmetrix and deleting thin items 15
Symmetrix pool, insufficient space 17
system-defined storage pools 39, 72, 85, 87, 110
algorithms 39
create a file system with 72
extend file systems by size 85
extend file systems by volume 87
T
thin provisioning, out of space message 16
troubleshooting 125
U
Unicode characters 15
upgrade software 63
user-defined storage pools 74, 85, 87, 113, 122
create a file system with 74
extend file systems by size 85
extend file systems by volume 87
modify attributes 113
remove volumes 122
V
VNX for block pool, insufficient space 17
VNX upgrade
automatic file system extension issue 15