
Veritas Cluster File System (CFS)

CFS allows the same file system to be simultaneously mounted on multiple nodes in the cluster.

CFS uses a master/slave architecture. Any node can initiate an operation to create, delete, or resize data, but the master node carries out the actual operation. CFS caches metadata in memory, typically in the buffer cache or the vnode cache, and a distributed locking mechanism called GLM (Group Lock Manager) keeps metadata and caches coherent across the nodes.
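To see which node is currently the CVM master, run vxdctl -c mode (used throughout the examples below). Once a cluster file system is mounted, you can also ask which node is the CFS primary for that particular mount point; a minimal sketch, assuming a mount point named /mountpoint1 already exists:

# vxdctl -c mode
# fsclustadm -v showprimary /mountpoint1

The CVM master and the per-file-system CFS primary are independent roles and need not be the same node.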

The examples here are:

1. Based on VCS 5.x, but should also work on 4.x.
2. A new 4-node cluster with no resources defined.
3. Disk groups and volumes will be created and shared across all nodes.

Before you configure CFS

1. Make sure you have an established cluster that is running properly.
2. Make sure these packages are installed on all nodes:

   VRTScavf   Veritas cfs and cvm agents by Symantec
   VRTSglm    Veritas LOCK MGR by Symantec

3. Make sure a Veritas CFS license is installed on all nodes.
4. Make sure the vxfen (I/O fencing) driver is active on all nodes, even if it is running in disabled mode.
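A minimal sketch of how these prerequisites can be verified on a Solaris host (on Linux, use rpm -q instead of pkginfo; the exact license feature strings vary by release):

# pkginfo VRTScavf VRTSglm
# vxlicrep | grep -i "cluster file system"
# vxfenadm -d
# gabconfig -a

pkginfo confirms the packages are installed, vxlicrep lists the installed license keys, vxfenadm -d reports the current fencing mode, and gabconfig -a shows GAB port membership on the node.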

Check the status of the cluster

Here are some ways to check the status of your cluster. In these examples, CVM/CFS is not configured yet.

# cfscluster status

  NODE        CLUSTER MANAGER STATE    CVM STATE
  serverA     running                  not-running
  serverB     running                  not-running
  serverC     running                  not-running
  serverD     running                  not-running

  Error: V-35-41: Cluster not configured for data sharing application

# vxdctl -c mode
mode: enabled: cluster inactive

# /etc/vx/bin/vxclustadm nidmap
Out of cluster: No mapping information available

# /etc/vx/bin/vxclustadm -v nodestate
state: out of cluster

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

Configure the cluster for CFS

During configuration, Veritas picks up the information already defined in your cluster configuration and activates CVM on all the nodes.

# cfscluster config

The cluster configuration information as read from cluster configuration file is as follows.
    Cluster : MyCluster
    Nodes   : serverA serverB serverC serverD

You will now be prompted to enter the information pertaining to the cluster and the individual nodes.

Specify whether you would like to use GAB messaging or TCP/UDP messaging. If you choose GAB messaging then you will not have to configure IP addresses. Otherwise you will have to provide IP addresses for all the nodes in the cluster.

------- Following is the summary of the information: -------
    Cluster   : MyCluster
    Nodes     : serverA serverB serverC serverD
    Transport : gab
-------------------------------------------------------------

Waiting for the new configuration to be added.

========================================================

Cluster File System Configuration is in progress...
cfscluster: CFS Cluster Configured Successfully

Check the status of the cluster

Now let's check the status of the cluster again. Notice that there is now a new service group, cvm, which must be online before any clustered filesystem can be brought up on the nodes.

# cfscluster status

  Node             : serverA
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration

  Node             : serverB
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration

  Node             : serverC
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration

  Node             : serverD
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA

# /etc/vx/bin/vxclustadm nidmap
Name         CVM Nid    CM Nid    State
serverA      0          0         Joined: Master
serverB      1          1         Joined: Slave
serverC      2          2         Joined: Slave
serverD      3          3         Joined: Slave

# /etc/vx/bin/vxclustadm -v nodestate
state: cluster member
        nodeId=0
        masterId=1
        neighborId=1
        members=0xf
        joiners=0x0
        leavers=0x0
        reconfig_seqnum=0xf0a810
        vxfen=off

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

-- GROUP STATE
-- Group       System     Probed    AutoDisabled    State

B  cvm         serverA    Y         N               ONLINE
B  cvm         serverB    Y         N               ONLINE
B  cvm         serverC    Y         N               ONLINE
B  cvm         serverD    Y         N               ONLINE

Creating a Shared Disk Group and Volumes/Filesystems

This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before they can be used by the Volume Manager.

When you place a disk under Volume Manager control, the disk is initialized. Initialization destroys any existing data on the disk.

Before you begin, make sure the disks that you add to the shared disk group are directly attached to all the cluster nodes.
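One way to confirm this before initializing anything (a sketch, using the EMC0_* device names from this example) is to list the disks on every node and check that the same devices show up on each:

serverA # vxdisk -o alldgs list | grep EMC0_
serverB # vxdisk -o alldgs list | grep EMC0_
serverC # vxdisk -o alldgs list | grep EMC0_
serverD # vxdisk -o alldgs list | grep EMC0_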

First, make sure you are on the master node:

serverA # vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA

Initialize the disks you want to use. Make sure they are attached to all the cluster nodes. You may optionally specify the disk format.

serverA # vxdisksetup -if EMC0_1 format=cdsdisk
serverA # vxdisksetup -if EMC0_2 format=cdsdisk

Create a shared disk group with the disks you just initialized.

serverA # vxdg -s init mysharedg mysharedg01=EMC0_1 mysharedg02=EMC0_2

serverA # vxdg list mysharedg
mysharedg      enabled,shared,cds       1231954112.163.serverA
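As an optional sanity check (a sketch; CVM normally auto-imports a shared disk group on the slave nodes once the master creates it), the disk group should also show up from any other node, with the shared flag set:

serverB # vxdg list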

Now let's add the new disk group to our cluster configuration, giving all nodes in the cluster the shared-write (sw) activation mode.

serverA # cfsdgadm add mysharedg all=sw
Disk Group is being added to cluster configuration...

Verify that the cluster configuration has been updated.

serverA # grep mysharedg /etc/VRTSvcs/conf/config/main.cf
                ActivationMode @serverA = { mysharedg = sw }
                ActivationMode @serverB = { mysharedg = sw }
                ActivationMode @serverC = { mysharedg = sw }
                ActivationMode @serverD = { mysharedg = sw }

serverA # cfsdgadm display
  Node Name : serverA
  DISK GROUP         ACTIVATION MODE
  mysharedg          sw

  Node Name : serverB
  DISK GROUP         ACTIVATION MODE
  mysharedg          sw

  Node Name : serverC
  DISK GROUP         ACTIVATION MODE
  mysharedg          sw

  Node Name : serverD
  DISK GROUP         ACTIVATION MODE
  mysharedg          sw

We can now create volumes and filesystems within the shared diskgroup.

serverA # vxassist -g mysharedg make mysharevol1 100g
serverA # vxassist -g mysharedg make mysharevol2 100g

serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2
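Optionally, before handing the volumes over to the cluster, you can confirm that they exist and carry a VxFS file system; a short sketch, assuming a Solaris host:

serverA # vxprint -g mysharedg -vt
serverA # fstyp /dev/vx/rdsk/mysharedg/mysharevol1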

Then add these volumes/filesystems to the cluster configuration so they can be mounted on any or all nodes. Mountpoints will be automatically created.

serverA # cfsmntadm add mysharedg mysharevol1 /mountpoint1
Mount Point is being added...
/mountpoint1 added to the cluster-configuration

serverA # cfsmntadm add mysharedg mysharevol2 /mountpoint2
Mount Point is being added...
/mountpoint2 added to the cluster-configuration

Display the CFS mount configurations in the cluster.

serverA # cfsmntadm display -v
  Cluster Configuration for Node: apqma519
  MOUNT POINT      TYPE       SHARED VOLUME    DISK GROUP    STATUS         MOUNT OPTIONS
  /mountpoint1     Regular    mysharevol1      mysharedg     NOT MOUNTED    crw
  /mountpoint2     Regular    mysharevol2      mysharedg     NOT MOUNTED    crw

That's it. Check your cluster configuration and try to bring the filesystems ONLINE on your nodes; one way to do this is shown after the status output below.

serverA # hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

-- GROUP STATE
-- Group                          System     Probed    AutoDisabled    State

B  cvm                            serverA    Y         N               ONLINE
B  cvm                            serverB    Y         N               ONLINE
B  cvm                            serverC    Y         N               ONLINE
B  cvm                            serverD    Y         N               ONLINE
B  vrts_vea_cfs_int_cfsmount1     serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1     serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1     serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1     serverD    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2     serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2     serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2     serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2     serverD    Y         N               OFFLINE
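A sketch of two ways to bring the mounts online: cfsmount mounts the file system on every node it was registered for, while hagrp -online brings up the corresponding VCS group one node at a time (the group names are the ones shown in the output above):

serverA # cfsmount /mountpoint1
serverA # cfsmount /mountpoint2

or, per node:

serverA # hagrp -online vrts_vea_cfs_int_cfsmount1 -sys serverA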

VCS - Adding NIC/IP Resource

# haconf -makerw

# hares -add vvrnic NIC db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors

# hares -modify vvrnic Device ce3

# hares -modify vvrnic NetworkType ether

# hares -add vvrip IP db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors

# hares -modify vvrip Device ce3
# hares -modify vvrip Address "10.67.196.191"
# hares -modify vvrip NetMask "255.255.254.0"

# hares -link vvrip vvrnic

# hagrp -enableresources db2inst_grp

# hares -online vvrip -sys server620

# haconf -dump -makero
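To confirm the resources came online (a sketch; server620 is the node used in the example above):

# hares -state vvrip
# hares -display vvrip -sys server620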

VCS - Adding Filesystem Resource

How to create a file system using VERITAS Volume Manager, controlled under VERITAS Cluster Server

Details:

Following is the procedure to create a volume and file system and put them under VERITAS Cluster Server (VCS) control.

1. Create a disk group
2. Create a mount point and file system
3. Deport the disk group
4. Create a service group

Add following resources and modify attributes:

   Resource Name     Attributes
   1. DiskGroup      DiskGroup (disk group name)
   2. Mount          BlockDevice, FSType, MountPoint

Create dependency between following resources:

1. Mount and disk group

Enable all resources in this service group.

The following example shows how to create a raid-5 volume with a VxFS file system and put it under VCS control.

Method 1 - Using the command line

1. Create a disk group using Volume Manager with a minimum of 4 disks:

# vxdg init datadg disk01=c1t1d0s2 disk02=c1t2d0s2 disk03=c1t3d0s2 disk04=c1t4d0s2
# vxassist -g datadg make vol01 2g layout=raid5

2. Create a mount point for this volume:

# mkdir /vol01

3. Create a file system on this volume:

# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01

4. Deport this disk group:

# vxdg deport datadg

5. Create a service group:

# haconf -makerw
# hagrp -add newgroup
# hagrp -modify newgroup SystemList <sysa> 0 <sysb> 1
# hagrp -modify newgroup AutoStartList <sysa>

6. Create a disk group resource and modify its attributes:

# hares -add data_dg DiskGroup newgroup
# hares -modify data_dg DiskGroup datadg

7. Create a mount resource and modify its attributes:

# hares -add vol01_mnt Mount newgroup
# hares -modify vol01_mnt BlockDevice /dev/vx/dsk/datadg/vol01
# hares -modify vol01_mnt FSType vxfs
# hares -modify vol01_mnt MountPoint /vol01
# hares -modify vol01_mnt FsckOpt %-y

8. Link the mount resource to the disk group resource:

# hares -link vol01_mnt data_dg

9. Enable the resources and close the configuration:

# hagrp -enableresources newgroup
# haconf -dump -makero

Method 2 - Editing /etc/VRTSvcs/conf/config/main.cf

# hastop -all
# cd /etc/VRTSvcs/conf/config
# haconf -makerw
# vi main.cf

Add the following lines to the end of this file:

group newgroup (
        SystemList = { sysA = 0, sysB = 1 }
        AutoStartList = { sysA }
        )

        DiskGroup data_dg (
                DiskGroup = datadg
                )

        Mount vol01_mnt (
                MountPoint = "/vol01"
                BlockDevice = "/dev/vx/dsk/datadg/vol01"
                FSType = vxfs
                )

        vol01_mnt requires data_dg

# haconf -dump -makero
# hastart -local

Check status of the new service group.
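For example (a sketch; sysA and sysB are the placeholder system names used above):

# hastatus -sum
# hagrp -state newgroup
# hares -state vol01_mnt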

------------------------------------------------------------------------------------

Here's an actual example.

# umount /backup/pdpd415

# vxdg deport bkupdg

# haconf -makerw

# hares -add bkup_dg DiskGroup pdpd415_grp
# hares -modify bkup_dg DiskGroup bkupdg

# hares -add bkupdg_bkup_mnt Mount pdpd415_grp
# hares -modify bkupdg_bkup_mnt BlockDevice /dev/vx/dsk/bkupdg/bkupvol
# hares -modify bkupdg_bkup_mnt FSType vxfs
# hares -modify bkupdg_bkup_mnt MountPoint /backup/pdpd415
# hares -modify bkupdg_bkup_mnt FsckOpt %-y

# hares -link bkupdg_bkup_mnt bkup_dg

# hagrp -enableresources pdpd415_grp

# hares -online bkup_dg -sys sppwd620
# hares -online bkupdg_bkup_mnt -sys sppwd620

# haconf -dump -makero
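To confirm the file system is back in service under VCS control (a sketch; sppwd620 is the node used above):

# hares -state bkupdg_bkup_mnt
# df -k /backup/pdpd415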
