CFS allows the same file system to be simultaneously mounted on multiple nodes in the cluster.

CFS is designed with a master/slave architecture. Although any node can initiate an operation to create, delete, or resize data, the master node carries out the actual operation. CFS caches metadata in memory, typically in the buffer cache or the vnode cache. A distributed locking mechanism, called GLM, is used for metadata and cache coherency among the multiple nodes.
1. Based on VCS 5.x but should also work on 4.x.
2. A new 4-node cluster with no resources defined.
3. Disk groups and volumes will be created and shared across all nodes.
1. Make sure you have an established cluster that is running properly.
2. Make sure these packages are installed on all nodes:
   VRTScavf   Veritas CFS and CVM agents by Symantec
   VRTSglm    Veritas LOCK MGR by Symantec
3. Make sure you have a license installed for Veritas CFS on all nodes.
4. Make sure the vxfencing driver is active on all nodes (even if it is in disabled mode).
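The package and license checks above can be done from the command line; a sketch, assuming Solaris hosts (on Linux, use rpm -q instead of pkginfo; the vxlicrep path may differ in your install):

```shell
# verify the required Veritas packages are present on this node (Solaris)
pkginfo VRTScavf VRTSglm

# list installed Veritas licenses and look for the CFS feature
vxlicrep | grep -i "Cluster File System"
```

Run the same checks on every node before continuing.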
Here are some ways to check the status of your cluster. In these examples, CVM/CFS are not configured yet.
# cfscluster status

  NODE      CLUSTER MANAGER STATE    CVM STATE
  serverA   running                  not-running
  serverB   running                  not-running
  serverC   running                  not-running
  serverD   running                  not-running
# hastatus -sum

-- SYSTEM STATE
-- System       State        Frozen
A  serverA      RUNNING      0
A  serverB      RUNNING      0
A  serverC      RUNNING      0
A  serverD      RUNNING      0
During configuration, Veritas picks up the information already set in your cluster configuration and activates CVM on all the nodes.
# cfscluster config
The cluster configuration information as read from cluster configuration file is as follows. Cluster : MyCluster Nodes : serverA serverB serverC serverD
You will now be prompted to enter the information pertaining to the cluster and the individual nodes.
Specify whether you would like to use GAB messaging or TCP/UDP messaging. If you choose gab messaging then you will not have to configure IP addresses. Otherwise you will have to provide
------- Following is the summary of the information: -------
        Cluster   : MyCluster
        Nodes     : serverA serverB serverC serverD
        Transport : gab
-----------------------------------------------------------
========================================================
Cluster File System Configuration is in progress...
cfscluster: CFS Cluster Configured Successfully
Now let's check the status of the cluster and notice that there is now a new service group, cvm. CVM is required to be online before we can bring up any clustered filesystem on the nodes.
# cfscluster status
  Node      : serverA
  CVM state : running

  Node      : serverB
  CVM state : running

  Node      : serverC
  CVM state : running

  Node      : serverD
  CVM state : running
# /etc/vx/bin/vxclustadm nidmap
Name       CVM Nid    CM Nid    State
serverA    0          0         Joined
serverB    1          1         Joined
serverC    2          2         Joined
serverD    3          3         Joined
# /etc/vx/bin/vxclustadm -v nodestate
state: cluster member
        nodeId=0
        masterId=1
        neighborId=1
        members=0xf
        joiners=0x0
        leavers=0x0
        reconfig_seqnum=0xf0a810
        vxfen=off
# hastatus -sum

-- SYSTEM STATE
-- System       State        Frozen
A  serverA      RUNNING      0
A  serverB      RUNNING      0
A  serverC      RUNNING      0
A  serverD      RUNNING      0

-- GROUP STATE
-- Group    System     Probed    AutoDisabled    State
B  cvm      serverA    Y         N               ONLINE
B  cvm      serverB    Y         N               ONLINE
B  cvm      serverC    Y         N               ONLINE
B  cvm      serverD    Y         N               ONLINE
This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before they can be used by the Volume Manager.
When you place a disk under Volume Manager control, the disk is initialized. Initialization destroys any existing data on the disk.
Before you begin, make sure the disks that you will add to the shared disk group are directly attached to all the cluster nodes.
Shared disk groups must be created from the CVM master node, so first verify which node is the master:

serverA # vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA
Initialize the disks you want to use. Make sure they are attached to all the cluster nodes. You may optionally specify the disk format.
serverA # vxdisksetup -if EMC0_1 format=cdsdisk serverA # vxdisksetup -if EMC0_2 format=cdsdisk
Create a shared disk group with the disks you just initialized.
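The command itself isn't shown above; a sketch, run from the CVM master, using the two disks initialized earlier (mysharedg is the disk group name used in the rest of this walkthrough, and the -s flag makes the group shared):

```shell
# create a shared (-s) disk group from the initialized disks
serverA # vxdg -s init mysharedg EMC0_1 EMC0_2
```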
Now let's add that new disk group to our cluster configuration, giving all nodes in the cluster the shared-write (sw) activation mode.
serverA # cfsdgadm add mysharedg all=sw
Disk Group is being added to cluster configuration...
serverA # grep mysharedg /etc/VRTSvcs/conf/config/main.cf
                ActivationMode @serverA = { mysharedg = sw }
                ActivationMode @serverB = { mysharedg = sw }
                ActivationMode @serverC = { mysharedg = sw }
                ActivationMode @serverD = { mysharedg = sw }
serverA # cfsdgadm display
  Node Name : serverA
  DISK GROUP        ACTIVATION MODE
  mysharedg         sw
We can now create volumes and filesystems within the shared diskgroup.
serverA # vxassist -g mysharedg make mysharevol1 100g serverA # vxassist -g mysharedg make mysharevol2 100g
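Before these volumes can be mounted as cluster file systems, a VxFS file system has to be created on each one. That step isn't shown above; a sketch, Solaris-style (on Linux the equivalent is mkfs -t vxfs):

```shell
# create a VxFS file system on each shared volume (run once, from one node)
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2
```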
Then add these volumes/filesystems to the cluster configuration so they can be mounted on any or all nodes. Mountpoints will be automatically created.
serverA # cfsmntadm add mysharedg mysharevol1 /mountpoint1
Mount Point is being added...
  /mountpoint1 added to the cluster-configuration

serverA # cfsmntadm add mysharedg mysharevol2 /mountpoint2
Mount Point is being added...
  /mountpoint2 added to the cluster-configuration
serverA # cfsmntadm display -v
Cluster Configuration for Node: serverA
MOUNT POINT     TYPE    SHARED VOLUME    DISK GROUP    STATUS    MOUNT OPTIONS
/mountpoint1            mysharevol1      mysharedg
/mountpoint2            mysharevol2      mysharedg
That's it. Check your cluster configuration and try to ONLINE the filesystems on your nodes.
# hastatus -sum

-- SYSTEM STATE
-- System       State        Frozen
A  serverA      RUNNING      0
A  serverB      RUNNING      0
A  serverC      RUNNING      0
A  serverD      RUNNING      0

-- GROUP STATE
-- Group                         System     Probed    AutoDisabled    State
B  cvm                           serverA    Y         N               ONLINE
B  cvm                           serverB    Y         N               ONLINE
B  cvm                           serverC    Y         N               ONLINE
B  cvm                           serverD    Y         N               ONLINE
B  vrts_vea_cfs_int_cfsmount1    serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1    serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1    serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1    serverD    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverD    Y         N               OFFLINE
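The OFFLINE cfsmount groups can be brought online with cfsmount; a sketch using the mount points configured earlier (cfsmount takes the mount point and, optionally, a list of nodes):

```shell
# mount a cluster file system on every configured node
serverA # cfsmount /mountpoint1

# or mount it on a single node only
serverA # cfsmount /mountpoint2 serverB
```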
# hares -add vvrnic NIC db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors

# hares -add vvrip IP db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors

# hares -modify vvrip Device ce3
# hares -modify vvrip Address "10.67.196.191"
# hares -modify vvrip NetMask "255.255.254.0"
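As the notices above say, each resource's Enabled attribute must be set before its agent starts monitoring it; a sketch (using ce3 for the vvrnic Device is an assumption, match it to your environment):

```shell
# point the NIC resource at its interface, then enable both resources
# hares -modify vvrnic Device ce3
# hares -modify vvrnic Enabled 1
# hares -modify vvrip Enabled 1
```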
VCS - Adding Filesystem Resource
How to create a file system using VERITAS Volume Manager, controlled under VERITAS Cluster Server
Details:
Following is the algorithm to create a volume, file system and put them under VERITAS Cluster Server (VCS).
1. Create a disk group and volume
2. Create a mount point and file system
3. Deport the disk group
4. Create a service group
Resources        Name Attributes
1. DiskGroup     disk group name
2. Mount         block device, FSType, MountPoint
The following example shows how to create a raid-5 volume with a VxFS file system and put it under VCS control.
# vxdg init datadg disk01=c1t1d0s2 disk02=c1t2d0s2 disk03=c1t3d0s2 disk04=c1t4d0s2
# vxassist -g datadg make vol01 2g layout=raid5
# mkdir /vol01
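The file-system-creation and deport steps of the algorithm aren't shown above; a sketch, Solaris-style:

```shell
# create a VxFS file system on the raw volume
# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01

# deport the disk group so the VCS DiskGroup agent can import it
# vxdg deport datadg
```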
# haconf -makerw
# hagrp -add newgroup
# hagrp -modify newgroup SystemList <sysa> 0 <sysb> 1
# hagrp -modify newgroup AutoStartList <sysa>
# hares -add data_dg DiskGroup newgroup
# hares -modify data_dg DiskGroup datadg
# hares -add vol01_mnt Mount newgroup
# hares -modify vol01_mnt BlockDevice /dev/vx/dsk/datadg/vol01
# hares -modify vol01_mnt FSType vxfs
# hares -modify vol01_mnt MountPoint /vol01
# hares -modify vol01_mnt FsckOpt %-y
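To finish, the Mount resource is typically linked on top of the DiskGroup resource, both resources are enabled, and the configuration is saved back to read-only; a sketch:

```shell
# make the mount depend on the disk group being imported first
# hares -link vol01_mnt data_dg

# enable monitoring, then save and close the configuration
# hares -modify data_dg Enabled 1
# hares -modify vol01_mnt Enabled 1
# haconf -dump -makero
```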
------------------------------------------------------------------------------------
# umount /backup/pdpd415
# haconf -makerw
# hares -add bkup_dg DiskGroup pdpd415_grp
# hares -modify bkup_dg DiskGroup bkupdg

# hares -add bkupdg_bkup_mnt Mount pdpd415_grp
# hares -modify bkupdg_bkup_mnt BlockDevice /dev/vx/dsk/bkupdg/bkupvol
# hares -modify bkupdg_bkup_mnt FSType vxfs
# hares -modify bkupdg_bkup_mnt MountPoint /backup/pdpd415
# hares -modify bkupdg_bkup_mnt FsckOpt %-y

# hares -online bkup_dg -sys sppwd620
# hares -online bkupdg_bkup_mnt -sys sppwd620