Professional Documents
Culture Documents
Novell ATT Instructor Use Only. Not for distribution.
Course 9211
Version 1.1.3
Lab Manual
January 11, 2011
Proprietary Statement
Disclaimer
Novell, Inc.
404 Wyman Street, Suite 500
Waltham, MA 02451
U.S.A.
www.novell.com
Further, Novell, Inc., reserves the right to revise this publication and to make
changes to its content, at any time, without obligation to notify any person or
entity of such revisions or changes. Further, Novell, Inc., makes no
representations or warranties with respect to any software, and specifically
disclaims any express or implied warranties of merchantability or fitness for any
particular purpose. Further, Novell, Inc., reserves the right to make changes to
any and all parts of Novell software, at any time, without any obligation to notify
any person or entity of such changes.
Novell Trademarks
For Novell trademarks, see the Novell Trademark and Service Mark list
(http://www.novell.com/company/legal/trademarks/tmlist.html).
Third-Party Materials
Software Piracy
This Novell Training Manual is published solely to instruct students in the use of
Novell networking software. Although third-party application software packages
are used in Novell training courses, this is for demonstration purposes only and
shall not constitute an endorsement of any of these software applications.
Further, Novell, Inc. does not represent itself as having any particular expertise
in these application software packages and any use by students of the same shall
be done at the student's own risk.
Contents
Section 1
Section 2
Exercise 2.1
Exercise 2.2
Section 3
Exercise 3.1
Section 4
Exercise 4.1
Exercise 4.2
Exercise 4.3
Section 5
Exercise 5.1
Exercise 5.2
Exercise 5.3
Exercise 5.4
Exercise 5.5
Exercise 5.6
Section 6
Exercise 6.1
Exercise 6.2
Exercise 6.3
Exercise 6.4
Section 7
Exercise 7.1
Exercise 7.2
Exercise 7.3
Exercise 7.4
Exercise 7.5
Exercise 7.6
Exercise 7.7
Exercise 7.8
Section 8
Exercise 8.1
Exercise 8.2
Exercise 8.3
Copying all or part of this manual, or distributing such copies, is strictly prohibited.
To report suspected copying, please call 1-800-PIRATES
Exercise 8.4
Exercise 8.5
Exercise 8.6
Exercise 8.7
List of Figures
This section introduces you to high availability clustering with the High Availability
Extension for SUSE Linux Enterprise 11.
This section covers the installation and configuration of the high availability clustering
components in the SUSE Linux Enterprise 11 High Availability Extension.
Objectives:
In this exercise, you add the SUSE Linux Enterprise High Availability Extension as a
software installation source and then install the HA Extension components.
4. Insert the SLE11 HA Extension product CD/DVD (if you are running a virtual
machine, attach the SLE HA Extension ISO to the VM) and click Continue
5. On the License Agreement screen, select Yes, I agree to the License Agreement
and click Next
6. On the Software Selection and System Tasks screen, select the High Availability
pattern and click OK
7. On the Novell Customer Center Configuration screen, select Configure Later and
then click Next
8. You should now see the SUSE Linux Enterprise High Availability Extension in the
list of installed products. Click OK to finish
(End of Exercise)
Objectives:
In this section, you use the YaST Cluster module to configure an HA cluster with the SLE11
HA Extension. You then copy the cluster configuration files to the other cluster nodes and
start the cluster daemon on them as well.
2. On the Cluster Communication Channels screen, in the Channel section, from the
Bind Network Address drop-down list, select the network IP of the LAN network
5. In the Node ID section, ensure that the Auto Generate Node ID check box is not
checked
9. On the Cluster Security screen, put a check in the Enable Security Auth checkbox
10. In the Threads field, enter 1
11. Click Generate Auth Key File
When the pop-up window appears, click OK
12. Click Next, (or if listed in the left-hand panel, select Service)
13. Select On -- Start openais at booting
17. Enter the hostname of the first cluster node (as returned by the hostname command)
Repeat this for each cluster node's hostname
19. In the newly populated list of files in the Sync File list, select
NOTE: Because we are using manually assigned node IDs, we cannot use csync2
to synchronize the corosync.conf file
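Because csync2 cannot be used here, the corosync.conf file has to be copied to the other nodes by hand. A hedged sketch, assuming the node hostnames used in this course (node2, node3):

scp /etc/corosync/corosync.conf node2:/etc/corosync/
scp /etc/corosync/corosync.conf node3:/etc/corosync/

Each node then needs its own nodeid value set afterwards, as shown later in this section.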
25. Open a terminal window and if not already logged in as the root user, enter su
to become root. If prompted for the root user's password, enter novell
26. Enter the following command to enable and start the xinetd daemon:
insserv xinetd
rcxinetd start
27. Enter the following command to change the hacluster user's password:
passwd hacluster
crm_mon -ri 1
You should see that the cluster is up and running with a single node
Leave this terminal window open with the crm_mon utility running
ssh root@node2
5. Find the line that begins with nodeid: and increment it to match the node's number
(i.e. use 2 for node2, 3 for node3, etc.)
In the terminal window running crm_mon you should see a new cluster node listed
as being Online
chkconfig csync2 on
insserv xinetd
rcxinetd start
9. Set the hacluster user's password on this node to novell in the same manner as in
the previous task
10. Repeat the previous steps in this task for each of the other cluster nodes
(End of Exercise)
This section introduces you to the Cluster Information Base (CIB) and the tools used to
manage the cluster.
It is important to note that you should only disable STONITH if you are going to use cluster
resources that don't require STONITH. Even in those cases, it is still recommended that you
enable and use STONITH anyway.
Objectives:
The cluster must be running and have quorum to perform this exercise
2. Select Connection > Login from the menu bar (or click the Login to Cluster
button on the button bar)
3. In the Login pop-up window, enter the following information in the respective fields
and click OK to log in:
Server:(Port): 127.0.0.1
5. On the Policy Engine tab, in the Default Resource Stickiness field, enter 1000
6. Click Apply
NOTE: You could also set the default-resource-stickiness value using the crm
command as follows:
crm configure
property default-resource-stickiness=1000
commit
quit
Some cluster defaults have now been explicitly set
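If you want to verify the setting from the command line, the crm shell can display the current configuration:

crm configure show | grep stickiness

You should see the default-resource-stickiness value in the output.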
NOTE: You could also disable STONITH using the crm command as follows:
crm configure
property stonith-enabled=false
commit
quit
(End of Exercise)
(End of Exercise)
This section introduces you to the concept of cluster resources and resource agents.
Objectives:
The cluster must be running and have quorum to perform this exercise
CLUSTER_IP1=____________________________
CLUSTER_IP1_SNM=_______________________
CLUSTER_IP1_NIC=________________________
2. Select Connection > Login from the menu bar (or click the Login to Cluster
button on the button bar)
3. In the Login pop-up window, enter the following information in the respective fields
and click OK to log in:
Server:(Port): 127.0.0.1
6. On the Add Primitive Basic Settings screen, enter or select the following information
using the drop-down lists and check boxes:
ID: IP_1
Class: ocf
Provider: heartbeat
Type: IPaddr2
Initial state of resource: Stopped
Add monitor operation: (checked)
7. Click Forward
8. On the Add Primitive Summary of ... screen, select the Instance Attributes tab
14. Click OK
NOTE: You could also create the IP address cluster resource with the crm
command as follows:
crm configure
commit
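The primitive definition itself is not legible in this copy of the manual. A sketch of what it might look like, reusing the CLUSTER_IP1 values recorded at the start of the exercise (the resource ID IP_1 matches the GUI steps; the exact parameter values and monitor interval are assumptions):

primitive IP_1 ocf:heartbeat:IPaddr2 \
params ip="CLUSTER_IP1" cidr_netmask="CLUSTER_IP1_SNM" nic="CLUSTER_IP1_NIC" \
op monitor interval="10s"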
You should now see a new resource listed in the list of cluster resources
2. Select the IP Address resource and click the Start Resource button on the button
bar (the button looks like a right pointing triangle)
3. To show the resources relative to the cluster nodes, click the Group Resources by
Node button on the top left of the button bar (the button looks like 4 green dots in
a vertical line next to a downward pointing arrow)
(End of Exercise)
Objectives:
In this exercise, you configure a vsftp server to be a cluster managed resource using the
crm command line.
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Enter the following command to open the crm CLI in configure mode:
crm configure
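Step 3 is not legible in this copy of the manual; it presumably defines the vsftp primitive inside the crm shell. A sketch, assuming vsftpd is managed through its LSB init script:

primitive vsftp lsb:vsftpd \
op monitor interval="30s"
commit
quit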
4. View the resource state from the command line by entering the following
command:
crm_mon -ri 1
You should see the new resource listed along with its state
1. Open another terminal window. If not already logged in as the root user, use su
to become root using the same password as above.
2. Enter the following command to start the new resource:
crm resource start vsftp
In the crm_mon terminal window you should see the new vsftp resource listed as
started
3. In the crm_mon terminal window, press Ctrl+C to close the crm_mon utility
(End of Exercise)
Objectives:
In this exercise, you configure an IP address and vsftpd to be cluster managed resources in
a resource group using the crm command line.
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Enter the following command to open the crm CLI in configure mode:
crm configure
3. Enter the following commands to create a resource group for IP_1, vsftp:
group ftpgrp IP_1 vsftp
commit
quit
4. View the resource group state from the command line by entering the following
command:
crm_mon -ri 1
You should see the new resource group listed along with its state
1. Open another terminal window. If not already logged in as the root user, use su
to become root using the same password as above.
2. Enter the following command to start the resource group:
crm resource start ftpgrp
In the crm_mon terminal window you should see the new resource group listed as
started
You may easily do this by using the crm configure edit command to open
the cluster configuration (in vi) and deleting the meta target-role= lines from the
IP_1 and vsftp primitive resources. Make sure you save the edited configuration
by issuing a commit command once you are back at the crm(live)configure#
prompt.
3. In the crm_mon terminal window, press Ctrl+C to close the crm_mon utility
(End of Exercise)
In this exercise you will use the LVM command line commands to create an LVM volume
group from multiple disks.
Objectives:
Perform this exercise on the Storage1 machine (unless otherwise directed by the instructor).
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Determine what type of disk (PATA=hda vs. SATA=sda) your machine is using by
entering the following command at the command line of the Storage1 machine:
fdisk -l
disks:_______________________________________
5. For each of the disks that will be used in the volume group, use the following
command to create LVM Physical Volume(s) replacing BLOCKDEV with the
block device filename of the disk:
pvcreate /dev/BLOCKDEV
Repeat this command for each of the disks that will be used in the volume group
6. Use the following command to create an LVM Volume Group named vg0 (list
block devices as a space delimited list):
vgcreate vg0 /dev/BLOCKDEV_1 /dev/BLOCKDEV_2
pvs
pvdisplay
(End of Exercise)
In this exercise, you will create a 4GB logical volume in the vg0 volume group.
Objectives:
Perform this exercise on the Storage1 server (unless otherwise directed by the instructor)
An LVM volume group named vg0 with at least 4GB of free space must exist to
successfully complete this exercise.
1. Use the following command to create 1 Logical Volume in the vg0 Volume Group:
lvcreate -L 4G -n data1 vg0
2. Verify that the logical volume was created by entering the following command:
lvs
(End of Exercise)
Objectives:
In this exercise you will use the CLI tools to configure an iSCSI target server.
Perform this exercise on the storage1 machine (unless directed otherwise by the instructor).
1. On the iSCSI target server, open a terminal window and if not already logged in as
the root user, enter su to become root. When prompted for the root user's password, enter novell
2. Enter the following commands to enable the iscsi target server to start at boot time
and then start now:
insserv iscsitarget
rciscsitarget start
(End of Exercise)
5.4 Create an iSCSI Target for the data1 LUN from the CLI
Description:
In this exercise you will use the command line iSCSI tools to create a target containing 1
LUN on the iSCSI target server.
Objectives:
Perform this exercise on the iSCSI target server machine (most likely storage1) as directed.
You must have an LVM volume group named vg0 that contains at least one 4GB logical
volume named data1 to perform this exercise.
1. In the text editor of your choice, open the /etc/ietd.conf file to be edited (as the
root user).
2. If it exists, and is not already commented out, comment out the example target line
as follows:
#Target iqn.2001-04.com.example:storage.disk2.sys1.xyz
Lun 0 Path=/dev/vg0/data1,Type=fileio,ScsiId=data1-0
rciscsitarget restart
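Taken together, the new entry in /etc/ietd.conf might look like the following stanza (the target IQN shown here is an assumption; use the target name assigned for your lab):

Target iqn.2011-01.com.example:storage.data1
Lun 0 Path=/dev/vg0/data1,Type=fileio,ScsiId=data1-0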
(End of Exercise)
In this exercise you will use the CLI tools to configure and then discover targets with an
iSCSI initiator.
Objectives:
You must have at least one target already configured on the target server to complete this
exercise.
You will need the IP address of the iSCSI Target Server.
TARGET_LAN_IP= _______________________________
TARGET_SAN1_IP=_______________________________
TARGET_SAN2_IP=_______________________________
TARGET_DRBD_IP=_______________________________
TARGET_NAME=______________________________
1. On Node1, open a terminal window and if not already logged in as the root user,
enter su to become root. When prompted for the root user's password, enter
novell
InitiatorName=iqn.1996-04.de.suse:NODENAME
4. In the text editor of your choice, open the /etc/iscsi/iscsid.conf file to be edited.
Edit the following line to match:
node.startup = automatic
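The discovery commands themselves are not visible in this copy; with the open-iscsi tools they are typically along these lines, using the TARGET_SAN portal addresses recorded above:

iscsiadm -m discovery -t sendtargets -p TARGET_SAN1_IP
iscsiadm -m discovery -t sendtargets -p TARGET_SAN2_IP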
3. Enter the following command to delete the unneeded portals to the target on the
target server from the iSCSI initiator's discovered target database:
iscsiadm -m node -o delete -p TARGET_LAN_IP
6. Enter the following command to see the disks the machine is now connected to:
ls -l /dev/disk/by-path | grep iscsi
You should see a list of the iscsi disks the machine is connected to
(End of Exercise)
Objectives:
The cluster must be running and have quorum to perform this exercise
DEVICE=_______________________________________
DIRECTORY=___________________________________
FSTYPE=_______________________________________
2. Select Connection > Login from the menu bar (or click the Login to Cluster
button on the button bar)
3. In the Login pop-up window, enter the following information in the respective fields
and click OK to log in:
Server:(Port): 127.0.0.1
6. On the Add Primitive Basic Settings screen, enter or select the required information
using the drop-down lists and check boxes
7. Click Forward
13. Select the fstype attribute from the attributes list and click Edit
14. In the nvpair window, in the value field, enter FSTYPE and click OK
15. Click OK
You should now see a new resource listed in the list of cluster resources
NOTE: You could also create the cluster resource with the crm command as
follows:
crm configure
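The full command is not shown above; an equivalent Filesystem primitive might look like this sketch, substituting the DEVICE, DIRECTORY, and FSTYPE values recorded at the start of the exercise (the resource ID fs_1 and the monitor interval are assumptions):

primitive fs_1 ocf:heartbeat:Filesystem \
params device="DEVICE" directory="DIRECTORY" fstype="FSTYPE" \
op monitor interval="20s"
commit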
17. Open a terminal window and if not already logged in as the root user, enter su
to become root. When prompted for the root user's password, enter novell
18. Enter the following command to create the mount point directory for the cluster
managed storage volume:
mkdir -p DIRECTORY
Repeat this step on each of the cluster nodes that the cluster managed storage
resource can run on
19. On only one of the cluster nodes, enter the following command to create a file
system on the SAN LUN:
mkfs.FSTYPE DEVICE
Answer Y to format the entire device and not just a partition on the device.
WARNING: If you have an SBD STONITH device configured, due to
limitations in the lab environment, you may want to stop the SBD cluster resource
before formatting the SAN volume. You may start it again after the formatting is
complete.
3. To show the resources relative to the cluster nodes, click the Group Resources by
Node button on the top left of the button bar (the button looks like 4 Green dots in
a vertical line next to a downward pointing arrow)
(End of Exercise)
This section covers how to configure an HA cluster to avoid, detect, and manage a split-brain scenario.
In this exercise, you create a 4MB logical volume in the vg0 volume group for use as an
SBD device in a cluster.
Objectives:
Perform this exercise on the Storage1 server (unless otherwise directed by the instructor)
An LVM volume group named vg0 with at least 4MB of free space must exist to
successfully complete this exercise.
1. Use the following command to create 1 Logical Volume in the vg0 Volume Group:
lvcreate -L 4M -n sbd vg0
2. Verify that the logical volume was created by entering the following command:
lvs
(End of Exercise)
6.2 Create an iSCSI Target for the SBD Device from the CLI
Description:
In this exercise you will use the command line iSCSI tools to create a target containing 1
LUN on the iSCSI target server.
Objectives:
Perform this exercise on the iSCSI target server machine (most likely storage1) as directed.
You must have an LVM volume group named vg0 that contains at least 1 logical volume
named sbd to perform this exercise.
Lun 0 Path=/dev/vg0/sbd,Type=fileio,ScsiId=sbd-0
(End of Exercise)
Objectives:
In this exercise you will use the CLI tools to discover targets with an iSCSI initiator.
You must have at least one target already configured on the target server to complete this
exercise.
You will need the IP address of the iSCSI Target Server.
TARGET_LAN_IP= _______________________________
TARGET_SAN1_IP=_______________________________
TARGET_SAN2_IP=_______________________________
TARGET_DRBD_IP=_______________________________
TARGET_NAME=______________________________
1. On Node1, open a terminal window and if not already logged in as the root user,
enter su to become root. When prompted for the root user's password, enter
novell
3. Enter the following command to delete the unneeded portals to the target on the
target server from the iSCSI initiator's discovered target database:
iscsiadm -m node -o delete -p TARGET_LAN_IP
(End of Exercise)
Objectives:
In this exercise, you configure SBD STONITH as a cluster resource to provide node
fencing.
The cluster must be running and have quorum to perform this exercise
The SBD LUN must be connected to perform this exercise
SBD_DEVICE=__________________________________________
VHOST1=_______________________________________________
VHOST2=_______________________________________________
1. On Node1, if not already logged in as the root user, enter su to become root.
When prompted for the root user's password, enter novell
2. In the text editor of your choice, open (or create if missing) the /etc/sysconfig/sbd
file.
3. Add the following lines (or edit them if they already exist) to match:
SBD_DEVICE=SBD_DEVICE
SBD_OPTS=-W
5. Enter the following commands to synchronize this file with the other cluster nodes:
csync2 -f /etc/sysconfig/sbd
csync2 -xv
MODULES_LOADED_ON_BOOT variable
11. Enter the following command to copy the modified file to the other cluster nodes:
scp /etc/sysconfig/kernel VHOST2:/etc/sysconfig
Repeat this command for each cluster node
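The edit to the MODULES_LOADED_ON_BOOT variable referred to above normally loads a watchdog kernel module, which the SBD daemon requires. A sketch of the line in /etc/sysconfig/kernel, assuming the software watchdog is used (a hardware watchdog module would be listed instead if the machine has one):

MODULES_LOADED_ON_BOOT="softdog"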
13. Enter the following command to create the node's slot on the SBD device:
sbd -d SBD_DEVICE allocate NODE_NAME
Repeat this command on the other cluster nodes
NOTE: This step is not required; however, it is a good idea because it manually
ensures that the cluster nodes get a slot on the SBD device.
14. Enter the following command to verify the SBD device was initialized:
sbd -d SBD_DEVICE dump
15. Restart the openais daemon to have it start the SBD daemon:
rcopenais restart
IMPORTANT: If the sbd device is not connected to the cluster node when the
corosync/openais daemon tries to start, the openais daemon will not start
16. Enter the following command to see that the SBD daemon is writing to the SBD
device:
sbd -d SBD_DEVICE list
For each node that has started the SBD daemon you should see the node's name
listed in a slot and the nodes status
3. In the Login pop-up window, enter the following information in the respective fields
Server:(Port): 127.0.0.1
5. On the Policy Engine tab, ensure that the Stonith Enabled value is checked
8. On the Add Primitive Basic Settings screen, enter or select the following information
using the drop-down lists and check boxes:
ID: SBD
Class: stonith
Type: external/sbd
9. Click Forward
10. On the Add Primitive Summary of ... screen, select the Instance Attributes tab
11. Select sbd_device from the attributes list and click Edit
12. In the Edit nvpair window, in the value field, enter SBD_DEVICE and then click
OK
13. Back on the Add Primitive Summary of ... screen click Apply
You should now see a new primitive resource listed in the list of cluster resources
on the Primitive tab
NOTE: You could also create the STONITH cluster resource using the crm
command as follows:
crm configure
property stonith-enabled=true
primitive SBD stonith:external/sbd \
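# The rest of this primitive is cut off in this copy of the manual.
# A plausible sketch of the remaining lines, reusing the SBD_DEVICE
# value recorded at the start of the exercise:
params sbd_device="SBD_DEVICE"
commit
quit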
2. Select the first instance of the SBD resource and click the Start Resource button
on the button bar (the button looks like a right pointing triangle)
You should see that the SBD resource is now running on a cluster node
3. To show the resources relative to the cluster nodes, click the Group Resources by
Node button on the top left of the button bar (the button looks like 4 Green dots in
a vertical line next to a downward pointing arrow)
1. On Node1 (as root) enter the following command to test the SBD resource:
sbd -d SBD_DEVICE message node2 reset
You should see the node2 machine reboot
(End of Exercise)
This section covers how to configure safe logical storage resources with LVM in a cluster.
In this exercise, you create four 1GB logical volumes on the shared storage server.
Objectives:
Perform this exercise on the Storage1 server (unless otherwise directed by the instructor)
An LVM volume group named vg0 with at least 4GB of free space must exist to
successfully complete this exercise.
1. Use the following commands to create 4 Logical Volumes in the vg0 Volume Group:
lvcreate -L 1G -n lun0 vg0
lvcreate -L 1G -n lun1 vg0
lvcreate -L 1G -n lun2 vg0
lvcreate -L 1G -n lun3 vg0
2. Verify that the logical volumes were created by entering the following command:
lvs
(End of Exercise)
In this exercise you will use the command line iSCSI tools to create a target containing 4
LUNs on the iSCSI target server.
Objectives:
Perform this exercise on the iSCSI target server machine (most likely storage1) as directed.
You must have an LVM volume group named vg0 that contains at least 4 logical volumes to
perform this exercise.
1. In the text editor of your choice, open the /etc/ietd.conf file to be edited (as the
root user).
Lun 0 Path=/dev/vg0/lun0,Type=fileio,ScsiId=4_luns-0
Lun 1 Path=/dev/vg0/lun1,Type=fileio,ScsiId=4_luns-1
Lun 2 Path=/dev/vg0/lun2,Type=fileio,ScsiId=4_luns-2
Lun 3 Path=/dev/vg0/lun3,Type=fileio,ScsiId=4_luns-3
rciscsitarget restart
(End of Exercise)
Objectives:
In this exercise you will use the CLI tools to discover targets with an iSCSI initiator.
You must have at least one target already configured on the target server to complete this
exercise.
You will need the IP address of the iSCSI Target Server.
TARGET_LAN_IP= _______________________________
TARGET_SAN1_IP=_______________________________
TARGET_SAN2_IP=_______________________________
TARGET_DRBD_IP=_______________________________
TARGET_NAME=______________________________
1. On Node1, open a terminal window and if not already logged in as the root user,
enter su to become root. When prompted for the root user's password, enter
novell
3. Enter the following command to delete the unneeded portals to the target on the
target server from the iSCSI initiator's discovered target database:
iscsiadm -m node -o delete -p TARGET_LAN_IP
(End of Exercise)
Objectives:
In this exercise, you configure a cloned resource group for the DLM and cLVM daemon
resources using the crm command line.
1. In the text editor of your choice, open the /etc/lvm/lvm.conf file to be edited.
2. Find the locking_type parameter in the global {} section and change the 1 to a 3 as
follows:
locking_type = 3
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Enter the following command to open the crm CLI in configure mode:
crm configure
3. Enter the following commands to create a primitive resource for dlm and clvmd:
primitive dlm ocf:pacemaker:controld \
op monitor interval=10 timeout=20
primitive clvm ocf:lvm2:clvmd \
op monitor interval=10 timeout=20
group base_strg_grp dlm clvm
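The group above still needs to be cloned so the DLM and cLVM daemons run on every node. The following is a sketch of the remaining configure-mode commands; the clone name base_strg-clone is an assumption, chosen to match the order constraint used in a later exercise:

```
clone base_strg-clone base_strg_grp \
    meta interleave=true
commit
```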
View the resource state from the command line by entering the following command:
crm_mon -ri 1
You should see the new cloned resource group listed along with its state.
In the crm_mon terminal window you should see the new cloned resource group listed as started.
3. In the crm_mon terminal window, press Ctrl+C to close the crm_mon utility
(End of Exercise)
Description:
In this exercise you will use the LVM command line commands to create an LVM volume
group from multiple disks.
Objectives:
Perform this exercise on only one of the cluster nodes (unless directed otherwise by the instructor).
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Find and record the list of SAN LUNs that will be used in the clustered LVM volume by entering the following command at the command line of one of the cluster nodes (they are most likely the ones you just connected to in a previous exercise):
ls -l /dev/disk/by-path
List the persistent device file names of each SAN LUN that will be in the clustered
LVM volume group:
_____________________________________________________
_____________________________________________________
_____________________________________________________
_____________________________________________________
3. For each of the disks that will be used in the volume group (recorded above), use the following command to create LVM Physical Volume(s), replacing BLOCKDEV with the block device filename of the disk:
pvcreate /dev/BLOCKDEV
Repeat this command for each of the disks that will be used in the volume group
4. Use the following command to create an LVM Volume Group named cvg0 (list
block devices as a space delimited list):
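The vgcreate command itself is lost to the page break. For a cluster-aware volume group it would resemble the following sketch; DISK1 and DISK2 are placeholders for the persistent device names recorded in step 2:

```
# -c y marks the volume group as clustered (managed by clvmd)
vgcreate -c y cvg0 /dev/disk/by-path/DISK1 /dev/disk/by-path/DISK2
```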
6. To see how the physical volumes created are being used, enter the following command:
pvdisplay
(End of Exercise)
In this exercise, you will create a logical volume in the cvg0 volume group on the cluster
nodes.
Objectives:
Perform this exercise on only one of the cluster nodes (unless directed otherwise by the
instructor).
An LVM volume group named cvg0 with at least 512MB of free space must exist to
successfully complete this exercise.
1. On one of the cluster nodes, if not already logged in as the root user, enter su to
become root. When prompted for the root user's password, enter novell
2. Use the following command to create one logical volume in the cvg0 volume
group:
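The lvcreate command is elided here. Given the 512 MB free-space prerequisite and the datavol2 device name used when formatting the volume below, a plausible form is:

```
lvcreate -L 512M -n datavol2 cvg0
```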
3. Verify that the logical volume was created by entering the following command:
lvs
Task II: Format the Basic LVM Logical Volume with a File System
1. On the same cluster node, enter the following command to create a file system on the logical volume:
mkfs.ext3 /dev/cvg0/datavol2
2. Enter the following command to create a mount point for the logical volume and
mount it:
mkdir /data2
Create this directory on all cluster nodes.
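The mount command itself is not shown; on the node where you created the file system it would be:

```
mount /dev/cvg0/datavol2 /data2
```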
(End of Exercise)
Objectives:
In this exercise, you configure a cloned resource for a clustered LVM volume group using
the crm command line.
2. Enter the following command to open the crm CLI in configure mode:
crm configure
3. Enter the following commands to create a clone resource to activate the clustered LVM volume group on all of the cluster nodes:
commit
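Only the commit survives from this step. The following is a sketch of the likely primitive and clone definitions, using the heartbeat LVM resource agent; the primitive name cvg0_prim is an assumption, chosen so that the clone carries the cvg0 name referenced by the order constraint and start command later in this exercise:

```
primitive cvg0_prim ocf:heartbeat:LVM \
    params volgrpname=cvg0 \
    op monitor interval=60 timeout=60
clone cvg0 cvg0_prim \
    meta interleave=true
```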
4. While still at the crm(live)configure# prompt, enter the following to create order
constraints to ensure that the resources start in the appropriate order:
order cvg0_after_base_strg_clone inf: base_strg-clone cvg0
commit
quit
5. View the resource state from the command line by entering the following command:
crm_mon -ri 1
1. Open another terminal window. If not already logged in as the root user, use su to become root using the same password as above.
2. Enter the following command to start the new resource:
crm resource start cvg0
In the crm_mon terminal window you should see the new resource listed as started
3. In the crm_mon terminal window, press Ctrl+C to close the crm_mon utility
(End of Exercise)
Objectives:
In this exercise, you configure a resource that mounts a filesystem that resides on an LVM
volume in a clustered LVM volume group using the crm command line.
You must have a clustered LVM volume group named cvg0 to perform this exercise.
Use the following values in this exercise:
DEVICE=_________________________________________
DIRECTORY=_____________________________________
FSTYPE=_________________________________________
2. Enter the following command to open the crm CLI in configure mode:
crm configure
3. Enter the following commands to create a resource that mounts the filesystem on the clustered LVM volume:
meta target-role=stopped
commit
4. While still at the crm(live)configure# prompt, enter the following to create order
constraints to ensure that the resources start in the appropriate order:
order data2vol_after_cvg0 inf: cvg0 data2vol
commit
quit
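The primitive behind step 3 is elided except for its meta attribute. The following is a sketch using the heartbeat Filesystem resource agent, the values recorded above, and the data2vol name taken from the order constraint; the monitor timings are assumptions:

```
primitive data2vol ocf:heartbeat:Filesystem \
    params device=DEVICE directory=DIRECTORY fstype=FSTYPE \
    op monitor interval=20 timeout=40 \
    meta target-role=stopped
```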
1. Open another terminal window. If not already logged in as the root user, use su
to become root using the same password as above.
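Step 2 is missing here; paralleling the previous exercise, it presumably starts the new resource:

```
crm resource start data2vol
```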
In the crm_mon terminal window you should see the new resource listed as started
3. In the crm_mon terminal window, press Ctrl+C to close the crm_mon utility
(End of Exercise)
This section covers how to configure cluster-safe active/active storage resources with OCFS2 in a cluster.
In this exercise, you will create an 8GB logical volume in the vg0 volume group.
Objectives:
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Use the following commands to create a logical volume in the vg0 Volume Group:
lvcreate -L 8G -n ocfs2 vg0
3. Verify that the logical volume was created by entering the following command:
lvs
(End of Exercise)
8.2 Create an iSCSI Target with 1 LUN for OCFS2 from the CLI
Description:
In this exercise you will use the command line iSCSI tools to create a target containing 1 LUN on the iSCSI target server.
Objectives:
Perform this exercise on the iSCSI target server machine (most likely storage1) as directed.
You must have an LVM volume group named vg0 that contains at least 1 logical volume
named ocfs2 to perform this exercise.
1. In the text editor of your choice, open the /etc/ietd.conf file to be edited (as the
root user).
Lun 0 Path=/dev/vg0/ocfs2,Type=fileio,ScsiId=ocfs2-0
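As in the earlier target exercise, the Lun line belongs inside a Target block; the following is a sketch with a placeholder IQN (use the target name assigned for your lab):

```
Target iqn.2011-01.com.example:storage.ocfs2
    Lun 0 Path=/dev/vg0/ocfs2,Type=fileio,ScsiId=ocfs2-0
```

Restart the target service afterward with rciscsitarget restart so ietd re-reads the configuration.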
(End of Exercise)
Objectives:
In this exercise you will use the CLI tools to discover targets with an iSCSI initiator.
You must have at least one target already configured on the target server to complete this
exercise.
You will need the IP address of the iSCSI Target Server.
TARGET_LAN_IP= _______________________________
TARGET_SAN1_IP=_______________________________
TARGET_SAN2_IP=_______________________________
TARGET_DRBD_IP=_______________________________
TARGET_NAME=______________________________
1. On Node1, open a terminal window and if not already logged in as the root user,
enter su to become root. When prompted for the root user's password, enter
novell
3. Enter the following command to delete the unneeded portals to the target on the
target server from the iSCSI initiator's discovered target database:
iscsiadm -m node -o delete -p TARGET_LAN_IP
(End of Exercise)
Objectives:
WARNING: Before you perform this step, if you have the SBD STONITH daemon configured and it is using the watchdog device, you may need to disable the watchdog device and stop the SBD cluster resource. Because of the limitations of the lab environment, a temporary loss of SAN communication could otherwise cause the cluster node(s) to reboot unintentionally.
2. Enter the following command to create the mount point directory for the OCFS2
volume:
mkdir -p MOUNT_POINT
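Step 1, creating the OCFS2 file system itself, is not shown. With the OCFS2 tools it would resemble the following sketch; DEVICE and the node-slot count are assumptions for this lab:

```
# -N sets the number of node slots (assumes a two-node cluster)
mkfs.ocfs2 -N 2 /dev/DEVICE
```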
(End of Exercise)
Objectives:
In this exercise, you configure a cloned resource group for the DLM, cLVM daemon, and
O2CB resources using the crm command line.
1. In the text editor of your choice, open the /etc/lvm/lvm.conf file to be edited.
2. Find the locking_type parameter in the global {} section and change the 1 to a 3 as
follows:
locking_type = 3
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Enter the following command to open the crm CLI in configure mode:
crm configure
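The configure-mode commands for this step are lost to the page break. By analogy with the earlier DLM/cLVM exercise, the cloned group might be built as follows; the o2cb primitive and the group and clone names are assumptions:

```
primitive dlm ocf:pacemaker:controld \
    op monitor interval=10 timeout=20
primitive clvm ocf:lvm2:clvmd \
    op monitor interval=10 timeout=20
primitive o2cb ocf:ocfs2:o2cb \
    op monitor interval=10 timeout=20
group base_strg_grp dlm clvm o2cb
clone base_strg-clone base_strg_grp \
    meta interleave=true
commit
```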
quit
4. View the resource state from the command line by entering the following
command:
crm_mon -ri 1
You should see the new cloned resource group listed along with its state
In the crm_mon terminal window you should see the new cloned resource group
listed as started
3. In the crm_mon terminal window, press Ctrl+C to close the crm_mon utility
(End of Exercise)
Objectives:
In this exercise, you configure an OCFS2 volume to be a cluster managed clone resource
using the crm command line.
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Enter the following command to open the crm CLI in configure mode:
crm configure
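Step 3 is elided. An OCFS2 mount managed as a clone, so that it is active on all nodes at once, might be defined as follows; the primitive name and parameter values are assumptions consistent with the ocfs2vol_clone name used in the next exercise:

```
primitive ocfs2vol ocf:heartbeat:Filesystem \
    params device=DEVICE directory=MOUNT_POINT fstype=ocfs2 \
    op monitor interval=20 timeout=40
clone ocfs2vol_clone ocfs2vol \
    meta interleave=true
commit
```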
4. View the resource state from the command line by entering the following
command:
crm_mon -ri 1
You should see the new clone resource listed along with its state
3. In the crm_mon terminal window, press Ctrl+C to close the crm_mon utility
(End of Exercise)
Objectives:
In this exercise, you configure an order constraint using the crm command line.
1. If not already logged in as the root user, enter su to become root. When
prompted for the root user's password, enter novell
2. Enter the following command to open the crm CLI in configure mode:
crm configure
3. Enter the following commands to create an order constraint for HASI and the
OCFS2 volume:
ocfs2vol_clone
commit
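Only the tail of the order command survives above. The full constraint presumably starts the cloned storage infrastructure before the OCFS2 volume clone, along these lines; the constraint name and the base clone name are assumptions:

```
order ocfs2vol_after_base inf: base_strg-clone ocfs2vol_clone
```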
4. While still at the crm(live)configure# prompt, enter the following command to see
the new order constraint:
show
You should see the new order constraint. (You might need to arrow down to see the constraint if you have a lot of resources. Press q to close the built-in pager if it opens.)
5. Enter the following command at the crm(live)configure# prompt to exit the crm
CLI:
quit
You should now be back at a shell prompt
(End of Exercise)