
Installations

Introduction to NPACI Rocks


NPACI Rocks Cluster Distribution Project by
National Partnership for Advanced Computational Infrastructure (NPACI)
San Diego Supercomputer Center (SDSC)

http://rocks.npaci.edu/ or http://www.rocksclusters.org/
Latest release version is 4.0.0.



NPACI Rocks Cluster Distribution


Goal: make clusters easy.
Uses an SQL database to store the definitions of global configurations.

Supported hardware (processors):
  x86 (IA-32, AMD, etc.)
  IA-64 (Itanium, McKinley, etc.)
  x86_64 (AMD Opteron)

Rocks is based on CentOS 4.0 (a rebuild of RHEL 4.0).



Physical Assembly
Frontend
Nodes of this type are exposed to the outside world. Many services (NFS, NIS, DHCP, NTP, MySQL, HTTP, ...) run on these nodes.

Compute
These are the workhorse nodes.

Ethernet Network
All compute nodes are connected with Ethernet on the private network. Every machine in the cluster is called a node.

The Rocks cluster architecture


The Rocks & Rolls


The Rocks & Rolls contained:

Boot Roll: 1 disc (required)
OS Roll: 4 discs (discs 1 and 2 required)

Frontend configurations and the rolls each one contains:
  bare bones: base, hpc, kernel
  compute [recommended]: area51, base, ganglia, hpc, java, kernel, sge
  grid: area51, base, ganglia, grid, hpc, java, kernel, sge
  viz: area51, base, ganglia, hpc, java, kernel, viz
  jumbo: area51, base, ganglia, grid, hpc, java, kernel, myrinet, os, sge, viz


The Rocks & Rolls (contd)


Rocks Base
The Linux system and standard cluster tools.

HPC Roll
High Performance Computing: contains additional libraries for high-performance computing, such as the MPI (Message Passing Interface) library for parallel programming.
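As a minimal sketch of what the HPC Roll enables (hello.c and the machines file are hypothetical; mpicc and mpirun are the standard wrappers shipped with MPI implementations such as the bundled MPICH), an MPI job can be compiled and launched from the frontend:

# mpicc -o hello hello.c
# mpirun -np 2 -machinefile machines ./hello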

The Rocks & Rolls (contd)


SGE (Sun Grid Engine)
SGE is distributed resource management software that schedules and dispatches jobs across the cluster.
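A typical interaction with SGE (a hedged sketch; myjob.sh is a hypothetical job script, while qsub and qstat are standard SGE commands) is to submit a script and then watch its state in the queue:

# qsub myjob.sh
# qstat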

Grid Roll
Uses the NSF Middleware Initiative (NMI) Release 3.1 to provide Globus connectivity.
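Once Globus connectivity is configured, a quick smoke test (a sketch; the gatekeeper hostname is a placeholder, and globus-job-run is a standard Globus Toolkit command) is to run a trivial command through the gatekeeper:

# globus-job-run frontend.example.org /bin/hostname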

The Rocks & Rolls (contd)


Intel Roll
Installs and configures the Intel C compiler (version 8.0) and the Intel Fortran compiler (version 8.0) for x86 or IA-64 machines.
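After the roll is installed, the Intel compilers are invoked like any other compiler driver (a sketch; hello.c and hello.f90 are hypothetical sources):

# icc -O2 -o hello_c hello.c
# ifort -O2 -o hello_f hello.f90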

The PBS Roll


Installs and configures the Open Portable Batch System (OpenPBS) scheduler.
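PBS follows the same submit-and-queue workflow as SGE, with resource requests carried in #PBS directives inside the job script. A job script might look like this (a hedged sketch; hello.pbs and the hello binary are hypothetical, while the #PBS directives and qsub are standard PBS):

#!/bin/sh
#PBS -N hello
#PBS -l nodes=2
cd $PBS_O_WORKDIR
./hello

Submit it from the frontend:

# qsub hello.pbs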

Minimum Hardware Requirements


Frontend Node
  Disk capacity: 16 GB
  Memory capacity: 512 MB
  Ethernet: 2 physical ports (e.g., "eth0" and "eth1")

Compute Node
  Disk capacity: 16 GB
  Memory capacity: 512 MB
  Ethernet: 1 physical port (e.g., "eth0")

Installation Preparation
In our workshop, we will install a Rocks pack consisting of:
  Boot Roll (compute configuration)
  OS Roll Disc 1
  OS Roll Disc 2
  Grid Roll

Check your cluster yourself before starting.



Rocks Installation
Pick the discs: the Rocks Base disc 1 and the HPC Roll. Insert the Rocks Base CD into your frontend machine and reset it. After the frontend boots off the CD, you will see the boot screen:


Rocks installation 1

When you see the boot screen, type: frontend



You'll see a screen that looks like:



Rocks installation 2

After the CD/DVD drive ejects, insert the OS Roll Disc 1 CD and select 'Ok'.

Rocks installation 3

If you have no other rolls, choose 'No'.



Rocks installation 4

Fill in the information about your cluster.



Rocks installation 5

Automatic partitioning is the default.



Automatic partitioning
  Partition Name   Size
  /                6 GB
  swap             1 GB
  /export          remainder of root disk
                   (symbolically linked to /state/partition1)


Rocks installation 6

Configure eth0 for the private network.

It is recommended that you accept the defaults.



Rocks installation 7

Configure eth1 for the public network.

Set up the networking parameters for connecting to the outside network.

Rocks installation 8

Configure the Gateway and DNS.



Rocks installation 9

Configure the time



Rocks installation 10

Input the root password



Rocks installation 11

As in the example screen above, insert the roll CD into the drive and select 'Ok'.


Rocks installation 12

The packages will be installed.



Rocks installation 13
Then the installer will ask for each of the roll CDs you added at the beginning of the frontend installation. Put the appropriate roll CD in the drive when prompted and hit 'Ok'. After the last roll CD is installed, the machine will reboot.


Rocks installation 14

The first time you log into the frontend, the system will ask you for an SSH passphrase. Just press Enter to skip it. Remember: do not fill in any data!
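Accepting the empty passphrase lets the generated SSH key log you into nodes without a password later on (a sketch; compute-0-0 stands for any compute node once it has been installed, as in the next section):

# ssh compute-0-0 hostname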


Install compute node


Compute node installation
1. Log in to the frontend node as root.
2. Run a program that captures compute node DHCP requests and puts their information into the Rocks MySQL database:
# insert-ethers

You will see the next screen:



Install compute node (contd)


Take the Rocks Base disc 1 and put it in your first compute node. If you don't have a CD drive in your compute nodes, you can use PXE (network boot).


Install compute node (contd)


When the frontend machine receives the DHCP request from the compute node, it will be displayed for a few seconds, and then you'll see the following:


Install compute node (contd)


Press F1 to exit insert-ethers. The default name of a compute node is compute-X-X. The first digit is called the cabinet; it is the same as the cluster group number. To install another cabinet, restart insert-ethers with the --cabinet option:
# insert-ethers --cabinet=1

If you use the command above, the compute nodes will be named compute-1-X.

Remove a compute node from the cluster


Command
# insert-ethers --remove="[your compute node name]"

Example: remove compute-0-1


# insert-ethers --remove="compute-0-1"


cluster-fork Command.
cluster-fork is a command for distributing another command to all nodes in the cluster. Example:
#cluster-fork poweroff

In this example, the poweroff command is distributed to all compute nodes in the cluster. After a few minutes, every compute node will shut down.
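cluster-fork works with any command; for instance, a quick health check across all nodes (a sketch; uptime is just one example of a command to distribute):

# cluster-fork uptime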

The End
