Foundation 2.1
26-Aug-2015
Notice
Copyright
Copyright 2015 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions

Convention            Description
variable_value        A value that depends on your environment.
ncli> command         The command is executed in the Nutanix nCLI.
user@host$ command    The command is executed as a non-root user in the system shell.
root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.
> command             The command is executed in the Hyper-V host shell.
output                Information that is displayed as output from a command.
Interface             Target                  Username        Password
Web console           Nutanix Controller VM   admin           admin
vSphere client        ESXi host               root            nutanix/4u
SSH client            ESXi host               root            nutanix/4u
SSH client            Acropolis host          root            nutanix/4u
SSH client            Hyper-V host            Administrator   nutanix/4u
SSH client            Nutanix Controller VM   nutanix         nutanix/4u
Version
Last modified: August 26, 2015 (2015-08-26 16:25:24 GMT-7)
Contents
Release Notes .......................................................... 5
Imaging Nodes .......................................................... 7
    Summary: Imaging a Cluster ......................................... 7
    Summary: Imaging a Node ............................................ 8
Supported Hypervisors .................................................. 8
Preparing a Workstation ................................................ 9
Setting Up the Network ................................................ 13
3: Imaging a Cluster .................................................. 15
4: Imaging a Node ..................................................... 29
    Installing a Hypervisor ........................................... 29
        Installing ESXi ............................................... 32
        Installing Hyper-V ............................................ 33
        Installing KVM ................................................ 37
    Installing the Controller VM ...................................... 44
Foundation Files ...................................................... 50
Phoenix Files ......................................................... 51
Release Notes
Foundation Release 2.1.2
This release includes the following enhancements and changes:
- Added support for the NX-6035C. Foundation will image the NX-6035C using KVM in parallel with ESXi imaging on other nodes.
- Foundation now installs NOS using the upgrade tarball rather than a Phoenix ISO, and it supports any NOS version greater than 3.5. You can download the Foundation files and the NOS tarball from the support portal (see Downloading Installation Files on page 48).
- Foundation now installs KVM using a Nutanix KVM ISO. The ISO is included in Foundation by default at /home/nutanix/foundation/isos/hypervisor/kvm, and future updates will be made available on the support portal.
- Foundation supports KVM on NOS version 4.1 or later; it does not support KVM on earlier NOS versions.
- Previous versions of Foundation used a 24 GB virtual disk, but Foundation 2.1 uses a 30 GB virtual disk.
Release Notes | Field Installation Guide | Foundation | 5
- There is a new procedure when using Phoenix to install KVM on a node (see Installing a Hypervisor on page 29).
Because Foundation 2.1 includes a new OVF file, it is recommended that all users install Foundation
2.1 from scratch (see Preparing a Workstation on page 9). However, it is possible to upgrade
to version 2.1 from version 2.0.x using the following steps. (Upgrading from a pre-2.0 version is not
supported.)
1. Copy the Foundation tarball (foundation-version#.tar.gz) from the support portal to /home/nutanix
in your VM.
2. Navigate to /home/nutanix.
3. Enter the following five commands:
$
$
$
$
$
If the first command (foundation_service stop) is skipped or the commands are not run in order, you may see unpredictable errors after upgrading. To fix this situation, enter the following two commands:
$ sudo pkill -9 foundation
$ sudo service foundation_service restart
1: Field Installation Overview
Nutanix installs the KVM hypervisor and the Nutanix Operating System (NOS) Controller VM at the factory
before shipping a node to a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes
or to use any hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides
step-by-step instructions on how to image nodes (install a hypervisor and then the NOS Controller VM)
after they have been physically installed at a site, and how to configure the nodes into one or more clusters.
Note: Only Nutanix sales engineers, support engineers, and partners are authorized to perform
a field installation. Field installation can be used to cleanly install new nodes (blocks) in a cluster
or to install a different hypervisor on a single node. It should not be used to upgrade the
hypervisor or switch hypervisors of nodes in an existing cluster. (You can use the Foundation
tool to re-image the nodes of an existing cluster that you no longer need by first destroying the cluster.)
Imaging Nodes
A field installation can be performed for a cluster (multiple nodes that can be configured as one or more
clusters) or a single node.
Supported Hypervisors
Foundation supports imaging an ESXi, Hyper-V, or KVM hypervisor on any Nutanix hardware model, with the following exceptions and restrictions:
- Foundation does not support the NX-2000 and NX-3000 series. (This refers to the original NX-3000 series only. The NX-3050/3060 series is supported.)
- Hyper-V requires a 64 GB DOM.
- NX-7000 series:
  - ESXi version 5.1 or later is supported; earlier ESXi versions are not supported.
  - Hyper-V standard and datacenter versions are supported; the free version is not supported.
Note: See Hypervisor ISO Images on page 52 for a list of supported ESXi and Hyper-V versions.
2: Preparing Installation Environment
Imaging is performed from a workstation with access to the IPMI interfaces of the nodes in the cluster.
Imaging a cluster in the field requires first installing certain tools on the workstation and then setting up the
environment to run those tools. This requires two preparation tasks:
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)
1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to
installation. This includes downloading ISO images, installing Oracle VM VirtualBox, and using
VirtualBox to configure various parameters on the Foundation VM (see Preparing a Workstation on
page 9).
2. Set up the network. The nodes and workstation must have network access to each other through a
switch at the site (see Setting Up the Network on page 13).
Preparing a Workstation
A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the
following:
Note: You can perform these steps either before going to the installation site (if you use a portable
laptop) or at the site (if you can connect to the web).
1. Get a workstation (laptop or desktop computer) that you can use for the installation.
The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk
space (preferably SSD), and a physical (wired) network adapter.
2. Go to the Foundation and NOS download pages in the Nutanix support portal (see Downloading
Installation Files on page 48) and download the following files to a temporary directory on the
workstation.
- Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version# release, for example Foundation_VM-2.1.ovf.
- Foundation_VM-version#-disk1.vmdk. This is the corresponding Foundation VM disk file.
- Oracle VM VirtualBox installer. VirtualBox is a free open source tool used to create a virtualized environment on the workstation.
  Note: Links to the VirtualBox files may not appear on the download page for every Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)
- nutanix_installer_package-version#.tar.gz. This is the tarball used for imaging the desired NOS release. Go to the NOS Releases download page on the support portal to download this file. (You can download all the other files from the Foundation download page.)
  Note: This assumes the tar command is available. If it is not, use the corresponding tar utility for your environment.
4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.
See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).
Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment.
Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM
VirtualBox.
5. Create a new folder called VirtualBox VMs in your home directory.
On a Windows system this is typically C:\Users\user_name\VirtualBox VMs.
6. Copy the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files to the VirtualBox
VMs folder that you created in step 5.
7. Start Oracle VM VirtualBox.
a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. Click OK when prompted to Open Autorun Prompt and then click Run.
c. Enter the root password (nutanix/4u) and then click Authenticate.
d. After the installation is complete, press the return key to close the VirtualBox Guest Additions
installation window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.
Note: A reboot is necessary for the changes to take effect.
g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on
the VirtualBox window for the Foundation VM.
14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to
get an IP address from the DHCP server.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as
follows:
Note: Normally, the Foundation VM needs to be on a public network in order to copy selected
ISO files to the Foundation VM in the next two steps. This might require setting a static IP
address now and setting it again when the workstation is on a different (typically private)
network for the installation (see Imaging a Cluster on page 15).
a. Double click the set_foundation_ip_address icon on the Foundation VM desktop.
Note: Unlike Nutanix systems, which only require that you connect the 1 GbE port, Dell
systems require that you connect both the iDRAC port (which is used instead of an IPMI port)
and one of the 1 GbE ports.
3: Imaging a Cluster
This procedure describes how to install a selected hypervisor and the NOS Controller VM on multiple new
nodes and optionally configure the nodes into one or more clusters.
Before you begin:
Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type
for installation instructions.
Set up the installation environment (see Preparing Installation Environment on page 9).
Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will
get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in
the BIOS.
Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to time out during the
imaging process. Therefore, disable STP before starting Foundation.
Note: Avoid connecting any device that presents virtual media, such as a CD-ROM drive (that is,
plugging it into a USB port on a node). This could conflict with the Foundation installation when it
tries to mount the virtual CD-ROM hosting the install ISO.
Have ready the appropriate global, node, and cluster parameter values needed for installation.
Note: If the Foundation VM IP address set previously was configured in one (typically public)
network environment and you are imaging the cluster on a different (typically private) network
in which the current address is no longer correct, repeat step 13 in Preparing a Workstation on
page 9 to configure a new static IP address for the Foundation VM.
The Global Configuration screen appears. Use this screen to configure network addresses.
Note: You can access help from the gear icon pull-down menu (top right), but this
requires Internet access. If necessary, copy the help URL to a browser with Internet access.
Enter unique IPMI, hypervisor, and Controller VM IP addresses. Make sure that the addresses
match the subnets specified for the nodes to be imaged (see Configuring Node Parameters on
page 18).
If this box is not checked, Foundation requires that either all IP addresses are on the same subnet or
that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.
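When the box is not checked, a quick way to sanity-check the addresses you plan to enter is to compare their subnet portions. The following is a minimal sketch for /24 networks; the helper name and addresses are illustrative, not part of Foundation:

```shell
# Hypothetical helper: report whether two IPv4 addresses fall in the same /24
# subnet by comparing everything before the last octet.
same_subnet24() {
  if [ "${1%.*}" = "${2%.*}" ]; then
    echo "same /24 subnet"
  else
    echo "different subnets"
  fi
}

same_subnet24 10.1.1.11 10.1.1.42    # same /24 subnet
same_subnet24 10.1.1.11 10.1.2.42    # different subnets
```

Addresses on different subnets are acceptable only if they are routable to each other, as noted above.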
Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a
preconfigured block with an existing cluster and you want Foundation to image those nodes, you
must first destroy the existing cluster in order for Foundation to discover those nodes.
You can exclude a block by clicking the X on the far right of that block. The block disappears from
the display, and the nodes in that block will not be imaged. Clicking the X on the top line removes all
the displayed blocks.
To repeat the discovery process (search for unconfigured nodes again), click the Retry Discovery
button. You can reset all the global and node entries to the default state by selecting Reset
Configuration from the gear icon pull-down menu.
2. To image additional (bare metal) nodes, click the Add Blocks button.
A window appears to add a new block. Do the following in the indicated fields:
To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP
address in that field.
To specify the IPMI addresses automatically, enter a starting IP address in the top line ("Start
IP address" field) of the IPMI IP column. The entered address is assigned to the IPMI port of
the first node, and consecutive IP addresses (starting from the entered address) are assigned
automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by
position, so IP assignments are sequential. If you do not want all addresses to be consecutive,
you can change the IP address for specific nodes by updating the address in the appropriate
fields for those nodes.
Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255
because such addresses are commonly reserved by network administrators.
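One plausible reading of this assignment behavior can be sketched as follows. This is an illustration, not Foundation's actual implementation, and the starting address is chosen only to show the skip behavior:

```shell
# Assign consecutive last octets from a starting address, skipping 0, 1, 254,
# and 255 (commonly reserved by network administrators).
start=10.1.1.0; count=3
base=${start%.*}; octet=${start##*.}; assigned=""
while [ "$count" -gt 0 ]; do
  case "$octet" in
    0|1|254|255) ;;                                   # reserved: skip
    *) assigned="${assigned:+$assigned }$base.$octet"
       count=$((count - 1)) ;;
  esac
  octet=$((octet + 1))
done
echo "$assigned"    # 10.1.1.2 10.1.1.3 10.1.1.4
```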
A host name is automatically generated for each host (NTNX-unique_identifier). If these names
are acceptable, do nothing in this field.
Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The
automatically generated names might be longer than 15 characters, which would result
in the same truncated name for multiple hosts in a Windows environment. Therefore, do
not use automatically generated names longer than 15 characters when the hypervisor is
Hyper-V.
To specify the host names manually, go to the line for each node and enter the desired name in
that field.
To specify the host names automatically, enter a base name in the top line of the Hypervisor
Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first
node, and the base name with "-2", "-3" and so on are assigned automatically as the host names
of the remaining nodes. You can specify different names for selected nodes by updating the entry
in the appropriate field for those nodes.
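The automatic naming above can be sketched as follows, together with a check against the 15-character Windows computer-name limit mentioned in the caution. The base name is hypothetical:

```shell
# Generate host names base-1, base-2, ... and flag any name longer than the
# 15-character Windows computer-name limit (relevant when installing Hyper-V).
base=NTNX-BLK1
for i in 1 2 3; do
  name="$base-$i"
  if [ "${#name}" -gt 15 ]; then
    echo "$name: longer than 15 characters"
  else
    echo "$name: ok"
  fi
done
```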
h. NX-6035C : Check this box for any node that is a model NX-6035C.
Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs
are not allowed. NX-6035C nodes run KVM (and so will be imaged with KVM) regardless of what
hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 22).
4. To check which IP addresses are active and reachable, click Ping Scan (above the table on the right).
This does a ping test to each IP address in the IPMI, hypervisor, and CVM IP fields. An icon appears
next to each field indicating the ping test result for that node (returned response or no response).
This feature is most useful when imaging a previously unconfigured set of nodes. None of
the selected IPs should be pingable. Successful pings usually indicate a conflict with the existing
infrastructure.
Note: When re-imaging a configured set of nodes using the same network configuration, failure
to ping indicates a networking issue.
5. Click the Next button at the bottom of the screen to select the images to use (see Configuring Image
Parameters on page 22).
1. Select the hypervisor to install from the pull-down list on the left.
The following choices are available:
ESX. Selecting ESX as the hypervisor displays the NOS Package and Hypervisor ISO Image fields
directly below.
Hyper-V. Selecting Hyper-V as the hypervisor displays the NOS Package, Hypervisor ISO Image,
and SKU fields.
Caution: Nodes must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on
nodes with less DOM capacity will fail.
KVM. Selecting KVM as the hypervisor displays the NOS Package and Hypervisor ISO Image fields.
2. In the NOS Package field, select the NOS package to use from the pull-down list.
Note: Click the Refresh NOS package link to display the current list of available images in
the ~/foundation/nos folder. If the desired NOS package does not appear in the list, you must
download it to the workstation (see Preparing Installation Environment on page 9).
3. In the Hypervisor ISO Image field, select the hypervisor ISO image to use from the pull-down list.
Note: Click the Refresh hypervisor image link to display the current list of available images
in the ~/foundation/isos/hypervisor/[esx|hyperv] folder. If the desired hypervisor ISO image
does not appear in the list, you must download it to the workstation (see Preparing Installation
Environment on page 9). Foundation automatically provides an ISO for KVM imaging in the
~/foundation/isos/hypervisor/kvm folder.
4. [Hyper-V only] In the SKU field, select the Hyper-V version to use from the pull-down list.
Three Hyper-V versions are supported: free, standard, and datacenter. This field appears only when
you select Hyper-V.
5. When all the settings are correct, do one of the following:
To create a new cluster, click the Next button at the bottom of the screen (see Configuring Cluster
Parameters on page 23).
To start imaging immediately (bypassing cluster configuration), click the Run Installation button at
the top of the screen (see Monitoring Progress on page 25).
1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster in the
Cluster Creation section at the top of the screen.
This section includes a table that is empty initially. A blank line appears in the table for the new cluster.
Enter the following information in the indicated fields:
a. Cluster Name: Enter a cluster name.
b. External IP: Enter an external (virtual) IP address for the cluster.
This field sets a logical IP address that always points to an active Controller VM (provided the cluster
is up), which removes the need to enter the address of a specific Controller VM. This parameter is
required for Hyper-V clusters and is optional for ESXi and KVM clusters. (This applies to NOS 4.0 or
later; it is ignored when imaging an earlier NOS release.)
c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL.
Enter a comma separated list to specify multiple server addresses in this field (and the next two
fields).
d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL.
You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable
or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start.
Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active
Directory domain controller.
e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL.
f. Max Redundancy Factor: Select a redundancy factor (2 or 3) for the cluster from the pull-down list.
This parameter specifies the number of times each piece of data is replicated in the cluster (either 2
or 3 copies). It sets how many simultaneous node failures the cluster can tolerate and the minimum
number of nodes required to support that protection.
Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of
any single node or drive.
Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure
of any two nodes or drives in different blocks. A redundancy factor of 3 requires that the cluster
have at least five nodes, and it can be enabled only when the cluster is created. It is an option on
NOS release 4.0 or later. (In addition, containers must have replication factor 3 for guest VM data
to withstand the failure of two nodes.)
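The arithmetic behind these choices can be summarized in a short sketch. The text states that redundancy factor 3 requires at least five nodes; the redundancy factor 2 minimum shown here follows the same 2*RF-1 pattern, which is an assumption rather than something stated in this guide:

```shell
# Redundancy factor (RF) determines the number of data copies, the tolerated
# simultaneous failures (RF - 1), and the minimum cluster size (2 * RF - 1).
for rf in 2 3; do
  tolerated=$((rf - 1))
  min_nodes=$((2 * rf - 1))
  echo "RF $rf: $rf copies, tolerates $tolerated failure(s), needs >= $min_nodes nodes"
done
```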
2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in
the Post Image Testing section.
Check the Diagnostics box to run a diagnostic utility on the cluster. The diagnostic utility analyzes
several performance metrics on each node in the cluster. These metrics indicate whether the cluster
is performing properly. The results are stored in the ~/foundation/logs/diagnostics directory.
Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of
tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/
logs/ncc directory. (This test is available on NOS 4.0 or later. Checking the box does nothing on an
earlier NOS release.)
3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes
field to be included in that cluster.
A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes
to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot
be assigned to more than one cluster.
Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to
add to an existing cluster, which can be done through the web console or nCLI at a later time.
4. When all settings are correct, click the Run Installation button at the top of the screen to start the
installation process (see Monitoring Progress on page 25).
Monitoring Progress
Before you begin: Complete Configuring Cluster Parameters on page 23 (or Configuring Image
Parameters on page 22 if you are not creating a cluster).
Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to
see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect
the latest features described in this section.)
When all the global, node, and cluster settings are correct, do the following:
1. Click the Run Installation button at the top of the screen.
Figure: Run Installation Button
This starts the installation process. First, the IPMI port addresses are configured. The IPMI port
configuration processing can take several minutes depending on the size of the cluster.
Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation
process stops before imaging any of the nodes. To correct a port configuration problem, see
Fixing IPMI Configuration Problems on page 56.
2. Monitor the imaging and cluster creation progress.
If IPMI port addressing is successful, Foundation moves to node imaging and displays a progress
screen. The progress screen includes the following sections:
Progress bar at the top (blue during normal processing or red when there is a problem).
Cluster Creation Status section with a line for each cluster being created (status indicator, cluster
name, progress message, and log link).
Node Status section with a line for each node being imaged (status indicator, IPMI IP address,
progress message, and log link).
The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45
minutes. You can monitor overall progress by clicking the Log link at the top, which displays the
service.log contents in a separate tab or window. Click on the Log link for a node to display the log file
for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.
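The batching rule above implies a rough time estimate; a sketch with an illustrative node count:

```shell
# Nodes are imaged in groups of up to 20, at roughly 45 minutes per group.
nodes=50
batches=$(( (nodes + 19) / 20 ))    # ceiling of nodes / 20
minutes=$((batches * 45))
echo "$nodes nodes: $batches groups, roughly $minutes minutes total"
```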
When installation moves to cluster creation, the status message for each cluster (in the Cluster
Creation Status section) displays the percentage complete and current step. Cluster creation
happens quickly, but this step could take some time if you selected the diagnostic and NCC
post-creation tests. Click on the Log link for a cluster to display the log file for that cluster in a
separate tab or window.
When processing completes successfully, an "Installation Complete" message appears, along with a
green check mark in the Status field for each node and cluster. This means IPMI configuration and
imaging (both hypervisor and NOS Controller VM) across all the nodes in the cluster was successful,
and cluster creation was successful (if enabled).
4: Imaging a Node
This procedure describes how to install the NOS Controller VM and selected hypervisor on a new or
replacement node from an ISO image on a workstation (laptop or desktop machine).
Before you begin: If you are adding a new node, physically install that node at your site. See the Physical
Installation Guide for your model type for installation instructions.
Note: You can use Foundation to image a single node (see Imaging a Cluster on page 15),
so this separate procedure is normally neither necessary nor recommended. However, the
following procedure describes how to image a single node if you decide not to use Foundation.
Imaging a new or replacement node can be done either through the IPMI interface (network connection
required) or through a direct attached USB (no network connection required). In either case the installation
is divided into two steps:
1. Install the desired hypervisor version (see Installing a Hypervisor on page 29).
2. Install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on
page 44).
Installing a Hypervisor
This procedure describes how to install a hypervisor on a single node in a cluster in the field.
Caution: The node must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on a
node with less DOM capacity will fail.
To install a hypervisor on a new or replacement node in the field, do the following:
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)
1. Verify you have access to the IPMI interface for the node.
a. Connect the IPMI port on that node to the network if it is not already connected.
A 1 or 10 GbE port connection is not required for imaging the node.
b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.
To assign a static address, see Setting IPMI Static IP Address on page 54.
2. Download the NOS (nutanix_installer_package-version#.tar.gz) and Foundation
(foundation-version#.tar.gz) tarballs from the Nutanix support portal (see Downloading Installation
Files on page 48) to the /home/nutanix directory on the workstation. (Create this directory if it does
not exist currently.) If installing ESXi or Hyper-V, also download an ESXi or Hyper-V ISO image.
Customers must provide the ESXi or Hyper-V ISO image. See Hypervisor ISO Images on page 52
for a list of supported ESXi and Hyper-V ISO images. The Foundation tarball (after unpacking in step 4)
provides a KVM ISO located in /home/nutanix/foundation/isos/hypervisor/kvm.
Note: If Foundation 2.1 or later is installed on the workstation currently, it is not necessary to
download foundation-version#.tar.gz.
3. If Foundation 2.1 or later is installed, skip to the next step. Otherwise, do the following:
$ cd /home/nutanix
$ rm -rf foundation
$ tar xzf foundation-version#.tar.gz
This removes the foundation directory if it is present and extracts the Foundation tarball (including a
new foundation directory).
4. Enter the following commands from /home/nutanix:
$ sudo pkill -9 foundation
$ gunzip nutanix_installer_package-version#.tar.gz
This kills the Foundation service if it is running and unpacks the NOS tarball.
Note: If either the tar or gunzip command is not available, use the corresponding tar or
gunzip utility for your environment.
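Step 4 uses gunzip rather than tar -xzf because the Phoenix generation step that follows expects the uncompressed .tar file. A self-contained sketch with stand-in file names (not the real NOS package):

```shell
# Create a stand-in .tar.gz, then gunzip it: the archive is decompressed in
# place, leaving a plain .tar file and removing the original .tar.gz.
echo payload > file.txt
tar -czf pkg.tar.gz file.txt
gunzip pkg.tar.gz
ls pkg.tar    # pkg.tar
```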
5. Create the Phoenix ISO by entering the following commands:
$ cd /home/nutanix/foundation
$ ./foundation --generate_phoenix --nos_package=nutanix_installer_package-version#.tar
13. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.
This causes the system to reboot using the selected hypervisor image.
Installing ESXi
Before you begin: Complete Installing a Hypervisor on page 29.
1. Click Continue at the installation screen and then accept the end user license agreement on the next
screen.
3. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.
4. In the root password screen, enter nutanix/4u as the root password.
Note: The root password must be nutanix/4u or the installation will fail.
5. Review the information on the Install Confirm screen and then click Install.
Installing Hyper-V
Before you begin: Complete Installing a Hypervisor on page 29.
1. Start the installation.
a. Press any key when the Press any key to boot from CD or DVD prompt appears.
b. Select Windows Setup [EMS Enabled] in the Windows Boot Manager screen.
c. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk
and then run the clean command:
select disk number
clean
d. Create and format a primary partition (size 1024 and file system fat32).
create partition primary size=1024
select partition 1
format fs=fat32 quick
e. Create and format a second primary partition (default size and file system ntfs).
create partition primary
select partition 2
format fs=ntfs quick
f. Assign the drive letter "C" to the DOM install partition volume.
list volume
list partition
This displays a table of logical volumes and their associated drive letter, size, and file system type.
Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which
is the DOM install partition) is drive letter "C", go to the next step.
Otherwise, do one of the following:
• If drive letter "C" is currently assigned to another volume, enter the following commands to remove the current "C" drive volume and reassign "C" to the DOM install partition volume:
select volume cdrive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c
• If drive letter "C" is not currently assigned, enter the following commands to assign "C" to the DOM install partition volume:
select volume dom_install_volume_id#
assign letter=c
b. In the language selection screen that reappears, again just click the Next button.
c. In the install screen that reappears click the Install now button.
d. In the operating system screen, select Windows Server 2012 Datacenter (Server Core
Installation) and then click the Next button.
This causes a reboot and the firstboot script to run, after which the host will reboot two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To monitor
progress, log into the VM after the initial reboot and enter the command notepad C:\Program Files
\Nutanix\Logs\first_boot.log. This displays a (static) snapshot of the log file. Repeat this command
as desired to see an updated version of the log file.
Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the
process is continuing (if slowly).
Installing KVM
Before you begin: Complete Installing a Hypervisor on page 29.
1. Select Install or Upgrade an Existing System in the welcome screen and then press the Enter key.
The installation begins and progress tracking appears (an installation progress bar and then a listing of
packages as they are installed). Installation can take several minutes.
13. When the Installation Complete screen appears, go back to the Virtual Storage screen, click the Plug
Out button, and then return to the Installation Complete screen and click Reboot.
• If you are imaging a U-node, select both Configure Hypervisor (to provision the hypervisor) and Clean CVM (to install the Controller VM).
Note: You must select both to install the Controller VM; selecting Clean CVM by itself will fail.
• If you are imaging an X-node, select Configure Hypervisor only. This provisions the hypervisor without installing a new Controller VM.
• If you are instructed to do so by Nutanix customer support, select Repair CVM. This option is for repairing certain problem conditions. Ignore this option unless Nutanix customer support instructs you to select it.
Nutanix ships two types of nodes from the factory, a U-node and an X-node. The U-node is fully
populated with disks and other components. This is the node type shipped from the factory when
you are adding a new node to a cluster. Both the hypervisor and Controller VM must be installed in a
new U-node. In contrast, an X-node does not contain disks or a NIC card. This is the node type that
is shipped from the factory when you need to replace an existing node because it has a hardware
failure or related problem (RMA request). In this case you transfer the disks and NIC from the old
node to the X-node, and then install the hypervisor only (not the Controller VM).
Caution: Do not select Clean CVM if you are replacing a node (X-node) because this
option cleans the disks as part of the process, which means existing data will be lost.
c. When all the fields are correct, click the Start button.
The node restarts with the new image. After the node starts, additional configuration tasks run and
then the host restarts again. Wait until this stage completes (typically 15-30 minutes depending on the
hypervisor) before accessing the node.
Caution: Do not restart the host until the configuration is complete.
Downloading Installation Files
Nutanix maintains a support portal where you can download the Foundation and NOS (or Phoenix) files
required to do a field installation. To access the Nutanix support portal, do the following:
1. Open a web browser and go to http://portal.nutanix.com.
The login page is displayed.
2. Enter your support portal credentials to access the site.
3. In the initial screen, click Downloads from the main menu at the top and then select Foundation to
download Foundation-related files (or Phoenix to download Phoenix-related files).
Typically, previous Foundation (or Phoenix) releases are removed from the portal when a newer
version is released. However, if an earlier release is still available, you can display the files for that
release by selecting the release number from the first pull-down list.
[Phoenix only] Select the desired hypervisor type (KVM, ESXi, or Hyper-V) from the second pull-down list.
Clicking the Download version# button in the upper right of the screen downloads the latest
NOS release. You can download an earlier NOS release by clicking the appropriate Download
version# link under the ADDITIONAL RELEASES heading. The tarball to download is named
nutanix_installer_package-version#.tar.gz.
Foundation Files
The following table describes the files required to install Foundation. Use the latest Foundation version
available unless instructed by Nutanix customer support to use an earlier version.
File Name                                     Description
VirtualBox-version#-OSX.dmg                   VirtualBox installer for Mac OS X
VirtualBox-version#-Win.exe                   VirtualBox installer for Windows
Foundation_VM-version#.ovf                    Foundation VM open virtualization format (OVF) file
Foundation_VM-version#-disk1.vmdk             Foundation VM virtual disk (VMDK) file
Foundation_VM_OVF-version#.tar                Tar archive containing the Foundation VM OVF and VMDK files
Foundation-version#.tar.gz                    Foundation tarball

File Name                                     Description
nutanix_installer_package-version#.tar.gz     NOS installation bundle
Phoenix Files
The following table describes the Phoenix ISO files.
Note: Starting with release 2.1, Foundation no longer uses a Phoenix ISO file for imaging.
Phoenix ISO files are now used only for single node imaging (see Imaging a Node on page 29)
and are generated by the user from Foundation and NOS tarballs. The Phoenix ISOs available on
the support portal are only for those who are using an older version of Foundation (pre 2.1).
File Name                                     Description
phoenix-x.x_ESX_NOS-y.y.y.iso                 Phoenix ISO for ESXi (Phoenix version x.x, NOS version y.y.y)
phoenix-x.x_HYPERV_NOS-y.y.y.iso              Phoenix ISO for Hyper-V
phoenix-x.x_KVM_NOS-y.y.y.iso                 Phoenix ISO for KVM
Hypervisor ISO Images
A KVM ISO image is included as part of Foundation. However, customers must provide an ESXi or Hyper-V
ISO image for those hypervisors. Check with your VMware or Microsoft representative, or download an
ISO image from an appropriate VMware or Microsoft support site.
The following tables list the supported ESXi and Hyper-V hypervisor images.
Note: These are the ISO images supported in Foundation, but some might no longer be available
from the download sites.
Version   File Name                                                     MD5 Sum
5.0 U2    VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso     fa6a00a3f0dd0cd1a677f69a236611e2
5.0 U3    VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso    391496b995db6d0cf27f0cf79927eca6
5.1 U1    VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso    2cd15e433aaacc7638c706e013dd673a
5.1 U2    VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso    6730d6085466c513c04e74a2c2e59dc8
5.1 U3    VMware-VMvisor-Installer-5.1.0.update03-2323236.x86_64.iso    3283ae6f5c82a8204442bd6ec38197b9
5.5       VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso             9aaa9e0daa424a7021c7dc13db7b9409
5.5 U2a   VMware-VMvisor-Installer-201410001-2143827.x86_64.iso         e7d63c6402d179af830b4c887ce2b872
5.5 U2d   VMware-VMvisor-Installer-201501001-2403361.x86_64.iso         1e0e128e678af54657e6bd3b5bf5f124
6.0       VMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso             478e2c6f7a875dd3dacaaeb2b0b38228
Version               File Name                                                                         MD5 Sum
5.5 (Dell custom)     VMware-VMvisor-Installer-5.5.0-1331820.x86_64-Dell_Customized_A00.iso             b9661e44c791b86caf60f179b857a17d
5.5 U2 (Dell custom)  VMware-VMvisor-Installer-5.5.0.update02-2068190.x86_64-Dell_Customized-A00.iso    02887b626eaabb7d933e2a3fa580f1bc
Version                  SKUs                  Source Site   File Name                                                                                       MD5 Sum
(not recovered)          (not recovered)       (not recovered)   (file name not recoverable from source)                                                     fb101ed6d7328aca6473158006630a9d
Windows Server 2012 R2   datacenter, standard  EA portal     SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-3_MLF_X19-53588.ISO                 b52450dd5ba8007e2934f5c6e6eda0ce (SHA1: A73FC07C1B9F560F960F1C4A5857FAC062041235)
Windows Server 2012 R2   datacenter, standard  EA portal     SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-4_MLF_X19-82891.ISO                 9a00defab26a046045d939086df78460
Windows Server 2012 R2   free                  Technet       9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO   9c9e0d82cb6301a4b88fd2f4c35caf80
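A downloaded ISO can be checked against the MD5 sums in these tables before imaging. A sketch using the ESXi 6.0 entry as the example (EXPECTED and ISO are illustrative; adjust both for the image you actually downloaded):

```shell
# Expected MD5 taken from the ESXi table above (6.0 entry).
EXPECTED=478e2c6f7a875dd3dacaaeb2b0b38228
ISO="VMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso"

# md5sum prints "<hash>  <file>"; keep only the hash for comparison.
ACTUAL=$(md5sum "$ISO" 2>/dev/null | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH: got ${ACTUAL:-nothing}"
fi
```

A mismatch usually means a corrupted or incomplete download; re-download the ISO rather than attempting to image with it.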
Setting IPMI Static IP Address
You can assign a static IP address for an IPMI port by resetting the BIOS configuration.
To configure a static IP address for the IPMI port on a node, do the following:
1. Connect a VGA monitor and USB keyboard to the node.
2. Power on the node.
3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.
4. Select the IPMI tab to display the IPMI screen.
7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.
8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in
the pop-up window.
9. Select Subnet Mask, press Enter, and then enter the corresponding submask value in the pop-up
window.
10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network
gateway in the pop-up window.
11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup
mode.
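If a hypervisor is already running on the node and the ipmitool utility is available there, the same settings can often be applied in-band instead of through the BIOS. This is only a sketch under those assumptions — the channel number (1) and the addresses are placeholders, and the BIOS procedure above remains the documented method:

```shell
# All values below are examples; substitute your own addressing.
ipmitool lan set 1 ipsrc static            # Configuration Address Source -> Static
ipmitool lan set 1 ipaddr 192.0.2.10       # Station IP Address
ipmitool lan set 1 netmask 255.255.255.0   # Subnet Mask
ipmitool lan set 1 defgw ipaddr 192.0.2.1  # Gateway IP Address
ipmitool lan print 1                       # verify the settings took effect
```

`ipmitool lan print 1` should echo back the static source and addresses; if it still shows DHCP, the BMC may need a reset before the change sticks.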
Troubleshooting
This section provides guidance for fixing problems that might occur during a Foundation installation.
For help with IPMI configuration problems, see Fixing IPMI Configuration Problems on page 56.
For help with imaging problems, see Fixing Imaging Problems on page 57.
For answers to other common questions, see Frequently Asked Questions (FAQ) on page 58.
• One or more IPMI MAC addresses are invalid, or there are conflicting IP addresses. Go to the Block & Node Config screen and correct the IPMI MAC and IP addresses as needed (see Configuring Node Parameters on page 18).
• There is a user name/password mismatch. Go to the Global Configuration screen and correct the IPMI username and password fields as needed (see Configuring Global Parameters on page 16).
• One or more nodes are connected to the switch through the wrong network interface. Go to the back of the nodes and verify that the first 1GbE network interface of each node is connected to the switch (see Setting Up the Network on page 13).
• The Foundation VM is not in the same broadcast domain as the Controller VMs for discovered nodes or the IPMI interface for added (bare metal or undiscovered) nodes. This problem typically occurs because (a) you are not using a flat switch, (b) some node IP addresses are not in the same subnet as the Foundation VM, and (c) multi-homing was not configured.
  • If all the nodes are in the Foundation VM subnet, go to the Block & Node Config screen and correct the IP addresses as needed (see Configuring Node Parameters on page 18).
  • If the nodes are in multiple subnets, go to the Global Configuration screen and configure multi-homing (see Configuring Global Parameters on page 16).
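The same-subnet condition described above can be sanity-checked without touching the network by comparing the masked addresses directly. A small sketch — the helper names (`ip2int`, `same_subnet`) and the addresses are illustrative:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  IFS=. read -r o1 o2 o3 o4 <<EOF
$1
EOF
  echo $(( (o1 << 24) | (o2 << 16) | (o3 << 8) | o4 ))
}

# Two hosts are in the same subnet when their masked addresses match.
same_subnet() {
  a=$(ip2int "$1"); b=$(ip2int "$2"); m=$(ip2int "$3")
  [ $(( a & m )) -eq $(( b & m )) ]
}

# Example: Foundation VM address vs. a node address, /24 mask (hypothetical).
same_subnet 10.1.1.5 10.1.1.77 255.255.255.0 && echo "same subnet" || echo "different subnets"
```

If the check reports "different subnets" for any node, either correct that node's address or configure multi-homing as described above.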
• The IPMI interface is not set to failover. You can check for this through the BIOS (see Setting IPMI Static IP Address on page 54 to access the BIOS setup utility).
• The connection is dropping intermittently. If intermittent failures persist, look for conflicting IP addresses.
• [Hyper-V only] SAMBA is not up. If Hyper-V complains that it failed to mount the install share, restart SAMBA with the command "sudo service smb restart".
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free up some space by deleting extraneous ISO images. In addition, a Foundation crash could leave a /tmp/tmp* directory that contains a copy of an ISO image, which you can unmount (if necessary) and delete. Foundation needs about 9 GB of free space for Hyper-V and about 3 GB for ESXi or KVM.
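To see where the space is going before deleting anything, the usual tools suffice. A sketch — FOUNDATION_HOME is the default path used earlier in this guide; adjust if your layout differs:

```shell
FOUNDATION_HOME=/home/nutanix/foundation   # default path from the steps above

# Overall free space (falls back to / if the path is absent).
df -h "$FOUNDATION_HOME" 2>/dev/null || df -h /

# Largest items under the foundation directory; ISO images usually dominate.
du -sh "$FOUNDATION_HOME"/* 2>/dev/null | sort -rh | head

# Leftover temporary directories from a Foundation crash.
find /tmp -maxdepth 1 -type d -name 'tmp*'
```

Compare the free space reported by `df` against the 9 GB (Hyper-V) or 3 GB (ESXi/KVM) requirement before retrying the installation.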
• The host boots but complains it cannot reach the Foundation VM. The message varies per hypervisor. For example, on ESXi you might see a "ks.cfg:line 12: "/.pre" script returned with an error" message. Make sure you have assigned the host an IP address on the same subnet as the Foundation VM or have configured multi-homing (see Configuring Global Parameters on page 16). Also check for IP address conflicts.
Foundation is not running, and the service log complains about permissions.
A crash or abrupt shutdown can cause Foundation to lock its PID file in a way that does not recover
automatically. Enter the following commands to fix this problem:
$
$
$
$
$
When shutting down the Foundation VM, allow it to shut down gracefully by using a command such as
"shutdown -h now" or by logging out and then powering down the VM.
My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IP addresses are reachable from the Foundation VM. (On rare occasions the
IPMI IP assignment can take some time.) If you get a complaint about authentication, double-check your
password. If the problem persists, try resetting the BMC.
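A quick reachability sweep from the Foundation VM can rule out cabling and BMC issues before retrying. A sketch with placeholder addresses — substitute the IPMI IPs from your configuration page:

```shell
# Placeholder IPMI addresses; substitute the ones you configured.
for ip in 192.0.2.11 192.0.2.12 192.0.2.13; do
  if ping -c 2 -W 2 "$ip" >/dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip NOT reachable - check cabling and BMC state"
  fi
done
```

Any address reported as not reachable is worth investigating (cabling, failover mode, conflicting IPs) before restarting Foundation.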
Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.
Verify that the LAN interface is set to failover mode in the IPMI settings for each node. You can find this
setting by logging into IPMI and going to Configuration > Network > Lan Interface. Verify that the
setting is Failover (not Dedicate).
The diagnostic box was checked to run after installation, but that test (diagnostics.py) does not
complete (hangs, fails, times out).
Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables might not
provide the performance necessary to run this test at a reasonable speed.
Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous hypervisor>
and the install hangs.
The boot order for one or more nodes might be set incorrectly to select the USB over SATA DOM as the
first boot device instead of the CDROM. To fix this, boot the nodes into BIOS mode and either select
"restore optimized defaults" (F3 as of BIOS version 3.0.2) or give the CDROM boot priority. Reboot the
nodes and retry the installation.
I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for
the call back function, and is there a way I can avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up the terminal
in the Foundation VM and enter the following commands:
$
$
$
$
Refresh the Foundation web page. If the nodes are still stuck, reboot them.
My Foundation VM is complaining that it is out of disk space. What can I delete to make room?
Unmount any temporarily-mounted file systems using the following commands:
$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*
If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
I keep seeing the message "tar: Exiting with failure status due to previous errors. 'tar rf /home/
nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./
persisted_config.json' failed; error ignored."
This is a benign message. Foundation archives your persisted configuration file (persisted_config.json)
alongside the logs. Occasionally, there is no configuration file to back up. This is expected, and you may
ignore this message with no ill consequences.
Do not change the language pack. Only the default English language pack is supported. Changing the
language pack can cause some scripts to fail during Foundation imaging. Even after imaging, changing
the language pack can cause problems for NOS.
[Hyper-V] I cannot reach the CVM console via ssh. How do I get to its console?
See KB article 1701 (https://portal.nutanix.com/#/page/kbs/details%3FtargetId=kA0600000008fJhCAI).
[ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it. See KB
article 1467 (https://portal.nutanix.com/#/page/kbs/details%3FtargetId=kA0600000008dDxCAI).
I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM
on VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see
https://forums.virtualbox.org/viewtopic.php?f=8&t=58767.
I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically
creates a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run
the following commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now
This should reboot your machine and reset your adapter to eth0.
I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but
discovery is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6
link-local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in.
(If they are, the Controller VMs might choose to direct their traffic over that interface and never reach
your Foundation VM.) If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and
the 10G ports are connected.
A 10/100 switch is not recommended, but it can be used for a few nodes. However, you may see
timeouts. It is highly recommended that you use a 1G or 10G switch if one is available.
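Because discovery depends on IPv6 link-local traffic reaching the Foundation VM, it can help to confirm that the Foundation VM's interface actually carries a link-local address and that neighbors answer on it. A sketch — eth0 is an assumption; use your actual discovery interface:

```shell
IFACE=eth0   # assumption: the Foundation VM's discovery interface

# The interface must carry an fe80:: link-local address for discovery.
ip -6 addr show dev "$IFACE" | grep fe80 || echo "no link-local address on $IFACE"

# Ping the all-nodes link-local multicast group; hosts in the same
# broadcast domain (including Controller VMs) should answer.
ping6 -c 2 -I "$IFACE" ff02::1 || echo "no IPv6 multicast responses on $IFACE"
```

If no neighbors respond on the interface, recheck the cabling guidance above (unplug 10G cables on a flat 1G switch, or connect only IPMI and 10G ports on a 10G switch).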