
VMware Interview Questions and Answers

What is Bare Metal?


A bare metal environment is a type of virtualization environment in which the virtualization
hypervisor is directly installed and executed from the hardware. It eliminates the need for a
host operating system by directly interfacing with the underlying hardware to accomplish
virtual machine specific processes.
A bare metal environment may also be called a tier-1 environment.
Bare Metal Environment
A bare metal environment is typically created using bare metal hypervisors that don't
require the support of a host operating system. The hypervisors are installed on the hard
drive and can create virtual machines as in a typical virtualized environment. Each virtual
machine has its separate guest OS and share of memory, computing power and hard drive
storage. The hypervisor has its own device drivers and interacts with each component
directly for any I/O, processing or OS specific tasks.
What is a Hypervisor?
It is a program that allows multiple operating systems to share a single hardware host. Each
operating system appears to have the host's processor, memory, and other resources all to
itself. However, the hypervisor is actually controlling the host processor and resources,
allocating what is needed to each operating system in turn and making sure that the guest
operating systems (called virtual machines) cannot disrupt each other.
What is the difference between Type 1 and Type 2 Hypervisor?
Type 1 Hypervisor

This is also known as Bare Metal or Embedded or Native Hypervisor.

It works directly on the hardware of the host and can monitor operating systems that
run above the hypervisor.

It is completely independent from the Operating System.

The hypervisor is small as its main task is sharing and managing hardware resources
between different operating systems.

A major advantage is that any problems in one virtual machine or guest operating
system do not affect the other guest operating systems running on the hypervisor.

Examples: VMware ESXi Server, Microsoft Hyper-V, Citrix XenServer


Type 2 Hypervisor

This is also known as Hosted Hypervisor.

In this case, the hypervisor is installed on an operating system and then supports
other operating systems above it.

It is completely dependent on the host operating system for its operations.

While having a base operating system allows better specification of policies, any
problems in the base operating system affects the entire system as well even if the
hypervisor running above the base OS is secure.

Examples: VMware Workstation, Microsoft Virtual PC, Oracle Virtual Box

What is the hardware version used in VMware ESXi 5.5?


Version 10
Below is a table showing the virtual hardware versions used in different VMware
products along with their release versions:

Virtual Hardware Version | Products
10 | ESXi 5.5, Fusion 6.x, Workstation 10.x, Player 6.x
9 | ESXi 5.1, Fusion 5.x, Workstation 9.x, Player 5.x
8 | ESXi 5.0, Fusion 4.x, Workstation 8.x, Player 4.x
7 | ESXi/ESX 4.x, Fusion 2.x/3.x, Workstation 6.5.x/7.x, Player 3.x
6 | Workstation 6.0.x
4 | ACE 2.x, ESX 3.x, Fusion 1.x, Player 2.x
3 and 4 | ACE 1.x, Player 1.x, Server 1.x, Workstation 5.x
3 | Workstation 4.x, ESX 2.x, GSX Server 3.x
ESX vs ESXi

ESXi has no service console (the ESX service console was a modified version of RHEL)

ESXi is extremely thin, hence results in fast installation + fast boot

ESXi can be purchased as an embedded hypervisor on hardware

ESXi has a built-in server health status check

OR

VMware ESX and ESXi are both bare metal hypervisor architectures that install directly on
the server hardware.


Although neither hypervisor architecture relies on an OS for resource management, the
vSphere ESX architecture relied on a Linux operating system, called the Console OS (COS) or
service console, to perform two management functions: executing scripts and installing
third-party agents for hardware monitoring, backup or systems management.
In the vSphere ESXi architecture, the service console has been removed. The smaller code
base of vSphere ESXi represents a smaller attack surface and less code to patch,
improving reliability and security.

Configuration maximums comparison for VMware vSphere 5.0, 5.1 and 5.5

Here I have summarized the comparison with a list of selected features between vSphere ESXi
5.0, 5.1 and 5.5. I have skipped some of the features; for a detailed overview of all the
comparison factors, please visit VMware's official website.
ESXi Virtual Machine Maximums

Item | vSphere 5.0 | vSphere 5.1 | vSphere 5.5
Virtual CPUs per virtual machine | 32 | 64 | 64
RAM per virtual machine | 1 TB | 1 TB | 1 TB
Virtual SCSI targets per virtual machine | 60 | 60 | 60
Virtual disks per virtual machine | 60 | 60 | 60
Virtual NICs per virtual machine | 10 | 10 | 10
Concurrent remote console connections per virtual machine | 40 | 40 | 40
Virtual disk size | 2 TB | 2 TB | 62 TB
Video memory per virtual machine | 128 MB | 128 MB | 512 MB
ESXi Host Maximums

Item | vSphere 5.0 | vSphere 5.1 | vSphere 5.5
Logical CPUs per host | 160 | 160 | 320
Virtual machines per host | 512 | 512 | 512
Virtual CPUs per host | 2048 | 2048 | 4096
Virtual CPUs per core | 25 | 25 | 32
FT virtual disks | 16 | 16 | 16
RAM per FT virtual machine | 64 GB | 64 GB | 64 GB
FT virtual machines per host | 4 | 4 | 4
FT virtual CPUs per virtual machine | 1 | 1 | 1

Memory Maximums

Item | vSphere 5.0 | vSphere 5.1 | vSphere 5.5
RAM per host | 2 TB | 2 TB | 4 TB
No. of swap files per virtual machine | 1 | 1 | 1
Swap file size | 1 TB | 1 TB | NA

Storage Maximums

Item | vSphere 5.0 | vSphere 5.1 | vSphere 5.5
Virtual disks per host | 2048 | 2048 | 2048
LUNs per host | 256 | 256 | 256
LUN size | NA | 64 TB | 64 TB
No. of total paths on a server | 1024 | 1024 | 1024
No. of paths to a LUN | 32 | 32 | 32
Software iSCSI targets | 256 | 256 | 256
Broadcom 1Gb iSCSI HBA initiator ports per server | 4 | 4 | 4
Broadcom 10Gb iSCSI HBA initiator ports per server | 4 | 4 | 4
No. of HBAs of any type | 8 | 8 | 8
HBA ports | 16 | 16 | 16
Targets per HBA | 256 | 256 | 256
Concurrent vMotion operations per datastore | 128 | 128 | 128
Concurrent Storage vMotion operations per datastore | 8 | 8 | 8
Concurrent Storage vMotion operations per host | 2 | 2 | 2
Concurrent vMotion operations per host (1Gb/s network) | 4 | 4 | 4
Concurrent vMotion operations per host (10Gb/s network) | 8 | 8 | 8

Cluster and Resource Pool Maximums

Item | vSphere 5.0 | vSphere 5.1 | vSphere 5.5
Hosts per cluster | 32 | 32 | 32
Virtual machines per cluster | 3000 | 4000 | 4000
Virtual machines per host | 512 | 512 | 512
Resource pools per host | 1600 | 1600 | 1600
Children per resource pool | 1024 | 1024 | 1024
Resource pools per cluster | 1600 | 1600 | 1600

vCenter Server Maximums

Item | vSphere 5.0 | vSphere 5.1 | vSphere 5.5
Hosts per vCenter Server | 1000 | 1000 | 1000
Powered-on virtual machines per vCenter Server | 10000 | 10000 | 10000
Registered virtual machines per vCenter Server | 15000 | 15000 | 15000
Linked vCenter Servers | 10 | 10 | 10
Hosts in linked vCenter Servers | 3000 | 3000 | 3000
Concurrent vSphere Client connections to vCenter Server | 100 | 100 | 100
Number of hosts per datacenter | 500 | 500 | 500
Registered virtual machines in linked vCenter Servers | 50000 | 50000 | 50000

Storage DRS Maximums

Item | vSphere 5.0 | vSphere 5.1 | vSphere 5.5
Virtual disks per datastore cluster | 9000 | 9000 | 9000
Datastores per datastore cluster | 32 | 32 | 32
Datastore clusters per vCenter | 256 | 256 | 256

VMware file descriptions


*.nvram file – This file contains the CMOS/BIOS for the VM. The BIOS is based off the
PhoenixBIOS 4.0 Release 6, one of the most successful and widely used BIOSes, and is
compliant with all the major standards, including USB, PCI, ACPI, 1394, WfM and PC2001. If
the NVRAM file is deleted or missing it will automatically be re-created when the VM is
powered on. Any changes made to the BIOS via the Setup program (F2 at boot) will be saved
in this file. This file is usually less than 10K in size and is not in a text format (binary).
*.vmdk files – These are the disk files that are created for each virtual hard drive in your VM.
There are 3 different types of files that use the vmdk extension:
*-flat.vmdk file – This is the actual raw disk file that is created for each virtual hard drive.
Almost all of a .vmdk file's content is the virtual machine's data, with a small portion allotted
to virtual machine overhead. This file will be roughly the same size as your virtual hard
drive.


*.vmdk file – This isn't the file containing the raw data anymore. Instead it is the disk
descriptor file which describes the size and geometry of the virtual disk file. This file is in
text format and contains the name of the -flat.vmdk file with which it is associated and
also the hard drive adapter type, drive sectors, heads and cylinders, etc. One of these files
will exist for each virtual hard drive that is assigned to your virtual machine. You can tell
which -flat.vmdk file it is associated with by opening the file and looking at the Extent
Description field.
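For illustration, a minimal descriptor file for a hypothetical 20 GB disk named myvm.vmdk might look like this (the extent size, CID and geometry values vary per disk):

    # Disk DescriptorFile
    version=1
    CID=fffffffe
    parentCID=ffffffff
    createType="vmfs"

    # Extent description
    RW 41943040 VMFS "myvm-flat.vmdk"

    # The Disk Data Base
    ddb.adapterType = "lsilogic"
    ddb.geometry.cylinders = "2610"
    ddb.geometry.heads = "255"
    ddb.geometry.sectors = "63"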
*-delta.vmdk file – This is the differential file created when you take a snapshot of a VM
(also known as the REDO log). When you snapshot a VM it stops writing to the base vmdk and
starts writing changes to the snapshot delta file. The snapshot delta will initially be small
and then start growing as changes are made to the base vmdk file. The delta file is a bitmap
of the changes to the base vmdk, thus it can never grow larger than the base vmdk. A delta
file will be created for each snapshot that you create for a VM. These files are automatically
deleted when the snapshot is deleted or reverted in Snapshot Manager.
*.vmx file – This file is the primary configuration file for a virtual machine. When you create
a new virtual machine and configure its hardware settings, that information is stored in
this file. This file is in text format and contains entries for the hard disk, network adapters,
memory, CPU, ports, power options, etc. You can either edit these files directly if you know
what to add, or use the VMware GUI (Edit Settings on the VM), which will automatically
update the file.
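For illustration, a few typical entries from a hypothetical .vmx file (names and values are examples only; memsize is in MB):

    displayName = "myvm"
    guestOS = "rhel6-64"
    memsize = "2048"
    numvcpus = "2"
    scsi0:0.fileName = "myvm.vmdk"
    ethernet0.networkName = "VM Network"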
*.vswp file – This is the VM swap file (earlier ESX versions had a per-host swap file) and is
created to allow for memory overcommitment on an ESX server. The file is created when a VM
is powered on and deleted when it is powered off. By default when you create a VM the
memory reservation is set to zero, meaning no memory is reserved for the VM and it can
potentially be 100% overcommitted. As a result of this a vswp file is created equal to the
amount of memory that the VM is assigned minus the memory reservation that is configured
for the VM. So a VM that is configured with 2GB of memory will create a 2GB vswp file when
it is powered on, if you set a memory reservation for 1GB, then it will only create a 1GB vswp
file. If you specify a 2GB reservation then it creates a 0 byte file that it does not use. When
you do specify a memory reservation then physical RAM from the host will be reserved for
the VM and not usable by any other VMs on that host. A VM will not use its vswp file as long
as physical RAM is available on the host. Once all physical RAM is used on the host by all its
VMs and it becomes overcommitted then VMs start to use their vswp files instead of
physical memory. Since the vswp file is a disk file it will affect the performance of the VM
when this happens. If you specify a reservation and the host does not have enough physical
RAM when the VM is powered on then the VM will not start.
*.vmss file – This file is created when a VM is put into Suspend (pause) mode and is used to
save the suspend state. It is basically a copy of the VM's RAM and will be a few megabytes
larger than the maximum RAM memory allocated to the VM. If you delete this file while the
VM is in a suspended state, the VM will start from a normal boot up instead of resuming from
the state it was in when it was suspended. This file is not automatically deleted when the
VM is brought out of Suspend mode. Like the vswp file, this file will only be deleted when the
VM is powered off (not rebooted). If a vmss file exists from a previous suspend and the VM is
suspended again, the previous file is re-used for the subsequent suspensions. Also note
that if a vswp file is present it is deleted when a VM is suspended and then re-created when


the VM is powered on again. The reason for this is that the VM is essentially powered off in
the suspend state, its RAM contents are just preserved in the vmss file so it can be quickly
powered back on.
*.log file – This is the file that keeps a log of the virtual machine's activity and is useful in
troubleshooting virtual machine problems. Every time a VM is powered off and then back on
a new log file is created. The current log file for the VM is always vmware.log. The older log
files are incremented with a -# in the filename and up to 6 of them will be retained (e.g.
vmware-4.log). The older .log files are always deletable at will; the latest .log file can be
deleted when the VM is powered off. As the log files do not take much disk space, most
administrators let them be.
*.vmxf file – This is a supplemental configuration file in text format for virtual machines
that are in a team. Note that the .vmxf file remains if a virtual machine is removed from the
team. Teaming virtual machines is a VMware Workstation feature and includes the ability to
designate multiple virtual machines as a team, which administrators can then power on and
off, suspend and resume as a single object, making it particularly useful for testing
client-server environments. This file still exists with ESX server virtual machines, but only for
compatibility purposes with Workstation.
*.vmsd file – This file is used to store metadata and information about snapshots. This file is
in text format and will contain information such as the snapshot display name, UID, disk file
name, etc. It is initially a 0 byte file until you create your first snapshot of a VM, and from
that point it will populate the file and continue to update it whenever new snapshots are
taken. This file does not clean up completely after snapshots are taken. Once you delete a
snapshot it will still leave the fields in the file for each snapshot and just increment the UID
and set the name to Consolidate Helper, presumably to be used with Consolidated Backup.
*.vmsn file – This is the snapshot state file, which stores the exact running state of a virtual
machine at the time you take that snapshot. This file will either be small or large depending
on whether you select to preserve the VM's memory as part of the snapshot. If you do choose
to preserve the VM's memory then this file will be a few megabytes larger than the maximum
RAM memory allocated to the VM. This file is similar to the vmss (Suspend) file. A vmsn file
will be created for each snapshot taken on the VM; these files are automatically deleted
when the snapshot is removed.
Snapshot Files
When you take a snapshot, you capture the state of the virtual machine settings and the
virtual disk. If you are taking a memory snapshot, you also capture the memory state of the
virtual machine. These states are saved to files that reside with the virtual machine's base
files.
Snapshot Files
A snapshot consists of files that are stored on a supported storage device. A Take Snapshot
operation creates .vmdk, -delta.vmdk, .vmsd, and .vmsn files. By default, the first and all
delta disks are stored with the base .vmdk file. The .vmsd and .vmsn files are stored in the
virtual machine directory.


Delta disk files – A .vmdk file to which the guest operating system can write. The delta disk
represents the difference between the current state of the virtual disk and the
state that existed at the time that the previous snapshot was taken. When you
take a snapshot, the state of the virtual disk is preserved, which prevents the
guest operating system from writing to it, and a delta or child disk is created.
A delta disk has two files: a descriptor file that is small and contains
information about the virtual disk, such as geometry and child-parent
relationship information, and a corresponding file that contains the raw data.
Note: If you are looking at a datastore with the Datastore Browser in the vSphere
Client, you see only one entry to represent both files.
The files that make up the delta disk are referred to as child disks or redo logs. A
child disk is a sparse disk. Sparse disks use the copy-on-write mechanism, in
which the virtual disk contains no data in places, until copied there by a write
operation. This optimization saves storage space. A grain is the unit of measure
in which the sparse disk uses the copy-on-write mechanism. Each grain is a
block of sectors that contain virtual disk data. The default size is 128 sectors, or
64KB.

Flat file – A -flat.vmdk file that is one of the two files that comprise the base disk. The flat
disk contains the raw data for the base disk. This file does not appear as a
separate file in the Datastore Browser.

Database file – A .vmsd file that contains the virtual machine's snapshot information and is the
primary source of information for the Snapshot Manager. This file contains line
entries, which define the relationships between snapshots and between child
disks for each snapshot.

Memory file – A .vmsn file that includes the active state of the virtual machine. Capturing the
memory state of the virtual machine lets you revert to a turned-on virtual
machine state. With non-memory snapshots, you can only revert to a turned-off
virtual machine state. Memory snapshots take longer to create than non-memory
snapshots. The time the ESX host takes to write the memory onto the disk is
relative to the amount of memory the virtual machine is configured to use.

A Take Snapshot operation creates .vmdk, -delta.vmdk, .vmsd, and .vmsn files.

File | Description
vmname-number.vmdk and vmname-number-delta.vmdk | Snapshot files that represent the difference between the current state of the virtual disk and the state that existed at the time the previous snapshot was taken. The filename uses the following syntax, S1vm-000001.vmdk, where S1vm is the name of the virtual machine and the six-digit number, 000001, is based on the files that already exist in the directory. The number does not consider the number of disks that are attached to the virtual machine.
vmname.vmsd | Database of the virtual machine's snapshot information and the primary source of information for the Snapshot Manager.
vmname.Snapshotnumber.vmsn | Memory state of the virtual machine at the time you take the snapshot. The file name uses the following syntax, S1vm.snapshot1.vmsn, where S1vm is the virtual machine name and snapshot1 is the first snapshot. Note: A .vmsn file is created each time you take a snapshot, regardless of the memory selection. A .vmsn file without memory is much smaller than one with memory.
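Putting the pieces together, the home directory of the hypothetical VM S1vm used in the examples above might look like this after one snapshot taken with memory (illustrative listing):

    S1vm.vmx                  configuration file
    S1vm.nvram                BIOS state
    S1vm.vmdk                 base disk descriptor
    S1vm-flat.vmdk            base disk raw data
    S1vm-000001.vmdk          delta disk descriptor (snapshot)
    S1vm-000001-delta.vmdk    delta disk raw data (snapshot)
    S1vm.vmsd                 snapshot database
    S1vm.Snapshot1.vmsn       snapshot memory state
    vmware.log                current log file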

What is a .vmdk file?


This isn't the file containing the raw data. Instead it is the disk descriptor file which
describes the size and geometry of the virtual disk file. This file is in text format and
contains the name of the -flat.vmdk file with which it is associated and also the hard
drive adapter type, drive sectors, heads and cylinders, etc. One of these files will exist for
each virtual hard drive that is assigned to your virtual machine. You can tell which -flat.vmdk
file it is associated with by opening the file and looking at the Extent Description field.

Difference between ESXi 5.1 and ESXi 5.5


The new features added in VMware vSphere 5.5, and also some enhanced features from older
versions:
vSphere ESXi Hypervisor Enhancements

Hot Pluggable PCIe SSD Devices

Support for Reliable Memory Technology

Enhancements for CPU C-States


VMware vCenter Server Enhancements

VMware vSphere Web Client

VMware vCenter Server Appliance


vSphere App HA

Virtual Machine-Virtual Machine Affinity Rules Enhancements

vSphere Big Data Extensions


vSphere Storage Enhancements

Support for 62TB VMDK

MSCS Updates

vSphere 5.1 Feature Updates

16GB E2E support

vSphere Flash Read Cache


vSphere Networking Enhancements

Link Aggregation Control Protocol Enhancements

Traffic Filtering

Quality of Service Tagging

40GB NIC support

vSphere ESXi Hypervisor Enhancements


Hot Pluggable PCIe SSD Devices
In earlier versions VMware had a feature to swap storage devices such as SATA and SAS hard
disks without affecting the running virtual machines, reducing the amount of downtime. With
vSphere 5.5 users can now also hot-add or hot-remove a solid-state disk (SSD) while a
vSphere host is running.
Support for Reliable Memory Technology
There is a feature in CPU hardware which can provide the details of a portion of the memory
which is considered more "reliable" with respect to other sections of memory. vSphere
5.5 utilizes this feature to get memory information and runs the VMkernel on this section
of memory to reduce the slightest probability of any memory crash or failure, which could
further lead to adverse effects on the VMware hypervisor.
Enhancements for CPU C-States
In vSphere 5.1 and earlier versions the CPU P-state was used, which is responsible for lowering
the CPU multiplier and CPU voltage when there is no workload. But with vSphere 5.5 the CPU
C-state has been introduced, which has more advanced CPU current-lowering technologies.
Another potential benefit besides reduced power consumption is inherent increased performance.


VMware vCenter Server Enhancements


VMware vSphere Web Client
Enhanced features have been added to the vSphere Web Client, which now also provides full
support for Mac OS X. The vSphere Web Client also includes the following new features:
Drag and Drop: Administrators now can drag and drop objects from the center panel onto
the vSphere inventory, enabling them to quickly perform bulk actions.
Filters: Administrators can now select properties on a list of displayed objects and selected
filters to meet specific search criteria. Displayed objects are dynamically updated to reflect
the specific filters selected.
Recent Items: Administrators spend most of their day working on a handful of objects. The
new recent items navigation aid enables them to navigate with ease, typically by using one
click between their most commonly used objects.
vCenter Server Appliance
With the release of vSphere 5.5, the vCenter Server Appliance addresses the scalability
limits of earlier releases with a re-engineered, embedded vPostgres database that can now
support as many as 100 vSphere hosts or 3,000 virtual machines (with appropriate sizing).
With new scalability maximums and simplified vCenter Server deployment and management,
the vCenter Server Appliance offers an attractive alternative to the Windows version of
vCenter Server when planning a new installation of vCenter Server 5.5.
vSphere App HA
This is an additional feature added to vSphere 5.5. In earlier versions we had vSphere HA,
which enabled virtual machine monitoring: in case of any heartbeat failures within a
given amount of time, vSphere HA would reset the VM. But using vSphere App HA, any
critical application service can be monitored, and in case of any failure the application or the
virtual machine can be restarted.
Virtual Machine-Virtual Machine Affinity Rules Enhancements
DRS has been available in earlier versions of VMware vSphere, but in vSphere 5.5 an
additional rule is added to DRS where the user can define which virtual machines should be
kept together on the same host or on separate hosts. This rule is termed the virtual
machine-virtual machine (anti-)affinity rule. So now, before attempting to move any VM
during a resource outage, this rule plays its part and the VM is migrated accordingly.
VMware vSphere Data Protection Enhancements
New enhanced features have been added to VMware vSphere Data Protection, which is a
backup and recovery solution for VMware virtual machines. The following enhancements
were made:

Direct-to-host emergency restore

Backup and restore of individual virtual machine hard disks (.vmdk files): Individual .vmdk
files can be selected for backup and restore operations.

Replication to EMC Avamar

Flexible storage placement

Mounting of existing backup data storage to new appliance

Scheduling granularity

vSphere Big Data Extensions


BDE is a tool that enables administrators to deploy and manage Hadoop clusters on vSphere
from a familiar vSphere Web Client interface.
BDE performs the following functions on the virtual Hadoop clusters it manages:

Creates, deletes, starts, stops and resizes clusters

Controls resource usage of Hadoop clusters

Specifies physical server topology information

Manages the Hadoop distributions available to BDE users

Automatically scales

vSphere Storage Enhancements


Support for 62TB VMDK
VMware is increasing the maximum size of a virtual machine disk file (VMDK) in vSphere 5.5.
The previous limit was 2 TB minus 512 bytes. The new limit is 62 TB. The maximum size of a
virtual Raw Device Mapping (RDM) is also increasing, from 2 TB minus 512 bytes to 62 TB.
Virtual machine snapshots also support this new size for delta disks that are created when a
snapshot is taken of the virtual machine.
Enhancements in MSCS


In vSphere 5.5, VMware supports the following features related to Microsoft Cluster Service
(MSCS):

Microsoft Windows 2012

Round-robin path policy for shared storage

iSCSI protocol for shared storage

Fibre Channel over Ethernet (FCoE) protocol for shared storage

16GB E2E Support


In vSphere 5.1, VMware introduced support to run these HBAs at 16Gb. However, there is no
support for full, end-to-end 16Gb connectivity from host to array. To get full bandwidth, a
number of 8Gb connections must be created from the switch to the storage array.
In vSphere 5.5, VMware introduces 16Gb end-to-end FC support. Both the HBAs and array
controllers can run at 16Gb as long as the FC switch between the initiator and target
supports it.

vSphere Networking Enhancements


Link Aggregation Control Protocol (LACP) Enhancements
LACP is a method to bundle several physical network links to form a logical channel for
increased bandwidth and redundancy purposes. The following key enhancements are available
in the vSphere Distributed Switch with vSphere 5.5:

22 new hashing algorithm options are available

64 LAGs (Link Aggregation Groups) per host

Traffic Filtering
Traffic filtering is the ability to filter packets based on the various parameters of the packet
header. This capability is also referred to as access control lists (ACLs), and it is used to
provide port-level security.
Quality of Service Tagging
QoS is responsible for differentiating traffic importance and helps reserve bandwidth
accordingly. VMware has supported 802.1p tagging on VDS since vSphere 5.1. The 802.1p
tag is inserted in the Ethernet header before the packet is sent out on the physical network.
In vSphere 5.5, DSCP marking support enables users to insert tags in the IP header. IP
header-level tagging helps in layer 3 environments, where physical routers function better
with an IP header tag than with an Ethernet header tag.
40GB NIC Support


Support for 40GB NICs on the vSphere platform enables users to take advantage of higher
bandwidth pipes to the servers.

Summary
So this was a brief article on a few of the enhancements and new features added in vSphere
5.5 as compared to earlier versions. To summarize this article I have separated the points
into two sub-sections as shown below:
New features in vSphere 5.5

Hot plug SSD PCIe devices

vSphere App HA

Reliable Memory Technology

vSphere DRS virtual machine-virtual machine affinity rule

vSphere Big Data Extensions

Support for 62TB VMDK

16GB E2E support

40GB NIC Support

Enhanced features in vSphere 5.5

Enhancements to CPU C-states

Expanded virtual graphics support

vSphere Web Client platform support and UI improvements

MSCS updates

PDL AutoRemove

Improved LACP capabilities

Traffic filtering

Quality of Service tagging


Difference Between vSphere 5.1 and vSphere 5.5

Features | vSphere 5.1 | vSphere 5.5
Physical CPUs per host | 160 | 320
Physical RAM per host | 2 TB | 4 TB
NUMA nodes per host | 8 | 16
Maximum vCPUs per host | 2048 | 4096
VMDK size | 2 TB | 62 TB
Max size of virtual RDM | 2 TB | 62 TB
VM hardware version | 9 | 10
40 GBps physical adapter support | No | Yes
ESXi free version RAM limit | 32 GB | Unlimited
ESXi free version maximum vSMP | 8-way virtual SMP | 8-way virtual SMP
16 GB Fibre Channel end-to-end support | Support to run 16Gb HBAs at 16Gb, but no support for full end-to-end 16Gb connectivity from host to array | Yes
App HA | No | Yes
vFlash Read Cache support | No | Yes
VMware VSAN support | No | Yes
Expanded vGPU and G-GPU support | Only NVIDIA | NVIDIA, AMD and Intel GPUs
vCenter Server Appliance with embedded database supports up to | 5 hosts and 50 virtual machines | 100 hosts and 3000 virtual machines
Microsoft Windows 2012 cluster support | No | Yes
PDL (Permanent Device Loss) AutoRemove | No | Introduced in vSphere 5.5
Graphics acceleration support for Linux guest OS | No | Yes
Hot-pluggable SSD PCIe devices | No | Yes
Support for Reliable Memory Technology | No | Yes
CPU C-state enhancement | Host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage | Processor power state (C-state) is also used, providing additional power savings and increased performance
LSI SAS support for Oracle Solaris 11 OS | No | Yes
vSphere Big Data Extensions | No | Yes
SATA-based virtual device nodes via AHCI (Advanced Host Controller Interface) support | No | Yes (supports up to 120 devices per VM)
Improved LACP support | One LACP group per distributed switch | Supports up to 64
Multiple point-in-time replicas | vSphere Replication kept only the most recent copy of a virtual machine | Version 5.5 can keep up to 24 historical snapshots
Difference between ESX and ESXi


How to access the ESX and ESXi?
ESXi Shell Access with the Direct Console
An ESXi system includes a direct console (also called DCUI) that allows you to start and stop
the system and to perform a limited set of maintenance and troubleshooting tasks. The
direct console includes the ESXi Shell, which is disabled by default. You can enable the ESXi
Shell in the direct console or by using the vSphere Client. You can enable local shell access
or remote shell access:
Local shell access allows you to log in to the shell directly from the Direct Console.
See Enabling Local ESXi Shell Access.
Remote shell (SSH) access allows you to connect to the host using a shell such as PuTTY,
specify a user name and password, and run commands in the shell.
The ESXi Shell includes all ESXCLI commands, a set of deprecated esxcfg- commands, and a
set of commands for troubleshooting and remediation.
Important All ESXCLI commands that are available in the ESXi Shell are also included in the
vCLI package.
VMware recommends you install the vCLI package on a supported Windows or Linux system,
or deploy the vMA virtual appliance, and run commands against your ESXi hosts. Run
commands directly in the ESXi Shell in troubleshooting situations only.

Enabling Local ESXi Shell Access


You can enable the ESXi Shell from the direct console or from the vSphere Client.
If you have access to the direct console, you can enable the ESXi Shell from there.
To enable the ESXi Shell in the direct console
1 At the direct console of the ESXi host, press F2 and provide credentials when prompted.
2 Scroll to Troubleshooting Options and press Enter.
3 Choose Enable ESXi Shell and press Enter.
On the left, Enable ESXi Shell changes to Disable ESXi Shell. On the right, ESXi Shell is
Disabled changes to ESXi Shell is Enabled.
4 Press Esc until you return to the main direct console screen.
If you do not have access to the direct console, you can enable the ESXi Shell from the
vSphere Client.
To enable the local or remote ESXi Shell from the vSphere Client
1 Select the host, click the Configuration tab, and click Security Profile in the Software panel.
2 In the Services section, click Properties.
3 Select ESXi Shell and click Options.
4 Change the ESXi Shell options.
To change the Startup policy across reboots, click Start and stop with host and reboot
the host.
To temporarily start or stop the service, click the Start or Stop button.
5 Click OK.
After you have enabled the ESXi Shell, you can use it from that monitor or through a serial
port.
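If the host is already reachable through another management path, the same services can also be toggled from the command line with vim-cmd; a minimal sketch (assuming these ESXi 5.x vim-cmd subcommands, verify on your build):

    vim-cmd hostsvc/enable_esx_shell   # set the ESXi Shell policy to enabled
    vim-cmd hostsvc/start_esx_shell    # start the ESXi Shell service now
    vim-cmd hostsvc/enable_ssh         # same pair for remote SSH access
    vim-cmd hostsvc/start_ssh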
The ESXi Shell timeout setting specifies how long you can leave an unused session open. By
default, the timeout for the ESXi Shell is 0, which means the session remains open even if it
is unused. If you change the timeout, for example, to 30 minutes, you have to log in again
after the timeout period has elapsed.
Note If you are logged in when the timeout period elapses, your session will persist. However, the
ESXi Shell will be disabled, preventing other users from logging in.
Setting Timeouts for the ESXi Shell
The ESXi Shell supports an availability timeout and an idle timeout. By default, each timeout
is disabled.
Availability timeout: the amount of time that can elapse before you must log in after
the ESXi Shell is enabled. After the timeout period, the service is disabled and users are
not allowed to log in.
Idle timeout: the amount of time that can elapse before the user is logged out of an
idle interactive session. Changes to the idle timeout apply the next time a user logs in
to the ESXi Shell and do not affect existing sessions.
To set ESXi Shell timeouts from the Direct Console
1 From the Troubleshooting Mode Options menu, select Modify ESXi Shell and SSH
timeouts and press Enter.


2 Enter the availability timeout, in seconds, and press Enter.


3 Enter the idle timeout, in seconds, and press Enter.
4 Press Esc until you return to the main menu of the Direct Console Interface.
To set ESXi Shell timeouts from the vSphere Web Client
1 Select the host in the inventory, click the Manage tab, and click Settings.
2 Under System, select Advanced System Settings.
3 In the left panel, click UserVars.
4 Select UserVars.ESXiShellTimeOut and click the Edit icon
5 Enter the availability timeout in minutes.
You must restart the SSH service and the ESXi Shell service for the timeout to take effect.
6 Select UserVars.ESXiShellInteractiveTimeOut and click the Edit icon
7 Enter the availability timeout in minutes.
You must restart the SSH service and the ESXi Shell service for the timeout to take effect.
8 Click OK.
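The same two UserVars settings can also be changed from the ESXi Shell or vCLI with esxcli; a minimal sketch (note the DCUI dialog above takes seconds while the Web Client dialog takes minutes, so confirm the unit your build expects for these advanced options):

    # availability timeout for the ESXi Shell service
    esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 900
    # idle timeout for interactive ESXi Shell sessions
    esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900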
Using the Local ESXi Shell
After you enable the ESXi Shell in the direct console, you can use it from main direct console
screen or remotely through a serial port.
To use the local ESXi Shell
1 At the main direct console screen, press Alt-F1 to open a virtual console window to the host.
2 Provide credentials when prompted.
When you type the password, characters are not displayed on the console.
3 Enter shell commands to perform management tasks.
4 To log out, type exit in the shell.
5 To return to the direct console, press Alt-F2.
See vSphere Installation and Setup documentation for information on serial port setup.
ESXi server port numbers for SSH, VUM, vMotion, iSCSI and HA:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1012382
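The KB above has the full matrix; a few commonly used defaults worth remembering (standard VMware port assignments, verify against the KB for your version):

    22   - SSH access to the ESXi host
    443  - vSphere Client / vSphere Web Client connections to vCenter Server or to the host
    902  - vCenter Server to ESXi host management traffic (authentication, VM console)
    3260 - software iSCSI
    8000 - vMotion
    8182 - vSphere HA (FDM) inter-agent communication
    9084 - vSphere Update Manager patch depot access by hosts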
Difference between clone and template?

Clone: Creates an exact copy of a running virtual machine at the time of the cloning process.
Template: Acts as a baseline image with a predefined configuration as per organization standards.

Clone: Cloning a virtual machine creates an exact duplicate copy of the virtual machine, with the same configuration and installed software, without performing any additional settings.
Template: Create a template to create a master image of a virtual machine from which you can deploy multiple virtual machines.

Clone: You can create a clone of an existing, installed and configured, running virtual machine by right-clicking the VM and selecting Clone.
Template: You can create a template by converting a virtual machine to a template, cloning a virtual machine to a template, or cloning another template.

Clone: VM clones are best suited to test and development environments, where you want to create, test and work with exact copies of production servers without disturbing the production servers.
Template: Templates are best suited to production environments where you want mass deployment of virtual machines along with the installed OS and basic software, with policies configured as per the security policy of your organization, as a base machine. Once a template is deployed, you can install software depending on the role of the server, like IIS or a database.

Clone: VM clones are not suited for mass deployment of virtual machines.
Template: Templates are best suited for mass deployment of virtual machines.

Clone: We cannot convert back the cloned machine.
Template: You can convert the template back to a virtual machine to update the base template with the latest released patches and updates and to install or upgrade any software, and again convert it back to a template to be used for deployment of virtual machines with the latest patches.

Clone: A cloned virtual machine can be powered on.
Template: Templates cannot be powered on.

Clone: You cannot clone a virtual machine if you have connected directly to an ESX/ESXi host using the vSphere Client.
Template: You cannot create a template of a virtual machine if you have connected directly to an ESX/ESXi host using the vSphere Client.

Clone: You can customize the guest operating system of the clone to change the virtual machine name, network settings, and other properties. This prevents conflicts that can occur if a virtual machine and a clone with identical guest operating system settings are deployed simultaneously.
Template: You can also customize the guest operating system while deploying from a template.

Clone: A clone of a virtual machine can be created while the virtual machine is powered on.
Template: Convert to Template cannot be performed while the virtual machine is powered on; only Clone to Template can be performed when the VM is powered on.

Difference between clone, template, snapshot


A clone is a copy of a virtual machine. A template is a master copy of a virtual machine that
can be used to create many clones.
When you clone a virtual machine, you create a copy of the entire virtual machine, including
its settings, any configured virtual devices, installed software, and other contents of the
virtual machine's disks. You also have the option to use guest operating system
customization to change some of the properties of the clone, such as the computer name
and networking settings.
Cloning a virtual machine can save time if you are deploying many similar virtual machines.
You can create, configure, and install software on a single virtual machine, and then clone it
multiple times, rather than creating and configuring each virtual machine individually.
A template is a master copy of a virtual machine that can be used to create and provision
virtual machines. Templates cannot be powered on or edited, and are more difficult to alter
than an ordinary virtual machine. A template offers a more secure way of preserving a virtual
machine configuration that you want to deploy many times.
A snapshot preserves the state and data of a virtual machine at a specific point in time.

The state includes the virtual machine's power state (for example, powered-on,
powered-off, suspended).

The data includes all of the files that make up the virtual machine. This includes
disks, memory, and other devices, such as virtual network interface cards.
How to create a new VM template on VMware vSphere?
A VM template is a master copy of a virtual machine which can be used to create new
virtual machines in a few clicks. Normally a template is used to create similar types of
machines. For example, to build a web server on Red Hat Linux:

1. Create a virtual machine

2. Install the Red Hat Linux operating system

3. Install the necessary prerequisite software for Apache

4. Install Apache.

If you are going to use a template, you have to set these things up only for the first VM.
Using that newly created VM, you can create a template which will act as the master copy
for future provisioning. So the bottom line is that a template is a virtual machine with the
operating system installed, plus a set of applications installed on that VM.
We can create a new virtual machine template from an existing virtual machine, or convert
the virtual machine itself into a template. Here we will see how to create a new template
from an existing VM.
1. Login to the vSphere Client and select the VM from which you want to generate a new template.


2. Right-click the VM and select Clone to Template. If you select Convert to Template, the VM
will be converted to a template permanently.

3. Enter a meaningful template name.



4. Select the cluster or ESXi host.




5. Select the datastore for VM template.



6. Click Finish, once you have reviewed the settings to create a new VM template.



7. Click on the recent tasks to check the template clone status.


8. Once the clone is completed, click on the VM & Template tab. Here you can see the
template details which you have created by cloning the existing VM.

Using the VM template, you can create new virtual machines in a few clicks. But configuring
the IP address, setting a unique host name and configuring the applications need to be
done manually after creating the new VM.
What is VMware vMotion and what are its requirements?
VMware VMotion enables the live migration of running virtual machines from one physical
server to another with zero downtime.
VMotion lets you:

Automatically optimize and allocate entire pools of resources for maximum hardware
utilization and availability.

Perform hardware maintenance without any scheduled downtime.

Proactively migrate virtual machines away from failing or underperforming servers.

Below are the pre-requisites for configuring vMotion


Each host must be correctly licensed for vMotion

Each host must meet shared storage requirements

vMotion migrates the VM from one host to another, which is only possible when both
hosts share a common storage or the VM resides on storage accessible by both the
source and target hosts.

A shared storage can be on a Fibre Channel storage area network (SAN), or can be
implemented using iSCSI SAN and NAS.

If you use vMotion to migrate virtual machines with raw device mapping (RDM) files,
make sure to maintain consistent LUN IDs for RDMs across all participating hosts.

Each host must meet the networking requirements

Configure a VMkernel port on each host.

Dedicate at least one GigE adapter for vMotion.

Use at least one 10 GigE adapter if you migrate workloads that have many memory
operations.

Use jumbo frames for best vMotion performance.

Ensure that jumbo frames are enabled on all network devices that are on the vMotion
path including physical NICs, physical switches and virtual switches.
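As a sketch of the networking prerequisites above, a vMotion VMkernel port can be created from the ESXi Shell roughly as follows (vmk1, the port group name and the addresses are placeholders; the port group must already exist on a vSwitch):

    # create a VMkernel NIC on an existing port group
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name="vMotion"
    # assign it a static IP on the vMotion network
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
    # tag the interface for vMotion traffic
    vim-cmd hostsvc/vmotion/vnic_set vmk1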
VLAN Tagging in ESX (VST, EST & VGT)

There are 3 types of VLAN tagging available in vSphere:

1. Virtual Switch Tagging (VST)
2. External Switch Tagging (EST)
3. Virtual Guest Tagging (VGT)

There is no specific setting named "VLAN Tagging" available in the vSphere host
network settings. VLAN tagging is determined by the VLAN value specified at the port
group, and it tells the vSwitch, physical switch or virtual machine how to handle the
VLAN tagging.
1. Virtual Switch Tagging (VST)
1.1 VST uses 802.1q VLAN trunks and tagged traffic.
1.2 VLAN tagging for all packets is performed by the virtual switch before they leave the
ESX/ESXi host.
1.3 Port groups on the virtual switch of the ESX server should be configured with a VLAN ID
(1-4094); see the example command after this list.
1.4 The vSwitch's responsibility is to strip off the VLAN tag and send the packet to the virtual
machine in the corresponding port group.
1.5 Reduces the number of physical NICs on the server by running all the VLANs over one
physical NIC. A better solution would be keeping 2 NICs for redundancy.
1.6 Reduces the number of cables from the ESX server to the physical switch.
1.7 The physical switch port connecting the uplink from the ESX should be configured as a
trunk port.
1.8 A virtual machine network packet is delivered to the vSwitch, and before it is sent to the
physical switch the packet is tagged with the VLAN ID according to the port group membership
of the originating virtual machine.
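For VST on a standard vSwitch, the port group VLAN ID from point 1.3 can be set from the ESXi Shell; a minimal sketch with placeholder names:

    # tag the port group with VLAN 100 (VST);
    # a VLAN ID of 0 means the vSwitch does no tagging (EST),
    # and 4095 trunks all VLANs through to the guest (VGT)
    esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=100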

2. External Switch Tagging (EST)

2.1 In EST, the ESX host doesn't see any VLAN tags and does not handle any VLAN tagging.
2.2 All the tagging operation is done by the physical switch and the virtual switch is not
aware of it.
2.3 Number of physical NICs = number of VLANs connected to the ESX.
2.4 Port groups on the virtual switch of the ESX server need not be configured with a
VLAN number, or configure VLAN ID 0 (if it is not the native VLAN).
2.5 The count of NICs and cables connected to the ESX is higher compared to the VST approach.
2.6 The physical switch port connecting the uplink from the ESX should be configured as an
access port assigned to a specific VLAN.
2.7 A virtual machine network packet is delivered to the physical switch without any tagging
operation performed by the virtual switch.


3. Virtual Guest Tagging (VGT)

3.1 You must install an 802.1Q VLAN trunking driver inside the virtual machine guest
operating system.
3.2 All the VLAN tagging is performed by the virtual machine with the use of the trunking
driver in the guest.
3.3 VLAN tags are understood only between the virtual machine and the external switch
when frames are passed to/from virtual switches.
3.4 The virtual switch will not be involved in or aware of this operation. The vSwitch only
forwards the packets from the virtual machine to the physical switch and will not perform
any tagging operation.
3.5 The port group of the virtual machine should be configured with VLAN ID 4095.
3.6 The physical switch port connecting the uplink from the ESX should be configured as a
trunk port.

Below is a comparison table for those who want the differences in a single table:

Tagging type | Who applies the VLAN tag | Port group VLAN ID | Physical switch port mode
VST | Virtual switch | 1-4094 | Trunk
EST | Physical switch | 0 / none | Access
VGT | Guest OS (trunking driver in the VM) | 4095 | Trunk


Virtual Disk Provisioning Policies


Thick Provision Lazy Zeroed – Creates a virtual disk in a default thick format. Space required
for the virtual disk is allocated when the virtual disk is created. Data remaining on the
physical device is not erased during creation, but is zeroed out on demand at a later time,
on first write from the virtual machine.
Using the default flat virtual disk format does not zero out or eliminate the possibility of
recovering deleted files or restoring old data that might be present on this allocated space.
You cannot convert a flat disk to a thin disk.

Thick Provision Eager Zeroed – A type of thick virtual disk that supports clustering features
such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In
contrast to the flat format, the data remaining on the physical device is zeroed out when the
virtual disk is created. It might take much longer to create disks in this format than to create
other types of disks.

Thin Provision – Use this format to save storage space. For the thin disk, you provision as
much datastore space as the disk would require based on the value that you enter for the
disk size. However, the thin disk starts small and at first uses only as much datastore space
as the disk needs for its initial operations.
What is HA?

VMware HA, i.e. High Availability, works at the host level and is configured on the cluster.

A cluster configured with HA will automatically restart all the VMs running under any of
its hosts on another host in the same cluster in case of a host-level failure.

VMware HA continuously monitors all ESX Server hosts in a cluster and detects failures.

A VMware HA agent placed on each host maintains a heartbeat with the other hosts in
the cluster using the service console network. Each server sends heartbeats to the
other servers in the cluster at five-second intervals. If any server loses heartbeat
over three consecutive heartbeat intervals (i.e. 15 seconds), VMware HA initiates the
failover action of restarting all affected virtual machines on other hosts.

You can set virtual machine restart priority in case of any host failure depending upon
the critical nature of the VM.

NOTE: In case of a host failure, HA will RESTART the VMs on a different host, so the
VMs' state is interrupted; it is not a live migration.
How does HA work?
How do you set up HA for a VM?


Applying a VMware HA customization


Using the vSphere Web Client
1. Log in to VMware vSphere Web Client.
2. Click Home > vCenter > Clusters.
3. Under Object click on the cluster you want to modify.
4. Click Manage.
5. Click vSphere HA.
6. Click Edit.
7. Click Advanced Options.
8. Click Add and enter in Option and Value fields as appropriate (see below).
9. Deselect Turn ON vSphere HA.
10. Click OK.
11. Wait for HA to unconfigure, click Edit and check Turn ON vSphere HA.
12. Click OK and wait for the cluster to reconfigure.
Using the vSphere Client
1. Log in to vCenter Server with vSphere Client as an administrator.
2. Right-click the Cluster in the Inventory and click Edit Settings.
3. Click VMware HA.
4. Click the Advanced Options button.
5. Enter Option and Value fields as appropriate (see below).
6. Click OK.
7. Click OK again.
8. Wait for the Reconfigure Cluster task to complete and right-click the Cluster again
from the Inventory.
9. Click Properties.
10. Disable VMware HA and wait for the Reconfiguration Cluster task(s) to complete.
11. Right-click the cluster and Enable VMware HA to have the settings take effect.
Note: See below if reconfiguration of the hosts is necessary.
There are three types of HA advanced options and each is set in a different way.


vCenter Server options (VC) -- these options are configured at the vCenter Server
level and apply to all HA clusters unless overridden by cluster-specific options in
cases where such options exist. If the vCenter Server options are configured using
the vCenter Server options manager, a vCenter Server restart may not be required;
see the specific options for details. But if these options are configured by adding the
option string to the vpxd.cfg file (as a child of the config/vpxd/das tag; see the sketch
after this list), a restart is required.

Cluster options (cluster) -- these options are configured for an individual cluster and,
if they impact the behavior of the HA agent (FDM), they apply to all instances of FDM in
that cluster. These options are configured by using the HA cluster-level advanced
options mechanism, either via the UI or the API. Options with names starting with
"das.config." can also be applied using the "fdm options" mechanism below, but this
is not recommended because the options should be equally applied to all FDM
instances.

fdm options (fdm) -- these options are configured for an individual FDM instance on a
host. They are configured by adding the option to
the /etc/opt/vmware/fdm/fdm.cfg file of the host as a child of the config/fdm tag.
Options set in this way are lost when FDM is uninstalled (for example if the host is
removed from vCenter Server and then re-added) or if the host is managed by Auto
Deploy and is rebooted.
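As referenced in the list above, a das option set at the vCenter Server level maps into vpxd.cfg as a child of the config/vpxd/das tag; a minimal sketch (the option das.slotMemInMB is used purely as an example, and a vCenter Server restart is required after editing):

    <config>
      <vpxd>
        <das>
          <!-- corresponds to the advanced option das.slotMemInMB -->
          <slotMemInMB>1024</slotMemInMB>
        </das>
      </vpxd>
    </config>
    <!-- fdm options go into /etc/opt/vmware/fdm/fdm.cfg analogously, as children of config/fdm -->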

Common Options

Versi
on
Name

Description

Type
of
Reconfigurat Opti
ion
on

Cluster Configuration
Allows you to specify the
specific management
networks used by HA, where X
is a number between 0 and 9.
For example if you set a value
to Management Network,
only the networks associated
with port groups having this
name are used. Ensure that all
hosts are configured with the
named port group and the
networks are compatible. In
5.5, this option is ignored if
vSAN is enabled for the
cluster.

5.0,
5.1,
5.5

das.allowNetworkX

5.0,
5.1,
5.5

HA will report a config issue on


a host if the host is not
configured with redundant
networks for the networks
used by HA. Prior to 5.5, HA
only uses management
networks, while in 5.5, if vSAN
is enabled, HA will use the
networks configured for vSAN.
Valid values are true/false. Set
to true to suppress the config
das.ignoreRedundantNetWarnin issue. False is assumed if the
g
option is not set.

30 | P a g e

Yes.
Reconfigure
HA on all hosts
to have the
specification
Clust
take effect.
er

Yes.
Reconfigure
HA on a host
to have the
config issue
for that host
cleared.

Clust
er

HA chooses by default 2
heartbeat datastores for each
host in an HA cluster. This
option can be used to increase
the number to a value in the
range of 2 to 5 inclusive.

Yes.
Reconfigure
HA on all hosts Clust
in the cluster. er

5.0,
5.1,
5.5

das.heartbeatDsPerHost

5.0,
5.1,
5.5

HA will report a host config


issue if it was not able to
select the required number of
datastores for a host given by
das.heartbeatDsPerHost. Set
this option to true to suppress
this warning, and false to
enable it. A value of false is
das.ignoreInsufficientHbDatasto assumed if the option is not
re
set.

5.0,
5.1,
5.5

Whether to check the cluster


for compliance with Fault
Tolerance as part of the cluster
profile compliance check. Set
this option to false if you don't
plan to use FT in the cluster. A
value of true enables the
checks. If unset, a value of
das.includeFTcomplianceChecks true is assumed.
No

Yes.
Reconfigure
HA on all hosts Clust
in the cluster. er

Clust
er

Admission Control

5.0,
5.1,
5.5

5.0,
5.1,
5.5

5.0,
5.1,
5.5
5.0,
5.1,
5.5

das.vmMemoryMinMB

Value in MB to use for the


memory reservation of a
virtual machine if no non-zero
memory reservation is set by
a user. 0 is assumed if the
option is not set.

No

Clust
er

das.vmCpuMinMHz

Value in MHz to use for the


CPU reservation of a virtual
machine if no non-zero CPU
reservation are set by a user.
32 is assumed if the option is
not set.

No

Clust
er

das.slotCpuInMHz

Maximum value in MHz to use


for CPU component of the slot
size. No limit is imposed if the
option is not set. In 5.1, the
CPU component of the slot
size can be exactly specified
in the UI and the API (see the
vim.cluster.slotPolicy object).
Note that this option and the
UI/API behave differently -this option sets a max while
the UI/API sets the exact
value. If a slot policy is defined
and this option is specified,
the value specified by this
option is ignored.
No

Clust
er

das.slotMemInMB

31 | P a g e

Maximum value in MB to use


No
for memory component of the
slot size. No limit is imposed if
the option is not set. In 5.1,
the memory component of the

Clust
er

slot size can be exactly


specified in the UI and the API
(see the vim.cluster.slotPolicy
object). Note that this option
and the UI/API behave
differently -- this option sets a
max while the UI/API sets the
exact value. If a slot policy is
defined and this option is
specified, the value specified
by this option is ignored.
Restarting virtual machines

das.maxvmrestartcount (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
The maximum number of times an FDM master will try to restart a virtual machine before giving up. Five attempts are made if this option is unset. This limit only applies if the time since the first restart attempt was made is less than das.maxvmrestartperiod. Note that FT secondary virtual machine restarts are governed by the separate parameter das.maxftvmrestartcount. Warning: setting this value to a very high number creates a large amount of extra logging, which can have an impact on your system log directories.

das.maxvmrestartperiod (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
The maximum amount of time (in seconds) during which an FDM master will attempt to restart a virtual machine after the first restart attempt failed. The time is measured from when the FDM master first tried to restart the virtual machine. This time limit takes precedence over das.maxvmrestartcount. No time limit is imposed if this option is unset.

das.maxftvmrestartcount (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
The maximum number of times an FDM master will try to start a secondary virtual machine for an FT virtual machine pair before giving up. Five attempts are made if this option is unset. Warning: setting this value to a very high number creates a large amount of extra logging, which can have an impact on your system log directories.

das.maskCleanShutdownEnabled (versions 5.0U1, 5.1, 5.5; type: Cluster; reconfiguration: No)
When a virtual machine powers off and its home datastore is not accessible, HA cannot determine whether the virtual machine should be restarted, so it must make a decision. If this option is set to false, the responding FDM master will assume the virtual machine should not be restarted, while if this option is set to true, the responding FDM will assume the virtual machine should be restarted. If the option is unset in 5.0U1, a value of false is assumed, whereas in ESXi 5.1 and later, a value of true is assumed.

das.respectVmVmAntiAffinityRules (version 5.5; type: Cluster; reconfiguration: No)
Respect vm-vm anti-affinity rules when restarting virtual machines after a failure. The valid values are "false" (default) and "true".

Isolation Response

das.isolationAddressX (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
IP addresses an FDM agent uses to check for isolation when no agent network traffic is observed on the network(*) used by HA, where X = 0-9. HA will use the default management-network gateway as an isolation address by default, plus those specified by this advanced option as additional addresses to check. We recommend adding an isolation address for each management network used by HA. (*) Prior to 5.5, HA uses only the management network, but in 5.5, when vSAN is also enabled on the cluster, HA will use the vSAN network for inter-agent communication.

das.useDefaultIsolationAddress (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
Whether the default isolation address (gateway of the management network) should be used when determining if a host is network isolated. Valid values are true/false. By default, the management network default gateway is used. If the default gateway is a non-pingable address, set das.isolationaddressX to a pingable address and disable the usage of the default gateway by setting this option to false.

das.config.fdm.isolationPolicyDelaySec (versions 5.1, 5.5; type: Cluster; reconfiguration: No)
The number of seconds an FDM agent waits before executing the isolation policy once it has determined that the host is isolated. The minimum value is 30. If set to a value less than 30, the delay is 30 seconds.

das.isolationShutdownTimeout (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
The number of seconds an FDM waits for a virtual machine to power off after initiating a guest shutdown, before the FDM issues a power off. If the option is unset, 300s is used.

Virtual machine/App Monitoring

das.iostatsInterval (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
If an FDM detects that a sufficient number of VMtools heartbeats are missing to trigger a virtual machine's configured virtual machine/App monitoring policy, the FDM checks if any I/O has been issued in the last ioStatsInterval, and will only reset the virtual machine if no I/O occurred in this interval. Values of 0 or greater are valid. 120s is assumed if the option is unset.

Fault Tolerance

das.maxFtVmsPerHost (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
Specifies the number of Fault Tolerance virtual machines that can be run on a host at one time. If unset, a value of 4 is used. A value of -1 or 0 disables the limit. The limit is enforced by vCenter Server when executing user-initiated power-ons and vMotions, and by DRS when doing initial placement and load balancing. HA does not enforce this limit, to maximize uptime. DRS does not correct any violations of this limit.

Logging

das.config.log.maxFileNum (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: Yes)
Controls the number of FDM log-file rotations retained by the FDM file-based logger. The file-based logger is used by default only by the FDM when running on ESX versions earlier than ESX 5.0. If you wish to change the number of log-file rotations maintained for a pre-ESX 5.0 host, set this option to the desired number of log files. For ESX 5.0 and later hosts, the FDM logs to syslog by default, so you need to use the syslog configuration mechanism to change the amount of retained logging history. However, it is possible to enable the file-based logger for ESXi 5.0 and later hosts also. To do so, set this option to a valid value. If you are using vSphere 5.0 Update 1 or later, you must also set the option das.config.log.outputToFiles to true. For all ESX versions, setting the option das.config.log.maxFileNum to 1 will disable the log-file rotations. The location of log files can be changed using the option das.config.log.directory.

das.config.log.maxFileSize (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: Yes)
Controls the size of each log file written out by the FDM file-based logger. Files are 1 MB in size unless this option is specified. This option is used in conjunction with das.config.log.maxFileNum to control the log history.

Less Common Options

Caution: These options have a range of subtle effects and should not be used in production environments unless directed by VMware Support.

Each entry below lists the option name, the versions it applies to, the type of option (Cluster, VC, or fdm), and whether reconfiguration is required for a change to take effect.

Cluster Configuration

vpxd.das.aamMemoryLimit (versions 5.0, 5.1, 5.5; type: VC; reconfiguration: Yes, HA must be reconfigured on all hosts for which the change is required)
Memory limit in MB for the resource pool used by HA (the aam resource pool). If unspecified, 100 MB is used. The value applies to all clusters in the vCenter Server inventory.

vpxd.das.electionWaitTimeSec (versions 5.0, 5.1, 5.5; type: VC; reconfiguration: No, applied the next time an FDM is configured)
How long vCenter Server waits, in seconds, after sending the host list to a new host for vCenter Server to learn the outcome of the election. A timeout exception is thrown if the host is not a master or connected slave by the timeout. If not specified, a value of 120 seconds is used. The value cannot exceed 2000, as that causes failures of HA.

fdm.nodeGoodness (versions 5.0, 5.1, 5.5; type: fdm; reconfiguration: No, the new goodness value is used in the next election)
When a master election is held, the FDMs exchange a goodness value, and the FDM with the largest goodness value is elected master. Ties are broken using the host IDs assigned by vCenter Server. This parameter can be used to override the computed goodness value for a given FDM. To force a specific host to be elected master each time an election is held and the host is active, set this option to a large positive value. This option should not be specified at the cluster level.

vpxd.das.sendProtectListIntervalSec (versions 5.0, 5.1, 5.5; type: VC; reconfiguration: Yes, vCenter Server needs to be restarted after setting this option)
Minimum time (in seconds) between consecutive calls by vCenter Server to the HA master agent (the one it is in contact with) to request that it protect a new virtual machine. If not specified, 60s is used. This option also controls how frequently vCenter Server sends the master updates to the virtual machine to host compatibility information for virtual machines that are powered on when their compatibility with hosts changes.

fdm.cluster.vsanDatastoreLockDelay (version 5.5; type: fdm; reconfiguration: No, the value is read when the master is elected)
The delay (in seconds) before the vSAN datastore object is "acquired". Failover of virtual machines on a datastore does not take place until the vSAN datastore has been acquired by the master. The delay gives time for an isolated or partitioned slave to communicate its powered-on virtual machines, to avoid duplicate power-ons. The default is to wait 30 seconds, and only if there are heartbeat datastores defined.
Admission Control

vpxd.das.slotMemMinMB (versions 5.0, 5.1, 5.5; type: VC; reconfiguration: No, the value is taken into account the next time admission control is done)
vCenter Server-wide default value in MB to use for the memory reservation if no memory reservation is specified for a virtual machine. Setting the cluster option das.vmMemoryMinMB for a cluster will override this value for that cluster. If this option is not set, a value of zero is assumed unless overridden by das.vmMemoryMinMB.

vpxd.das.slotCpuMinMHz (versions 5.0, 5.1, 5.5; type: VC; reconfiguration: No, the value is taken into account the next time admission control is done)
vCenter Server-wide default value in MHz to use for the CPU reservation if no CPU reservation is specified for a virtual machine. Setting the cluster option das.vmCpuMinMHz for a cluster will override this value for that cluster. If this option is not set, a value of 32 is assumed unless overridden by das.vmCpuMinMHz.

Detecting Failures

das.config.fdm.hostTimeout (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: Yes, reconfigure HA on all hosts)
Controls the time in seconds a master FDM waits for a slave FDM to respond to a heartbeat before declaring the slave host not connected and initiating the workflow to determine whether the host is dead, isolated, or partitioned. If not specified, 10s is used.

fdm.deadIcmpPingInterval (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: in ESXi 5.0, after making a change, HA must be reconfigured on all hosts in the cluster; in 5.1 and later, No)
ICMP pings are used to determine whether a slave host is network accessible when the FDM on that host is not connected to the master. This option controls the interval (expressed in seconds) between pings. If not specified, 10s is used.

das.config.fdm.icmpPingTimeout (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: in ESXi 5.0, after making a change, HA must be reconfigured on all hosts in the cluster; in 5.1 and later, No)
Defines the time an FDM waits in seconds for an ICMP ping reply before assuming the host being pinged is not network accessible. If not specified, 5s is used.

vpxd.das.heartbeatPanicMaxTimeout (versions 5.0, 5.1, 5.5; type: VC; reconfiguration: Yes, after setting the option, HA needs to be reconfigured on all hosts in all HA clusters)
This option impacts how long it takes for a host impacted by a PSOD to release file locks and hence allow HA to restart virtual machines that were running on it. If not specified, 60s is used. HA sets the host Misc.HeartbeatPanicTimeout advanced option to the value of this HA option. The HA option is in seconds.

Restarting virtual machines

das.config.fdm.policy.unknownStateMonitorPeriod (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
Defines the number of seconds the HA master agent waits after it detects that a virtual machine has failed before it attempts to restart the virtual machine. If not specified, 10s is used.

das.perHostConcurrentFailoversLimit (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
The number of concurrent failovers a given FDM will have in progress at one time. Setting a larger value will allow more virtual machines to be restarted concurrently, but will also increase the average latency to power each one on, since a greater number adds more stress on the hosts and storage. The default value is 32. This value was determined empirically to provide the minimum overall latency.
Virtual machine operation coordination

das.config.fdm.ft.cleanupTimeout (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
When a vSphere Fault Tolerance virtual machine is powered on by vCenter Server, vCenter Server informs the HA master agent that it is doing so. This option controls how many seconds the HA master agent waits for the power on of the secondary virtual machine to succeed. If the power on takes longer than this time (most likely because vCenter Server has lost contact with the host or has failed), the master agent will attempt to power on the secondary virtual machine. If the option is not specified, 900s is used.

das.config.fdm.storageVmotionCleanupTimeout (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: No)
When a Storage vMotion is done in an HA-enabled cluster using pre-5.0 hosts and the home datastore of the virtual machine is being moved, HA may interpret the completion of the Storage vMotion as a failure, and may attempt to restart the source virtual machine. To avoid this issue, the HA master agent waits the specified number of seconds for a Storage vMotion to complete or fail. When the Storage vMotion completes or the timer expires, the master will assess whether a failure occurred. If the option is not specified, 900s is used for the timeout.

Reporting

das.config.log.outputToFiles (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: Yes)
Enables the FDM file-based logger for ESXi 5.0 and later hosts. 5.0 hosts log to the ESXi syslog, so file-based logging is not enabled by default. This option has no effect on pre-5.0 hosts. To enable the file-based logger, set das.config.log.outputToFiles to true and das.config.log.maxFileNum to a number greater than 2. To disable file-based logging, set this option to false.

das.config.log.directory (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: Yes)
Sets the directory used by the FDM file-based logger. If not specified, files are written into /var/log/vmware/fdm. See the option das.config.log.maxFileNum for more information.

das.config.fdm.stateLogInterval (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: in ESXi 5.0, Yes, HA must be reconfigured on all hosts; in ESXi 5.1 and later, No)
Frequency in seconds at which an FDM logs a summary of the cluster state. If not specified, 600s (10 min) is used.

das.config.fdm.event.maxMasterEvents (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: in ESXi 5.0, Yes, HA must be reconfigured on all hosts; in ESXi 5.1 and later, No)
Defines the maximum number of events cached by the master. If not specified, 1000 are cached.

das.config.fdm.event.maxSlaveEvents (versions 5.0, 5.1, 5.5; type: Cluster; reconfiguration: in ESXi 5.0, Yes, HA must be reconfigured on all hosts; in ESXi 5.1 and later, No)
Defines the maximum number of events cached by a slave. If not specified, 600 are cached.

vpxd.das.reportNoMasterSec (versions 5.0, 5.1, 5.5; type: VC; reconfiguration: Yes, vCenter Server needs to be restarted)
A vCenter Server parameter that determines how long to wait in seconds before issuing a cluster config issue to report that vCenter Server was unable to locate the HA master agent for the corresponding cluster. If not specified, 120s is used.

Configure Admission Control


After you create a cluster, admission control allows you to specify whether virtual machines
can be started if they violate availability constraints. The cluster reserves resources to allow
failover for all running virtual machines on the specified number of hosts.
The Admission Control page appears only if you enabled vSphere HA.
Procedure
1. In the vSphere Web Client, browse to the vSphere HA cluster.
2. Click the Manage tab and click Settings.
3. Under Settings, select vSphere HA and click Edit.
4. Expand Admission Control to display the configuration options.
5. Select an admission control policy to apply to the cluster:

 Define failover capacity by static number of hosts: select the maximum number of host failures that you can recover from or guarantee failover for. You must also select a slot size policy.

 Define failover capacity by reserving a percentage of the cluster resources: specify a percentage of the cluster's CPU and memory resources to reserve as spare capacity to support failovers.

 Use dedicated failover hosts: select hosts to use for failover actions. Failovers can still occur to other hosts in the cluster if a default failover host does not have enough resources.

 Do not reserve failover capacity: this option allows virtual machine power-ons that violate availability constraints.

6. Click OK.

Admission control is enabled and the policy that you chose takes effect.


How to Configure VMware High Availability (HA) Cluster


In this article you'll learn how to configure a VMware High Availability (HA) cluster. It's not an in-depth article about HA, but it gives you enough knowledge to get things running.
The VMware HA components changed in vSphere 5, where the AAM agent (Automated Availability Manager), responsible for the communication between the hosts present in the cluster and vCenter, was replaced by FDM (Fault Domain Manager).
VMware High Availability (HA) components
VMware vSphere High Availability (HA) is composed of three main components, each of which plays a different role:

 FDM (Fault Domain Manager): responsible for communication between the hosts that are part of the cluster, informing the other members about available resources and VM state. FDM manages the restart of VMs in case a host fails.

 Hostd: the agent responsible for communication between the ESXi host and vCenter. If this agent has a problem, HA stops functioning. It can be restarted from the DCUI (under Troubleshooting Options > Enter > F11 > restart the services) or through the CLI:

./sbin/services.sh restart

 vCenter Server: used to deploy and configure the FDM agents inside the cluster, and it manages the election of the master server. If the whole vCenter Server (or just the service) fails, HA still continues to work.

[Figure: basic two-host HA cluster running in my lab.]

To be able to create a cluster with ESXi hosts, a vCenter Server is needed. The most basic package, vSphere Essentials, cannot be used: its limited licensing does not allow you to create an HA cluster; only vSphere Essentials Plus allows that. The Essentials package basically allows you to manage three hosts from a central location, which is the vCenter Server for Essentials.

 vSphere Data Protection (backup product)

 vSphere Replication (VR): can replicate VMs to another host for DR scenarios (supports VSS)

 vShield Endpoint (AV, security)

Essentials Plus is an entry-level package for small businesses that need to consolidate (virtualize) around 20-30 physical servers and make their VMs highly available. A hardware failure of a physical host triggers an automatic restart of its VMs on another host in the cluster.
VMware vSphere High Availability Cluster Requirements
There are many requirements for VMware HA. The first of them is to have the right VMware vSphere license, as I mentioned above. Here are the other requirements:

 vSphere Essentials Plus or higher.

 Shared storage: you'll need some kind of shared storage. I say "some kind" since you can use a dedicated storage device (NAS, SAN), or you can use other (software-based) products which emulate shared storage, like StorMagic's SvSAN or VMware's vSphere Storage Appliance for Essentials Plus, or you can also transform Windows servers into a SAN (with StarWind iSCSI SAN).

 CPU compatibility between the hosts: the ideal cluster is one with exactly the same hardware and memory size. A small 3-host cluster allowing you to run 20-30 VMs will satisfy most SMBs. But you can use VMware EVC to adjust cluster settings for CPU compatibility.
Once you install vCenter Server and configure the network of each of your ESXi hosts, you can start creating your cluster. Each of your hosts should have redundancy assured by using at least two physical NICs for each network:

 management network
 vm network
 vMotion network

To make this article shorter, I'm skipping the network configuration now. The installation of vCenter Server on a Windows Server OS is another piece which is not covered in my article, as you can simply use the easy install, or you can deploy the vCenter Server Appliance (vCSA). Read my detailed article: How-to install vCenter Server Appliance (vCSA) and possibly save on Microsoft's licensing.
The vCSA has the advantage that it's an all-in-one prepackaged product, part of the bundle, so there is no need to install the individual components one by one.
Another requirement for creating a VMware HA cluster is a solid DNS architecture with forward and reverse zones created and working. If not already done, create the necessary records on your DNS server now.
Let's create the datacenter and cluster now.
To do so, fire up the vSphere Client and go to Hosts and Clusters.

Then, position yourself on the Manage tab > right-click the vCenter server > New Datacenter


Once done, you should see a new icon appear. I called my datacenter "vladan". Then again, right-click the datacenter you just created, and create a new cluster.

While going through the assistant you're asked if you want to Turn On DRS and Turn On HA. If you're on the Essentials Plus licensing, you'll get a pop-up saying that the feature isn't available with Essentials Plus, or something like this, as DRS is available only in Enterprise and Enterprise Plus.

If you don't want to activate those options now, you can leave them unchecked and continue the assistant.
You can do exactly the same steps using the vSphere Windows Client, as configuration of a VMware vSphere HA cluster is still a base element of VMware, and the new vSphere Web Client only brings new functions like vSphere Enhanced vMotion or the deployment and management of vSphere Replication.
So we have a datacenter and we have a cluster. Now we need to add our ESXi hosts to the cluster. To do so, just follow these steps: right-click (I like right-clicking) the HA cluster we just created > Add Host.
As you can see, my host's FQDN (fully qualified domain name) is esxi5-01.vladan-fr.local.

You're prompted for the root password on that host.

You'll also receive a security prompt before validating the assistant.

The last point is to attach a license. In my case, the license has already been entered in vCenter Server, so I can assign that license to the host. When you first install your hosts and vCenter Server, you have 60 days to enter your license, and here through this assistant you can use the Evaluation Mode license. But after 60 days, the VMs will get disconnected from vCenter and HA won't function.

Optionally, to reinforce your company's security, you can prevent login directly to the host by checking Enable lockdown mode. Users will then be forced to log in only through vCenter.
What is the difference between VMware HA and vMotion?


VMware HA is used in the event that any of the hosts inside a cluster fails: all the virtual machines that were running on it are restarted on a different host in the same cluster.

Note that HA does not use vMotion for this; the failed VMs are restarted from shared storage on a surviving host. vMotion, by contrast, is used for live migration: it can move a running VM between hosts in the cluster without interrupting its state, and it is what DRS relies on for load balancing.
What is storage vMotion?

 Storage vMotion is similar to vMotion in the sense that something related to the VM is moved and there is no downtime for the VM guest and end users. However, with Storage vMotion the VM guest stays on the server that it resides on; it is the virtual disk for that VM that moves.

 With Storage vMotion, you can migrate a virtual machine and its disk files from one datastore to another while the virtual machine is running.

 You can choose to place the virtual machine and all its disks in a single location, or select separate locations for the virtual machine configuration file and each virtual disk.

 During a migration with Storage vMotion, you can transform virtual disks from Thick Provisioned Lazy Zeroed or Thick Provisioned Eager Zeroed to Thin Provisioned, or the reverse.

 Storage vMotion performs live migration of virtual machine disk files across any Fibre Channel, iSCSI, FCoE and NFS storage.

Where to get the HA Status?


NAME
haStatus Obtain configuration and status information of High-availability objects.
SYNOPSIS
haStatus [-help] [ -c clustername ] [ -a | -i ]
DESCRIPTION
haStatus command provides configuration and status information of clusters, nodes in the
cluster, resource groups and resources in a high-availability cluster.
haStatus command uses cluster_mgr command to obtain information.
OPTIONS
haStatus takes several options:

-a
Prints detailed configuration and status information of all objects.

-i
Prints detailed configuration information of all objects.

-c clustername
Prints information only about objects in the cluster clustername.

-help
Prints command usage information.
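Note that haStatus is generic cluster tooling. On an ESXi 5.x host itself, a quick way to check on the vSphere HA agent (FDM) is to look for its process from the shell; treat the vmware-fdm init script below as an assumption based on typical ESXi 5.x HA deployments, since it is only present once the host has joined an HA cluster:
# ps | grep -i fdm
# /etc/init.d/vmware-fdm restart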

How to promote or demote the Server in HA config?


Where is the HA config file? And the log file?
The HA configuration is stored in a local on-disk database.
/var/log/ contains all the log files. VMware's log files start with the letters "vm". The general main log file is messages.
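As a rough illustration (paths assumed from typical ESXi 5.x HA deployments; verify on your build), the FDM agent keeps its local cluster configuration under /etc/opt/vmware/fdm and writes its log to /var/log/fdm.log:
# ls /etc/opt/vmware/fdm
# tail -n 20 /var/log/fdm.log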
What is DRS?
VMware DRS (Distributed Resource Scheduler) is a utility that balances computing workloads
with available resources in a virtualized environment. The utility is part of a virtualization
suite called VMware Infrastructure 3.
With VMware DRS, users define the rules for allocation of physical resources among virtual
machines. The utility can be configured for manual or automatic control. Resource pools can
be easily added, removed or reorganized. If desired, resource pools can be isolated between
different business units. If the workload on one or more virtual machines drastically changes,
VMware DRS redistributes the virtual machines among the physical servers. If the overall
workload decreases, some of the physical servers can be temporarily powered-down and the
workload consolidated.
Other features of VMware DRS include:

Dedicated infrastructures for individual business units

Centralized control of hardware parameters

Continuous monitoring of hardware utilization

Optimization of the use of hardware resources as conditions change


Prioritization of resources according to application importance

Downtime-free server maintenance

Optimization of energy efficiency

Reduction of cooling costs.

How DRS works?


VMware Distributed Resource Scheduler (DRS) is a feature which is available in vCenter to
balance the load on ESX/ESXi Servers.
VMware DRS allocates and balances resources in a DRS cluster. It does this dynamically and
continuously monitors for changes in utilization.
Resource pools are used to allocate resources to a set of virtual machines in a DRS cluster.
When load increases in a VM, DRS will redistribute VMs to other physical servers if required
to ensure all VMs get their correct share of resources.
When a VM is powered on DRS is used to decide which server it is best to be placed on.
If a VM is running and DRS decides that it needs to be placed on another physical server to
ensure its requirements are met, vMotion is used.
This allows the VM to be moved without powering it off or loss of service, allowing resources
to be balanced.
What is VMware DRS and how does it work?

 DRS stands for Distributed Resource Scheduler; it dynamically balances resources across the hosts in a cluster or resource pool.

 VMware DRS allows users to define the rules and policies that decide how virtual machines share resources and how these resources are prioritized among multiple virtual machines.

 Resources are allocated to a virtual machine either by migrating it to another server with more available resources or by making more space for it on the same server by migrating other virtual machines to different servers.

 The live migration of virtual machines to different physical servers is executed completely transparently to end users through VMware vMotion.

 VMware DRS can be configured to operate in either automatic or manual mode. In automatic mode, VMware DRS determines the best possible distribution of virtual machines among the physical servers and automatically migrates virtual machines to the most appropriate servers. In manual mode, VMware DRS provides a recommendation for optimal placement of virtual machines, and leaves it to the system administrator to decide whether to make the change.
Vmware License types?

How to add license?


Adding License Keys
To add licenses:
1. Log in to the vSphere Client.
2. Click Home.
3. Under the Administration section, click the Licensing icon.
4. Click Manage vSphere Licenses.
5. Enter the License Key in the Enter new vSphere license keys field (one per line).
6. Include labels for new license keys as necessary.
7. Click Add License Keys.
After clicking Add License Keys, you can review the license keys you added, capacity
counts, expiration dates, and labels associated with the license keys.
8. Click Next to assign the license keys.
Assigning License Keys
To assign licenses to the vCenter Server or the ESXi host:
1. Log in to the vSphere Client.
2. Click Home.
3. Under the Administration section, click the Licensing icon.
4. Choose Evaluation Mode and expand the list. Find the product you want to license.
5. Right-click on the product and click Change License Key.
6. Assign a key from list that was entered previously on Manage License window.
7. Click OK.
8. Verify that the product is licensed now.
Note: When the cursor is hovered over a license key, the Manage vSphere Licenses wizard displays a tooltip with all of the asset's information.


Removing License Keys


Note: Ensure that the asset such as ESXi host is not currently registered with vCenter
Server. If so, place the ESXi host in maintenance mode and remove from Inventory before
proceeding with the steps.
To remove license keys:
1. Log in to the vSphere Client.
2. Click Home.
3. Under the Administration section, click the Licensing icon.
4. Click Manage vSphere Licenses.
5. Click Next twice.
6. Choose the license key you want to remove.
7. Click Next to proceed to the Confirm Changes page. You can review your changes on the Confirm Changes page before applying them to your inventory.
8. Click Finish to apply all of the changes.
Changing License Keys
To license your product with a different license key:
1. Log in to the vSphere Client.
2. Click Home.
3. Under the Administration section, click the Licensing icon.
4. Expand the product you want to change the license for.
5. Right-click on the product and choose Change License Key.
6. Choose the license you want to use to license the product.
Note: From this dialog, you can place a product in Evaluation Mode during the first 60 days.
Licensing a standalone ESXi host
To license a standalone ESXi 5.x (vSphere Hypervisor):
1. Log in to the ESXi host using vSphere Client.
2. Click the Configuration tab.
3. Click Licensed Features under Software.
4. Click Edit under Licensed Features.
5. Select Assign a new license key to this host.
6. Press Enter and enter the License Key.


7. Click OK.
Licensing vCenter Server
To license vCenter Server 5.x:
1. Log in to the vSphere Client.
2. Click Home.
3. Under the Administration section, click vCenter Server settings.
4. Select Assign a new license key to this vCenter Server and click OK.
5. Enter the license key for the vCenter Server and, if necessary, include labels.
6. Click Next and Finish.
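As a hedged alternative for a standalone host, the license key can also be assigned from the ESXi shell with vim-cmd; the exact sub-command syntax can vary by build and the key below is a placeholder, so treat this as a sketch rather than a reference:
# vim-cmd vimsvc/license --set AAAAA-BBBBB-CCCCC-DDDDD-EEEEE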
How to configure a cluster?
How to do network load balancing?
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006778
NIC teaming in ESXi and ESX
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
Vmware path selection?
To change the default path selection policy for any new storage for a Storage Array Type
Plug-in (SATP):

Log into the ESXi/ESX host.

To check the existing path selection policy:

In ESXi 5.x:
Run one of these commands:
# esxcfg-info | grep -A1 "Default Path Selection Policy"
# esxcli storage nmp satp list

Run this command to change the default pathing policy:

In ESXi 5.x:
# esxcli storage nmp satp set --default-psp=policy --satp=your_SATP_name


Where policy is:


VMW_PSP_MRU for Most Recently Used mode
VMW_PSP_FIXED for Fixed mode
VMW_PSP_RR for Round Robin mode

Reboot the ESXi/ESX host to apply the changes.

To get the current SATP, use one of these options:


Run this command:
In ESXi 5.x
# esxcli storage nmp satp list
Follow step 4 in the vSphere Client section under ESXi 5.x and ESX 4.x in Obtaining LUN
pathing information for ESX or ESXi hosts (1003973).
Note: By default, VMware ESX uses the recommended failover path policy for the storage
array connected. If the configured policy is not listed for the SAN array's entry in the
Hardware Compatibility List, you may experience problems.
Warning: Changing the default pathing policy for a specific SATP when there are multiple
storage arrays using the same plug-in can cause other issues, such as incorrect pathing
policies and unexpected storage failover results.
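To change the policy for a single LUN rather than for a whole SATP, there is also a per-device command. The naa identifier below is a placeholder; list your devices first and substitute your own:
# esxcli storage nmp device list
# esxcli storage nmp device set --device naa.60060160a0b0320099f23c5be161e011 --psp VMW_PSP_RR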
swap size?
Tuning ESXi Host Memory Configuration
This post aims to cover the "Tune ESXi Host Memory Configuration" objective in the VCAP-DCA blueprint. Memory management and configuration is a huge subject which can't be covered in a single post, so the aim here is to cover some of the main configuration options and features relating to memory management, hopefully with some useful examples. I'll also include some links to some useful documents on memory management and configuration.

Memory Management Techniques

There are a number of methods through which an ESXi host can reduce the amount of physical memory allocated to a virtual machine.

 Page Sharing: ESXi is able to share memory pages between virtual machines, eliminating redundant pages.

 Ballooning: ESXi can use ballooning to force a VM to give up memory pages that the guest OS considers least valuable. VMtools is required, as it includes the vmmemctl module which makes ballooning possible. The guest OS must also be configured with sufficient swap space.

 Memory Compression: If there is a danger of host-level swapping, ESXi will use memory compression to reduce the number of pages that it needs to swap out.

 Swap to Host Cache: If compression doesn't reduce memory usage sufficiently, ESXi will reclaim memory by swapping memory pages to the host cache. Host cache is stored on SSD, so it is faster than regular swapping (where the files are generally stored on non-SSD devices).

 Regular Swapping: If there is no host cache configured, or it is full, ESXi will swap out pages to the virtual machine swap file. If this occurs, there is likely to be severe performance degradation.

Virtual Machine Swap Files

When a virtual machine is powered on, the host creates a swap file for the virtual machine. The size of this swap file is equal to the difference between the virtual machine's configured memory and the memory reservation for the VM (if one is set).
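For example, a VM configured with 8 GB of RAM and a 2 GB memory reservation gets a 6 GB .vswp file; with a full 8 GB reservation the swap file would be zero bytes, and with no reservation it would be the full 8 GB.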
By default, virtual machine swap files are created in the virtual machine's working directory (the same location as the .vmx file). However, it is possible to change the default location. The first thing to do is to configure the cluster swap file setting.

Bear in mind that if you choose to store the swap files in a specified datastore rather than with the virtual machines, then vMotion performance can be degraded. For example, you may choose to store swap files on local datastores.
After updating the cluster settings, you need to configure the host setting. This is found under the Configuration tab for the host. Click on Virtual Machine Swapfile Location.

Again, by default the swap files are created in the virtual machine's working directory. To configure a specific swapfile location, click Edit.

After choosing the datastore to store swap files on, click OK. This setting is specific to the host, so you will need to make the change on all hosts in the cluster.
Swap file location can be overridden on a per-virtual-machine basis by setting the option in the virtual machine's settings.

VMX Swap Files

Along with the swap files detailed above, with ESXi 5 a second swap file is used for every virtual machine (that has a guest hardware version of 7 or above). This swap file is dedicated to memory overhead for the virtual machine and is used when the host is under memory constraint. You can see this swap file in the datastore browser, prefixed with vmx.

The amount of memory overhead required is determined by a number of different factors, including the number of vCPUs allocated, the amount of RAM, and whether 3D support is enabled.

Host Cache Configuration

Datastores that are created on SSD storage devices can be used to allocate space for the host cache. You can read a lot more about this feature here.
To configure the host cache, go to the host's Configuration tab in the vSphere Client and click Host Cache Configuration. You should see any existing datastores that reside on SSD storage. If you have yet to create a datastore on your SSD storage, you can create one from this pane by clicking Add Storage.

To configure the host cache, select a datastore and click Properties.

After clicking OK and refreshing the storage, a number of .vswp files will be created on the datastore.

Memory Sharing Configuration

You can use advanced settings to configure the rate at which the host scans memory to look for redundant pages. The following two advanced settings can be changed:

Mem.ShareScanTime
Mem.ShareScanGHz

You can also disable memory sharing on a per-virtual-machine basis by setting the following setting to false:

sched.mem.pshare.enable

This setting is found under Configuration Parameters in the advanced settings for a virtual machine.
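The host-level options are normally edited in the vSphere Client, but as a sketch they can also be read and set from the ESXi shell (the value below is illustrative, not a recommendation):
# esxcli system settings advanced list -o /Mem/ShareScanGHz
# esxcli system settings advanced set -o /Mem/ShareScanGHz -i 6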

Configuring Memory Compression

Memory compression is enabled by default; however, it can be disabled by changing the following advanced settings:

 Mem.MemZipEnable (change to enable/disable memory compression)

 Mem.MemZipMaxPct (change to set the maximum percentage of a VM's memory that can be compressed; the default value is 10)
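A hedged shell equivalent, assuming the /Mem option paths as on ESXi 5.x (0 disables compression, 1 enables it):
# esxcli system settings advanced set -o /Mem/MemZipEnable -i 0
# esxcli system settings advanced set -o /Mem/MemZipMaxPct -i 10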

57 | P a g e

Memory Tax for Idle Virtual Machines

Idle virtual machine memory can be reclaimed using the balloon driver. The host will identify the virtual machines with the largest amounts of idle memory and begin to reclaim from them. You can change the idle memory tax rate using the Mem.IdleTax setting.
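For example, to raise the idle tax from its default of 75 percent via the shell (option path assumed as on ESXi 5.x):
# esxcli system settings advanced set -o /Mem/IdleTax -i 90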
VMKernel Memory
The Mem.MinFreePct advanced setting is used to set the minimum percentage of host memory that should be kept free.

Prior to vSphere 5 this value was always set at 6%; however, on hosts with a large amount of memory this meant that a lot of RAM was being unnecessarily reserved for VMkernel tasks. For example, for a host with 256 GB of RAM, more than 15 GB would be reserved.
With vSphere 5, this value is more of a sliding scale. For example, rather than always leaving 6% free, for hosts with 4 to 12 GB of RAM, 4% is kept free for the VMkernel, and for hosts with more than 12 GB of RAM, 2% of RAM is kept available.
Whatever the value is set to defines when the host begins to reclaim memory using ballooning or swapping. Within the free memory, there are a number of thresholds at which the host will use different methods to reclaim memory. Using figures from vSphere 4, the following will take place:

 6 percent free (High): begin ballooning.
 4 percent free (Soft): ballooning, and begin compressing virtual memory.
 2 percent free (Hard): VM swapping.
 1 percent free (Low): no new pages are provided to virtual machines.

You can read more about this concept here.

Memory Management Best Practices

VMware highlights a number of best practices in this document, including:

 Do not disable the page sharing or balloon driver.
 Set appropriate reservations.
 Host memory should be larger than guest memory usage.
 Set an appropriate virtual machine memory size.

What is ballooning?
ESXi can use ballooning to force a VM to give up memory pages that the guest OS considers least valuable. VMtools is required, as it includes the vmmemctl module which makes ballooning possible. The guest OS must also be configured with sufficient swap space.
Thin provisioning?
Thin provisioning?
Installing ESXi server in a blade server?
Installing ESXi - boot from SAN:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2052329
What is SRM?
VMware vCenter Site Recovery Manager 5.5 adds the following new features and improvements:

 Use Storage DRS and Storage vMotion on sites that SRM protects:
  o vSphere Replication supports movement of virtual machines by Storage DRS and Storage vMotion on the protected site. See Using SRM with vSphere Replication on Sites with Storage DRS or Storage vMotion.
  o Array-based replication supports movement of virtual machines by Storage DRS and Storage vMotion within a consistency group. See Using SRM with Array-Based Replication on Sites with Storage DRS or Storage vMotion.

 Preserve multiple point-in-time (PIT) images of virtual machines that are protected with vSphere Replication. See Replicating a Virtual Machine and Enabling Multiple Point in Time Instances.

 Protect virtual machines that reside on VMware vSphere Flash Read Cache storage. vSphere Flash Read Cache is disabled on virtual machines after recovery.
How to config SRM?


What is FT?


 VMware Fault Tolerance provides continuous availability to applications running in a virtual machine, preventing downtime and data loss in the event of server failures.

 VMware Fault Tolerance, when enabled for a virtual machine, creates a live shadow instance of the primary, running on another physical server.

 The two instances are kept in virtual lockstep with each other using VMware vLockstep technology.

 The two virtual machines play the exact same set of events, because they get the exact same set of inputs at any given time.

 The two virtual machines constantly heartbeat against each other, and if either virtual machine instance loses the heartbeat, the other takes over immediately. The heartbeats are very frequent, with millisecond intervals, making the failover instantaneous with no loss of data or state.

 VMware Fault Tolerance requires a dedicated network connection, separate from the VMware vMotion network, between the two physical servers.

What is difference between HA and FT?


The key difference between VMware's Fault Tolerance (FT) and High Availability (HA)
products is interruption to virtual machine (VM) operation in the event of an ESX/ESXi host
failure. Fault-tolerant systems instantly transition to a new host, whereas high-availability
systems will see the VMs fail with the host before restarting on another host.
VMware High Availability
VMware High Availability should be used to maintain uptime on important but non-mission-critical VMs. While HA cannot prevent VM failure, it will get VMs back up and running with very little disturbance to the virtual infrastructure. Consider the value of HA for host failures that occur in the early hours of the morning, when IT is not immediately available to resolve the problem.
In addition to tending to VMs during ESX/ESXi host failure, VMware High Availability can monitor and restart a VM, ensuring the machine is capable of restarting on a new host with enough resources.
VMware Fault Tolerance
VMware vSphere Fault Tolerance has been around since 2009. If your company cannot
withstand downtime for end users, VMware FT or a similar tool is required. Don't use FT for
load balancing -- its role is protecting VMs when an ESX server goes down.
How does VMware FT work?


VMware FT instantly moves VMs to a new host via vLockstep, which keeps a secondary
VM in sync with the primary, ready to take over at any second, like a Broadway
understudy. The VM's instructions and instruction sequence are the actor's lines, which
pass to the understudy on a dedicated server backbone network. Heartbeats ping
between the star and understudy on this backbone as well, for instantaneous detection of
a failure.
How and when to use VMware FT
So your company's IT resources are mission-critical, and unplanned downtime is out of the question. Ramp up fault tolerance tools and you're done, right? Not so fast. VMware FT has stringent hardware requirements to take into account when requisitioning server hardware. Before you plan a fault-tolerant virtualized environment, check out your options for when and where to use FT.
How do I turn on FT?
The feature is enabled on a per virtual machine basis. Instructions for enabling Fault
Tolerance can be found in the Turning on Fault Tolerance for Virtual Machines section of
the vSphere Availability Guide for your version of ESXi/ESX.
What happens when I turn on Fault Tolerance?
In very general terms, a second virtual machine is created to work in tandem with the virtual
machine you have enabled Fault Tolerance on. This virtual machine resides on a different
host in the cluster, and runs in virtual lockstep with the primary virtual machine. When a
failure is detected, the second virtual machine takes the place of the first one with the least
possible interruption of service. More specific information about how this is achieved can be
found in the Protecting Mission-Critical Workloads with VMware Fault Tolerance whitepaper.
Why can't I turn Fault Tolerance on?
VMware Fault Tolerance can be enabled on any virtual machine that resides in a cluster that
meets the necessary requirements. If you have difficulty enabling Fault Tolerance for a
specific virtual machine, see The Turn on Fault Tolerance option is disabled (1010631).
How do I turn Fault Tolerance off?
For Instructions on disabling Fault Tolerance, see Disabling or Turning Off VMware FT
(1008026).
How do I tell if my environment is ready for Fault Tolerance?
The VMware SiteSurvey Tool is used to check your environment for compliance with VMware
Fault Tolerance. It can be downloaded from the VMware Shared Utilities page.
Where do I find the product's website?
VMware has a website for Fault Tolerance on the VMware vSphere page.
What happens during a failure?
When a host running the Primary virtual machine fails, a transparent failover occurs to the
corresponding Secondary virtual machine. During this failover, there is no data loss or
noticeable service interruption. In addition, VMware HA automatically restores redundancy
by restarting a new Secondary virtual machine on another host. Similarly, if the host running the Secondary virtual machine fails, VMware HA starts a new Secondary virtual machine on
a different host. In either case there is no noticeable outage.

What is the logging time delay between the Primary and Secondary Fault Tolerance virtual
machines?
The actual delay is based on the network latency between the Primary and Secondary. vLockstep executes the same instructions on the Primary and Secondary, but because this happens on different hosts, there can be a small latency, but no loss of state. This is typically less than 1 millisecond (ms). Fault Tolerance includes synchronization to ensure that the Primary and Secondary are synchronized.

In a cluster with more than 3 hosts, can you tell Fault Tolerance where to put the Fault
Tolerance virtual machine or does it chose on its own?
You can place the original (or Primary virtual machine). You have full control with DRS or
vMotion to assign it to any node. The placement of the Secondary, when created, is
automatic based on the available hosts. But when the Secondary is created and placed, you
can vMotion it to the preferred host.
What happens if the host containing the Primary virtual machine comes back online (after a
node failure)?
This node is put back in the pool of available hosts. There is no attempt to start or migrate
the Primary to that host.
Is the failover from the Primary virtual machine to the Secondary virtual machine dynamic or
does Fault Tolerance restart a virtual machine?
The failover from the Primary to Secondary virtual machine is dynamic with the Secondary
continuing execution from the exact point where the Primary left off. It happens
automatically with no data loss, no downtime, and little delay. Clients see no interruption.
After the dynamic failover to the Secondary virtual machine, it becomes the new Primary
virtual machine. A new Secondary virtual machine is spawned automatically.
Where are Fault Tolerance failover events logged?
All failover events are logged by vCenter Server.
I encountered an error message that I can't find in the Knowledge Base. Where else should I
check?
The vSphere Availability Guide contains a list of known errors in the Fault
Tolerance Error Messages.
Does Fault Tolerance support Intel Hyper-Threading Technology?
Yes, Fault Tolerance does support Intel Hyper-Threading Technology on systems that have it
enabled. Enabling or disabling Hyper-Threading has no impact on Fault Tolerance.
What happens if vCenter Server is offline when a failover event occurs?
When Fault Tolerance is configured for a virtual machine, vCenter Server need not be online
for FT to work. Even if vCenter Server is offline, failover still occurs from the Primary to the
Secondary virtual machine. Additionally, the spawning of a new Secondary virtual machine
also occurs without vCenter Server.
How many virtual CPUs can I use on a Fault Tolerant virtual machine ?


vCenter Server 4.x and vCenter Server 5.x support 1 virtual CPU per protected virtual
machine.
New product features in VMware FT and HA
With a major overhaul to HA in vSphere 5 and murmurs of a soon-to-be-released new
feature, we share some key points to know about VMware FT and HA road maps.
What's new? Faster failover in VMware HA, but no FT for SMP
VMware is planning a new high-availability design for release in 2013, called Virtual Machine
Component Protection. Choosing a VM within a host to vMotion according to specific failover
conditions improves failover.
Unlike HA, VMware FT uses synchronous replication to prevent any service interruption in the
event of a VM failure. Mission-critical applications need fault tolerance, but despite user
interest, FT for symmetric multiprocessing systems (SMP) seems stuck in a VMware preview
purgatory.
High availability in a heartbeat
VMware instituted new intelligence for High Availability in vSphere 5. If the cluster's master host becomes unavailable or orphaned from the network, an election process takes over to prevent false-positive failovers. If a host becomes orphaned from the cluster in vSphere 5's HA, the storage network is available as a backup. The admin can choose the heartbeat datastores in the HA clustering dialog boxes.
Goodbye Legato, hello Fault Domain Manager
VMware also revamped the HA architecture in vSphere 5. Fault Domain Manager (FDM) took
over for Legato Automated Availability Manager software, which was frustratingly complex.
Now, admins have one master server, with all other servers in the HA cluster waiting in the
wings to help in the event of a failure. If you're switching to vSphere 5 from an older version,
make sure you have at least two shared data stores between all hosts in the HA cluster.
What other changes can you expect? Heartbeats, simpler log and configuration files, and
installs in under a minute, thanks to FDM.
The nitty gritty of vSphere 5's HA and FT setup
With the move from Legato to FDM comes major HA architecture changes, even if the "look
and feel" will be familiar to legacy users. Learn the responsibilities of Master and Slave hosts
in a cluster. This tip also covers important tips for using FT now that it is properly compatible
with VMware's Distributed Resource Scheduler (DRS).
Difference between HA and DRS?
Difference between FT and SRM ?
Cluster maximum?
What is a snapshot?


A snapshot preserves the state and data of a virtual machine at a specific point in time.

 The state includes the virtual machine's power state (for example, powered on, powered off, suspended).

 The data includes all of the files that make up the virtual machine. This includes disks, memory, and other devices, such as virtual network interface cards.

A virtual machine provides several operations for creating and managing snapshots and
snapshot chains. These operations let you create snapshots, revert to any snapshot in the
chain, and remove snapshots. You can create extensive snapshot trees.
In VMware Infrastructure 3 and vSphere 4.x, the virtual machine snapshot delete operation combines the consolidation of the data and the deletion of the file. This caused issues when the snapshot files were removed from the Snapshot Manager but the consolidation failed, leaving the VM still running on snapshots, which the user might not notice until the datastore was full.
In vSphere 4.x, an alarm can be created to indicate if a virtual machine was running in
snapshot mode. For more information, see Configuring VMware vCenter Server to send
alarms when virtual machines are running from snapshots (1018029).
In vSphere 5.0, enhancements have been made to the snapshot removal. In vSphere 5.0,
you are informed via the UI if the consolidation part of a RemoveSnapshot or
RemoveAllSnapshots operation has failed. A new option, Consolidate, is available via the
Snapshot menu to restart the consolidation.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1015180
How to change the default snapshot location in VMware ESXi 5?
By default, the snapshots taken for any VM are stored with their parent in the same directory or storage. Sometimes you may run out of space and might not be able to take any more snapshots, so in that case you can use some other location for the storage of snapshots.
But how will you change the default location of all the snapshots which will be taken for any VM?
These are the required steps:
NOTE: Please ensure that the VM you are working on is powered OFF.
Right Click the vm and select Edit Settings


Click on Options from the top tab, select General, and open the Configuration Parameters.

Add a new row with the following details:

snapshot.redoNotWithParent
Save this parameter with a value of "true" as shown below.

Now open the CLI of the host where the vm is located


Go to the vm's parent directory where all the vm files are stored and open the main .vmx file
As in my case
# cd /vmfs/volumes/50925c85-54a206c1-a9e5-d4ae526b9890/test_XP
# vi test_XP.vmx
Now add this line anywhere in the .vmx file with the path location where you want your
snapshots to be stored
workingDir = "/vmfs/volumes/50925be7-ea8ab367-d40d-d4ae526b9890/snapshots"
Save the file and exit
Now you need to reload this vm to make the changes take effect.
# vim-cmd vmsvc/getallvms | grep test_XP
56 test_XP [iSCSI-Datastore15] test_XP/test_XP winXPProGuest vmx-07
Here 56 is the vm id which you can find out using the above command
# vim-cmd vmsvc/reload 56
Now when you take snapshots the snapshot files and vm swap files will be created in a
different location.
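To verify the change, you can take a test snapshot from the shell and confirm the files land in the new location (VM ID 56 comes from the getallvms output above; the snapshot name and description are arbitrary):
# vim-cmd vmsvc/snapshot.create 56 testsnap "verify workingDir" 0 0
# ls /vmfs/volumes/50925be7-ea8ab367-d40d-d4ae526b9890/snapshots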


How to redirect vm's swap file


When workingDir is redirected as above, the VM swap file moves with it. If you do not want the swap file redirected and would rather pin it to a specific directory (for example, the VM's own parent directory), add an extra parameter in the Configuration Parameters option shown above:
sched.swap.dir="<path_to_vm_directory>"
For example:
/vmfs/volumes/50925be7-ea8ab367-d40d-d4ae526b9890/vmswap
Save the settings and exit. Now each time you take a snapshot, the snapshot files will go to the redirected location while the VM swap file stays at the location you specified.
How to link the vcenters?
ESXi log files ?
Documentation contents referenced below for vSphere 5.1 and 5.5 are the same and can be used interchangeably.
You can review ESXi 5.1 and 5.5 host log files using these methods:

 From the Direct Console User Interface (DCUI). For more information, see About the Direct Console ESXi Interface in the vSphere 5.5 Installation and Setup Guide.

 From the ESXi Shell. For more information, see the Log In to the ESXi Shell section in the vSphere 5.5 Installation and Setup Guide.

 Using a web browser at https://HostnameOrIPAddress/host. For more information, see the HTTP Access to vSphere Server Files section.

 Within an extracted vm-support log bundle. For more information, see Export System Log Files in the vSphere Monitoring and Performance Guide or Collecting diagnostic information for VMware ESX/ESXi using the vm-support command (1010705).

 From the vSphere Web Client. For more information, see Viewing Log Files with the Log Browser in the vSphere Web Client in the vSphere Monitoring and Performance Guide.

ESXi 5.1 Host Log Files

Logs for an ESXi 5.1 host are grouped according to the source component:

 /var/log/auth.log: ESXi Shell authentication success and failure.

 /var/log/dhclient.log: DHCP client service, including discovery, address lease requests and renewals.

 /var/log/esxupdate.log: ESXi patch and update installation logs.

 /var/log/lacp.log: Link Aggregation Control Protocol logs.

 /var/log/hostd.log: Host management service logs, including virtual machine and host Tasks and Events, communication with the vSphere Client and vCenter Server vpxa agent, and SDK connections.

 /var/log/hostd-probe.log: Host management service responsiveness checker.

 /var/log/rhttpproxy.log: HTTP connections proxied on behalf of other ESXi host webservices.

 /var/log/shell.log: ESXi Shell usage logs, including enable/disable and every command entered. For more information, see vSphere 5.5 Command-Line Documentation and Auditing ESXi Shell logins and commands in ESXi 5.x (2004810).

 /var/log/sysboot.log: Early VMkernel startup and module loading.

 /var/log/boot.gz: A compressed file that contains boot log information and can be read using zcat /var/log/boot.gz|more.

 /var/log/syslog.log: Management service initialization, watchdogs, scheduled tasks and DCUI use.

 /var/log/usb.log: USB device arbitration events, such as discovery and pass-through to virtual machines.

 /var/log/vobd.log: VMkernel Observation events, similar to vob.component.event.

 /var/log/vmkernel.log: Core VMkernel logs, including device discovery, storage and networking device and driver events, and virtual machine startup.

 /var/log/vmkwarning.log: A summary of Warning and Alert log messages excerpted from the VMkernel logs.

 /var/log/vmksummary.log: A summary of ESXi host startup and shutdown, and an hourly heartbeat with uptime, number of virtual machines running, and service resource consumption. For more information, see Format of the ESXi 5.0 vmksummary log file (2004566).

 /var/log/Xorg.log: Video acceleration.
Note: For information on sending logs to another location (such as a datastore or remote
syslog server), see Configuring syslog on ESXi 5.0 (2003322).
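As an illustration of the remote-logging note above, on ESXi 5.x a remote syslog target can be set from the shell with esxcli; a minimal sketch (the loghost address and port are placeholders, and the syslog firewall ruleset must also be opened):

~ # esxcli system syslog config set --loghost='udp://192.168.1.10:514'
~ # esxcli system syslog reload                                          # apply the new configuration
~ # esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true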
Logs from vCenter Server Components on ESXi 5.1 and 5.5
When an ESXi 5.1 / 5.5 host is managed by vCenter Server 5.1 or 5.5, two components are installed, each with its own logs:
- /var/log/vpxa.log: vCenter Server vpxa agent logs, including communication with vCenter Server and the Host Management hostd agent.
- /var/log/fdm.log: vSphere High Availability logs, produced by the fdm service. For more information, see the vSphere HA Security section of the vSphere Availability Guide.
VMware services? How to restart the services?
From the Local Console or SSH:
1. Log in to SSH or the local console as root.
2. Run these commands:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
To confirm that hostd is running, run this command:
ps -s | grep hostd
You should see output similar to:
456566 2878 hostd-worker WAIT UFUTEX 0-7 hostd
456567 2878 hostd-worker WAIT UFUTEX 0-7 hostd
To reset the management network on a specific VMkernel interface, by default vmk0, run the command:
esxcli network ip interface set -e false -i vmk0; esxcli network ip interface set -e true -i vmk0
Note: Using a semicolon (;) between the two commands ensures the VMkernel interface is disabled and then re-enabled in succession. If the management interface is not running on vmk0, change the above command according to the VMkernel interface used.
To restart all management agents on the host, run the command:
services.sh restart
Caution:
- Check if LACP is enabled on the DVS for version 5.x and above. For more information, see the vSphere 5.0 Networking Guide.
- If LACP is not configured, the services.sh script can be safely executed.
- If LACP is enabled and configured, do not restart management services using the services.sh script; instead, restart the independent services using the /etc/init.d/module restart command.
- If the issue is not resolved and you have to restart all the services that are part of the services.sh script, schedule downtime before proceeding with the script.
Note: For more information about restarting the management service on an ESXi host, see Service mgmt-vmware restart may not restart hostd in ESX/ESXi (1005566).
Restarting the Management agents on ESX
To restart the management agents on an ESX host:
1. Log in to your ESX host as root from either an SSH session or directly from the console.
2. Run this command:
service mgmt-vmware restart
Caution: Ensure Automatic Startup/Shutdown of virtual machines is disabled before running this command or you risk rebooting the virtual machines. For more information, see Restarting hostd (mgmt-vmware) on ESX hosts restarts hosted virtual machines where virtual machine Startup/Shutdown is enabled (1003312) and Determining whether virtual machines are configured to autostart (1000163).
3. Press Enter.
4. Run this command:
service vmware-vpxa restart
5. Press Enter.
6. Type logout and press Enter to disconnect from the ESX host.
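Whichever host type you restarted the agents on, a quick sanity check is to ask the management agent for its inventory again; if the command returns the VM list, hostd is responding. A minimal check (vim-cmd on ESXi, vmware-cmd on classic ESX):

~ # vim-cmd vmsvc/getallvms   # ESXi: succeeds only if hostd is back up
# on classic ESX from the service console:
# vmware-cmd -l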
How to troubleshoot vMotion?
Validate that each troubleshooting step below is true for your environment. Each step provides instructions or a link to a document, in order to eliminate possible causes and take corrective action as necessary. The steps are ordered in the most appropriate sequence to isolate the issue and identify the proper resolution. Do not skip a step.
1. Ensure that vMotion is enabled on all ESX/ESXi hosts. For more information, see Enabling vMotion and Fault Tolerance logging (1036145). Determine if resetting the Migrate.Enabled setting on both the source and destination ESX or ESXi hosts addresses the vMotion failure. For more information on this issue, see vMotion fails at 10% with the error: A general system error occurred: Migration failed while copying data, Broken Pipe (1013150).
2. Verify that VMkernel network connectivity exists using vmkping. For more information, see Testing VMkernel network connectivity with the vmkping command (1003728).
3. Verify that the VMkernel networking configuration is valid. For more information, see ESX/ESXi power on error: Unable to set VMkernel gateway as there are no VMkernel interfaces on the same network (1002662).
4. Verify that the virtual machine is not configured to use a device that is not valid on the target host. For more information, see Troubleshooting migration compatibility error: Device is a connected device with a remote backing (1003780).
5. If Jumbo Frames are enabled (MTU of 9000; 9000 - 8 bytes (ICMP header) - 20 bytes (IP header) leaves a payload of 8972), ensure that vmkping is run as vmkping -d -s 8972 <destinationIPaddress>. You may experience problems with the trunk between two physical switches that have been misconfigured to an MTU of 1500.
6. Verify that name resolution is valid on the host. For more information, see Identifying issues with and setting up name resolution on ESX/ESXi Server (1003735).
7. Verify that Console OS network connectivity exists. For more information, see Testing network connectivity with the ping command (1003486).
8. Verify whether reconnecting the ESX/ESXi host resolves the issue. For more information, see KB article Changing an ESXi or ESX host's connection status in vCenter Server (1003480).
9. Verify that the required disk space is available. For more information, see Investigating disk space on an ESX or ESXi host (1003564).
10. Verify that time is synchronized across the environment. For more information, see Verifying time synchronization across an ESX/ESXi host environment (1003736).
11. Verify that valid limits are set for the virtual machine being vMotioned. For more information, see VMware vMotion fails if target host does not meet reservation requirements (1003791).
12. Verify that hostd is not spiking the console. For more information, see Checking for resource starvation of the ESX Service Console (1003496).
13. This issue may be caused by SAN configuration. Specifically, this issue may occur if zoning is set up differently on different servers in the same cluster.
14. Verify and ensure that the log.rotateSize parameter in the virtual machine's configuration file is not set to a very low value. For more information, see vMotion fails at 10% with the error: Operation timed out (2007343).
Note: If the issue still exists after trying the steps in this article:
- Gather the VMware Support Script Data. For more information, see Collecting diagnostic information in a VMware Virtual Infrastructure Environment (1003689).
- File a support request with VMware Support and note this KB Article ID in the problem description. For more information, see How to Submit a Support Request.
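For steps 2 and 5 above, the connectivity tests boil down to two vmkping invocations from one host against the vMotion VMkernel IP of the other host; a minimal sketch (the IP address is a placeholder):

~ # vmkping 192.168.10.12              # basic VMkernel network reachability
~ # vmkping -d -s 8972 192.168.10.12   # jumbo frames: don't fragment, 8972-byte payload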
Virtual switch port types?
Types of port binding
These three different types of port binding determine when ports in a port group are assigned to virtual machines:
- Static Binding
- Dynamic Binding
- Ephemeral Binding
Static binding
When you connect a virtual machine to a port group configured with static binding, a port is immediately assigned and reserved for it, guaranteeing connectivity at all times. The port is disconnected only when the virtual machine is removed from the port group. You can connect a virtual machine to a static-binding port group only through vCenter Server.
Note: Static binding is the default setting, recommended for general use.
Dynamic binding
In a port group configured with dynamic binding, a port is assigned to a virtual machine only when the virtual machine is powered on and its NIC is in a connected state. The port is disconnected when the virtual machine is powered off or the virtual machine's NIC is disconnected. Virtual machines connected to a port group configured with dynamic binding must be powered on and off through vCenter.
Dynamic binding can be used in environments where you have more virtual machines than available ports, but do not plan to have a greater number of virtual machines active than you have available ports. For example, if you have 300 virtual machines and 100 ports, but never have more than 90 virtual machines active at one time, dynamic binding would be appropriate for your port group.
Note: Dynamic binding is deprecated from ESXi 5.0, but this option is still available in the vSphere Client. It is strongly recommended to use Static Binding for better performance.
Ephemeral binding
In a port group configured with ephemeral binding, a port is created and assigned to a virtual machine by the host when the virtual machine is powered on and its NIC is in a connected state. The port is deleted when the virtual machine is powered off or the virtual machine's NIC is disconnected.
You can assign a virtual machine to a distributed port group with ephemeral port binding on ESX/ESXi and vCenter, giving you the flexibility to manage virtual machine connections through the host when vCenter is down. Although only ephemeral binding allows you to modify virtual machine network connections when vCenter is down, network traffic is unaffected by vCenter failure regardless of port binding type.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1022312
How to change the root password on an ESXi host?
1. Log into the ESXi/ESX host service console, either via SSH or the physical console.
2. If you did not log in as root, you must acquire root privileges by running the command:
su -
Enter the current root password when prompted.
3. Change the root password by executing:
passwd root
4. Enter the new root password, and press Enter. Enter the password a second time to verify. You are warned about, but not prevented from using, bad passwords.
If you make a mistake when typing or retyping the new root password, you must start over. For example:
# passwd root
Changing password for user root.
New UNIX password:
Retype new UNIX password:
Sorry, passwords do not match.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
#
Not able to access the server via putty?
Symptoms
- Unable to add the ESXi host to vCenter Server.
- Cannot connect to the ESXi host using SSH; connecting to the ESXi host using SSH fails.
- You see the error:
  Cannot contact the specified host (IP Address/hostname). The host may not be available on the network, a network configuration problem may exist, or the management service on this host may not be responding.
- Cannot connect to the ESXi host using putty. When connecting to the ESXi host using putty, you see the error:
  Network error: Connection Refused
- You have enabled Tech Support Mode. For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910).
Cause
This issue may occur if the /etc/inetd.conf file is empty or does not contain the correct settings for remote shell access and the VMware authentication daemon.
Note: In ESXi 5.0, the inetd.conf file is located at /var/run/.
Resolution
To resolve this issue:
1. Connect to the ESXi console directly or using a remote console (KVM) interface, such as iLO, iLOM, DRAC, RSA, or IP KVM, press ALT+F1, and then log in as root.
2. Open the inetd.conf file using a text editor. To open the file using the vi editor, run this command:
# vi /etc/inetd.conf
3. Ensure that the contents of the /etc/inetd.conf file are similar to:
# Internet server configuration database
# Remote shell access
ssh stream tcp nowait root /sbin/dropbearmulti dropbear ++min=0,swap,group=shell -i -K60
ssh stream tcp6 nowait root /sbin/dropbearmulti dropbear ++min=0,swap,group=shell -i -K60
In ESXi 5.0, the contents under Remote shell access appear similar to:
ssh stream tcp nowait root /usr/lib/vmware/openssh/bin/sshd sshd ++swap,group=host/vim/vimuser/terminal/ssh -i
ssh stream tcp6 nowait root /usr/lib/vmware/openssh/bin/sshd sshd ++swap,group=host/vim/vimuser/terminal/ssh -i
# VMware authentication daemon
authd stream tcp nowait root /sbin/authd authd
authd stream tcp6 nowait root /sbin/authd authd
4. Save the changes made to the file.
Note: Alternatively, you can copy the inetd.conf file from a known good ESXi host using a utility such as WinSCP.
5. Run this command to restart the SSH daemon for the changes to take effect:
# /etc/init.d/TSM-SSH restart
Note: In ESXi 5.x, run this command:
/etc/init.d/SSH restart
VMware Update Manager process?
How P2V works?
http://www.experts-exchange.com/Software/VMWare/A_12358-HOW-TO-P2V-V2V-for-FREE-VMware-vCenter-Converter-Standalone-5-5.html
VMware Converter port numbers?
VMware vCenter Converter fails if one or more required ports are blocked. Follow the section that matches your conversion scenario.
In this article, the following terms are used:
- Source computer: The physical or virtual machine that is being converted.
- Converter server: The server portion of VMware vCenter Converter. In a typical installation, both the Converter server and Converter client are installed at the same location. By default, this is the installation method that is used.
- Converter client: The client portion of VMware vCenter Converter. In a custom installation, the Converter client can be installed to a different computer than the Converter server.
- VirtualCenter: The VirtualCenter computer that is being used as the conversion destination, if such was chosen.
- ESX: The VMware ESX host that is being used as the conversion destination, if one is chosen, or the ESX host that is hosting the target virtual machine.
- Fileshare path: The path to a virtual machine's .vmx file, if the source is an existing or standalone virtual machine, or the path to a directory if the destination is to be a standalone virtual machine.
- Standalone virtual machine: A virtual machine that is being managed by a VMware product other than VMware ESX.
- Helper virtual machine: When converting a powered on Linux operating system (P2V), this is the target virtual machine that is being used temporarily for the purpose of copying files from the source computer. It uses the TCP/IP information that is entered in the Converter wizard for the target virtual machine. Make sure that this IP address can communicate directly with the source computer.
Notes:
- If you perform a corrective action, determine if the problems initially encountered are still being experienced.
- To test port connectivity, do so from a command or shell prompt. For more information, see Opening a command or shell prompt (1003892).
- To test TCP port connectivity, use the telnet command. For more information, see Testing port connectivity with Telnet (1003487).
- To test UDP port connectivity from Linux or MacOS, use the traceroute command. For more information, see a traceroute man page.
- To test UDP port connectivity from Windows, use the Portqry utility. For more information, see the Microsoft Knowledge Base article 310099.
Note: These links were correct as of March 15, 2009. If you find a link is broken, provide feedback and a VMware employee will update the link.
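As a concrete illustration of the TCP tests mentioned in the notes above, from the Converter server you can probe each required port with telnet; the hostnames below are placeholders:

C:\> telnet esxhost.example.com 902     # agent/data traffic to the ESX/ESXi host
C:\> telnet vcenter.example.com 443     # HTTPS to VirtualCenter/vCenter

A connected (blank) screen means the port is open; "Connection refused" or a timeout suggests the port is blocked.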
Converting a powered on Windows operating system (P2V)

Converter server -> Source computer
  TCP ports: 445, 139, 9089 or 9090
  UDP ports: 137, 138
  Notes: If the source computer uses NetBIOS, port 445 is not required. If NetBIOS is not being used, ports 137, 138, and 139 are not required. If in doubt, make sure that none of the ports are blocked. Port 9089 is used for Converter Standalone versions, and 9090 is used for the Converter plugin. Note: Unless you have installed Converter server to the source computer, the account used for authentication to the source computer must have a password, the source computer must have network file sharing enabled, and it cannot be using Simple File Sharing.

Converter server -> VirtualCenter
  TCP ports: 443, 902
  Notes: Only required if the conversion target is VirtualCenter.

Converter client -> Converter server
  TCP ports: 443
  Notes: Only required if a custom installation was performed and the Converter server and client portions are on different computers.

Source computer -> ESX/ESXi
  TCP ports: 443, 902
  Notes: If the conversion destination is vCenter Server, only port 902 is required from the source to the ESX/ESXi hosts.
Converting a powered on Linux operating system (P2V)

Converter server -> Source computer
  TCP ports: 22
  Notes: The Converter server must be able to establish an SSH connection with the source computer.

Converter client -> Converter server
  TCP ports: 443
  Notes: Only required if a custom installation was performed and the Converter server and client portions are on different computers.

Converter server -> VirtualCenter
  TCP ports: 443, 902
  Notes: Only required if the conversion target is VirtualCenter.

Converter server -> ESX/ESXi
  TCP ports: 443, 902, 903
  Notes: If the conversion destination is vCenter Server, only port 902 is required from the source to the ESX/ESXi hosts.

Converter server -> Helper virtual machine
  TCP ports: 443

Helper virtual machine -> Source computer
  TCP ports: 22
  Notes: The helper virtual machine must be able to establish an SSH connection with the source computer. By default the helper virtual machine gets its IP address assigned by DHCP. If there is no DHCP server available on the network chosen for the target virtual machine, you must manually assign it an IP address.
Converting an existing virtual machine (V2V)

Converter server -> Fileshare path
  TCP ports: 445, 139
  UDP ports: 137, 138
  Notes: This is only required for standalone virtual machine sources or destinations. If the computer hosting the source or destination path uses NetBIOS, port 445 is not required. If NetBIOS is not being used, ports 137, 138, and 139 are not required. If in doubt, make sure that none of the ports are blocked.

Converter client -> Converter server
  TCP ports: 443
  Notes: Only required if a custom installation was performed and the Converter server and client portions are on different computers.

Converter server -> VirtualCenter
  TCP ports: 443, 902
  Notes: Only required if the target is VirtualCenter.

Converter server -> ESX/ESXi
  TCP ports: 443, 902
  Notes: If the conversion destination is vCenter Server, only port 902 is required from the source to the ESX/ESXi hosts.
VMware Converter requirements?
VMware Converter steps?
How to configure VMsafe? XXXXXXX
Difference between VI and vSphere Client? XXXXXX
vCenter services?
How to increase the C: drive size?
Caution: VMware strongly recommends that you have backups in place before performing any disk partition operation. Also make sure the virtual machine has no snapshots before starting to extend the VMDK. If the virtual machine has snapshots, use "Delete all" from the Snapshot Manager to commit them. Verify again in the Snapshot Manager, in the Edit Settings dialog, and in the virtual machine datastore that the snapshots were committed.
To expand the VMDK and extend a partition:
1. Log into the VMware ESX/ESXi host as the root user. Verify that the virtual machine does not have any snapshots by going into the virtual machine's directory and looking for delta files. Run the command:
#ls -lah /vmfs/volumes/datastore_name/vm_name/*delta*
-rw------- 1 root root 1.8G Oct 10 10:58 vm_name-000001-delta.vmdk
Note: For more information on logging into ESX/ESXi, see the following:
- For more information on the VMware ESX Service Console, see Unable to connect to an ESX host using Secure Shell (SSH) (1003807).
- For more information on VMware ESXi Technical Support Mode, see Tech Support Mode for Emergency Support (1003677).
- For more information on VMware ESXi 4.1 and ESXi 5.0 Technical Support Mode, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
2. If the virtual machine does have snapshots, commit them using these commands:
#vmware-cmd -l
/vmfs/volumes/datastore_name/vm_name/vm_name.vmx
#vmware-cmd /vmfs/volumes/datastore_name/vm_name/vm_name.vmx removesnapshots
removesnapshots() = 1
Note: For committing snapshots on an ESXi 5.1 or later host, see Committing snapshots on ESXi host from command line (1026380).
3. Power off the virtual machine.
Note: The virtual machine can be powered on to increase the VMDK size in steps 4 and 5. However, ensure to power off the virtual machine after increasing the VMDK size.
4. To expand the VMDK using the VI Client (if the option exists), edit the settings of the virtual machine and click the hard disk you want to expand.
5. Enter a new value in the New Size field.
To expand the VMDK using the vmkfstools -X command, run the command:
#vmkfstools -X <New Disk Size> <VMDK to extend>
#vmkfstools -X 30G /vmfs/volumes/datastore_name/vm_name/vm_name.vmdk
Note: Ensure that you point to the vm_name.vmdk, and not to the vm_name-flat.vmdk. Using vmkfstools -X is the only option to expand an IDE virtual disk.
6. To extend the C: partition, find a helper virtual machine and attach the disk from the first virtual machine to the helper.
To add an existing virtual disk to the helper virtual machine:
a. Go to the Edit Settings menu of the virtual machine.
b. Click Add > Hard Disk > Use Existing Virtual Disk.
c. Navigate to the location of the disk and select to add it into the virtual machine.
Note: A helper virtual machine is a virtual machine that runs the same operating system as the disk you attach.
7. Start the helper virtual machine.
8. Verify the volume in question has been mounted and has been assigned a drive letter. This can be set in Windows Disk Management or by selecting the volume and typing assign from within the DiskPart command.
In versions of Windows prior to 2008, open a command prompt and run the DiskPart command:
C:\Documents and Settings\username>diskpart
Microsoft DiskPart version 5.1.3565
Copyright (C) 1999-2003 Microsoft Corporation.
On computer: USERNAME-HELPER-VM

DISKPART> list volume

  Volume ###  Ltr  Label  Fs    Type       Size   Status   Info
  ----------  ---  -----  ----  ---------  -----  -------  ------
  Volume 0    D           CD-ROM            0 B
  Volume 1    C           NTFS  Partition  30 GB  Healthy  System
  Volume 2    E           NTFS  Partition  10 GB  Healthy

DISKPART> select Volume 2
Volume 2 is the selected volume.
DISKPART> extend disk=2
DiskPart successfully extended the volume.
DISKPART> exit
Leaving DiskPart...

Note: Where 2 above is the disk volume number of the volume to extend.
Note: Ensure to choose the correct volume. The Size shown is the old value.
Note: If you are in Windows 2003 and you see the error The volume you have selected may not be extended. Please select another volume and try again, see the Microsoft Knowledge Base article 841650.
In Windows 2008, click Start > Computer Management > Disk Manager, right-click the partition and select Extend Volume. For more information, see the Microsoft Knowledge Base article 325590.
Note: The preceding links were correct as of March 14, 2013. If you find a link is broken, provide feedback and a VMware employee will update the link.
9. Power off the helper virtual machine and detach the disk from it. Keep all default settings and do not delete the VMDK from the disk.
10. Power on the first virtual machine and verify the disk size change.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007266
Increase the datastore size?
Symptoms
When you try to grow or expand a VMFS volume, you experience these symptoms:
- One or more storage devices have been increased in capacity from the storage array.
- When you click Increase, there are no available devices to select.
- When you click Increase, a device is listed but does not have Expandable = Yes.
- When you select the volume and click Next, you see the error:
  failed to update disk partition information
Purpose
This article provides the steps to increase the capacity (grow or extend) a VMFS datastore
successfully.
Note: Increasing the size of the backing storage device on the storage array is outside the
scope of this article, and a prerequisite before modifying the size of the VMFS Datastore
filesystem. VMware vSphere cannot modify the size of a LUN or other storage device on the
array. Modifying the size of an array device must be done using the storage array vendor's
management tools. For more information, contact the storage array vendor.
Note: This method only works for non-Local non-Boot devices. For Local VMFS datastores,
see Growing a local datastore from the command-line in vSphere ESX 4.x
(1009125) or Growing a local datastore from the command line in vSphere ESXi 4.x and 5.x
(2002461).
Resolution
To increase the capacity of a VMFS datastore:
1. In vCenter Server, select the Datastores view.
2. Select the datastore you want to grow and identify the host that has the most virtual machines running on it.
3. Open another vSphere client that connects directly to the ESX host.
4. Go to Configuration > Storage adapters and perform a rescan. For more
information, see Performing a rescan of the storage on an ESX/ESXi host (1003988).
5. Go to Configuration > Storage, click the datastore that you want to grow, and
click Properties.
6. Ensure that the new size of the device is listed in the Extent Device list. If the
increased size is not reflected, review the changes on the storage array and rescan
again.
7. Click Increase.
8. Select a device from the list of storage devices for which the Expandable column is
Yes and click Next.
9. Set the capacity for the extent. The default capacity for the extent is the entire free
space on the storage device. VMware recommends you to use the default setting.
10. Click Next.
11. After the process completes, go to vCenter Server, right-click the cluster that sees
the expanded datastore, and click Rescan for Datastores. For more information,
see Performing a rescan of the storage on an ESX/ESXi host (1003988).
12. If there are other hosts that see the expanded datastore, perform a rescan on these
hosts also.
Note: If the LUN experiences a high I/O throughput when growing the VMFS, the ESX host may not be able to complete the operation. In such a case, repeat the process during non-business hours and when backup operations are not running. If the problem persists, power off some of the virtual machines residing on the LUN and then retry.
For more information, see Adding an extent to a VMFS volume fails after increasing local storage space (1002821).
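The rescans in steps 4 and 11 can also be driven from the ESXi shell; a minimal sketch, assuming ESXi 5.x (esxcli rescans the HBAs for new or resized devices, vmkfstools -V refreshes the VMFS volume information):

~ # esxcli storage core adapter rescan --all   # rescan all storage adapters
~ # vmkfstools -V                              # re-read VMFS volume information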
Additional Information
Notes:
- If a shared datastore has powered on virtual machines and becomes 100% full, you can increase the datastore's capacity only from the host on which the powered-on virtual machines are registered.
- As an alternative workaround, you can free up additional storage space by performing a Storage vMotion to migrate some of the virtual machines to a datastore with more available space.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017662
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2002461
VMFS version?
cloning?
How to remove the host from the cluster?
To move an ESXi/ESX host from one VirtualCenter Server/vCenter Server to another, remove the host from the existing VirtualCenter Server/vCenter Server, then add it to the new one. This operation does not affect the state of any virtual machines running on the ESXi/ESX host; however, the historical performance data of both the host and its virtual machines will be purged.
Removing the ESXi/ESX host from VirtualCenter Server/vCenter Server
To remove the ESXi/ESX host from VirtualCenter Server/vCenter Server:
1. If the managed host is in a cluster, right-click the cluster. Set the Distributed Resource Scheduler (DRS) mode to manual and disable VMware High Availability by deselecting Configure HA.
2. Click OK and wait for the reconfiguration to complete.
3. Click Inventory in the navigation bar, expand the inventory as needed, and click the
appropriate managed host.
4. Right-click the managed host icon in the inventory panel and choose Disconnect (wait
for the task to complete).
5. Right-click the managed host icon in the inventory panel and choose Remove.
6. Click Yes to confirm that you want to remove the managed host and all its associated
virtual machines.
Adding the ESXi/ESX host to a new VirtualCenter Server/vCenter Server
To add the ESXi/ESX host to a new VirtualCenter Server/vCenter Server:
1. Connect VMware Infrastructure Client/vSphere Client/vSphere Web Client to the new
VirtualCenter Server/vCenter Server.
2. Click Inventory in the navigation bar.
3. Expand the inventory as needed, and click the appropriate datacenter or cluster.
4. Click File > New > Add Host.
5. In the first page of the Add Host wizard, enter the name or IP address of the managed
host in the Host name field.
6. Enter the username and password for a user account that has administrative
privileges on the selected managed host.
7. Click Next.

How to troubleshoot PSOD?
We received a lot of Purple Screen of Death (PSOD) errors on ESXi 5.5 hosts. After analyzing the logs and researching the error, we found that it is a known issue with the hpsa (HP ProLiant Smart Array Controller) driver affecting ESXi 5.5. It causes a memory leak associated with device rescans, resulting in out-of-memory conditions and a potential PSOD. HP has released the latest version of the HPSA driver (5.5.0.60-1) for vSphere 5.5.
The following issues were addressed as part of this latest HPSA driver version:
- Fixed a memory leak associated with device rescans resulting in out of memory conditions and a potential PSOD.
- Fixed a null pointer dereference in error handling code that can cause a PSOD in rare cases when device inquiries fail.
- Restored the LUN numbering policy to start with 1 instead of 0, avoiding potential issues with Raw Device Maps.
- Enabled 64-bit DMA mapping instead of the default 32-bit mapping.
- Improved null pointer checks in device rescanning code, avoiding a potential PSOD.
- Restored the maximum outstanding command count, removing an artificial limitation that could impact performance.
- Restored support for the legacy HP Smart Array P700m controller.
To identify the current version of the HPSA driver installed on your ESXi 5.5 host, execute the command below:
~ # vmkload_mod -s hpsa | grep Version
Version: Version 5.5.0.58-1OEM, Build: 1331820, Interface: 9.2 Built on: Dec 16 2013
A driver version of HP HPSA Driver (v 5.5.0.58-1OEM) means the affected driver is in use.
How to install the latest HPSA driver for ESXi 5.5:
Download the HPSA driver VIB file (scsi-hpsa-5.5.0.60-1OEM.550.0.0.1331820.x86_64.vib) for vSphere 5.5 and upload it to your ESXi host datastore.
1. Place your ESXi host into Maintenance Mode.
2. Install the hpsa driver VIB using the following command:
esxcli software vib install -v /tmp/scsi-hpsa-5.5.0.60-1OEM.550.0.0.1331820.x86_64.vib
3. Reboot the ESXi host for the changes to take effect.
4. Verify the HPSA driver version after the installation using the command below:
~ # vmkload_mod -s hpsa | grep Version
Version: Version 5.5.0.60-1OEM, Build: 1331820, Interface: 9.2 Built on: May 15 2014
5. Exit your ESXi host from Maintenance Mode.
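The same sequence can be scripted from the ESXi shell; a minimal sketch using vim-cmd (the VIB path matches the example above and assumes the file was uploaded to /tmp):

~ # vim-cmd hostsvc/maintenance_mode_enter
~ # esxcli software vib install -v /tmp/scsi-hpsa-5.5.0.60-1OEM.550.0.0.1331820.x86_64.vib
~ # reboot
# ...after the host is back up:
~ # vmkload_mod -s hpsa | grep Version
~ # vim-cmd hostsvc/maintenance_mode_exit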
http://www.vmwarearena.com/2014/07/troubleshoot-psod-on-esxi-5-5-due-to-hpsa-driver-55-0-58-1.html
https://communities.vmware.com/thread/472795
What is WWN? And WWPN?
Zoning and masking configurations usually use the WWPN (World Wide Port Name), which is assigned to a single port. With a single-port HBA you have one WWN/WWNN (World Wide Node Name) and a single WWPN; multi-port cards still have a single WWN/WWNN but multiple WWPNs, which may be used for different zones.
HOW TO: Find the HBA WWN number on a VMware vSphere ESX server
There are several ways to get HBA WWNs on a VMware vSphere ESX/ESXi host:
1. vSphere Client;
2. Using the ESXi Shell;
3. Using a Powershell / PowerCLI script.
1. Connect to a server or vCenter, open the server Configuration tab, and under Hardware select Storage Adapters. You can also copy the WWNN (World Wide Node Name) and WWPN (World Wide Port Name) from here.
2. How to find the HBA WWN via the ESXi Shell / CLI:
VMware vSphere ESXi 5.0+:
~ # esxcli storage core adapter list
HBA Name  Driver        Link State  UID                                    Description
--------  ------------  ----------  -------------------------------------  -----------
vmhba0    megaraid_sas  link-n/a    unknown.vmhba0                         (0:1:0.0) LSI / Symbios Logic MegaRAID SAS SKINNY Controller
vmhba1    fnic          link-up     fc.20000025b5020110:20000025b502a121   (0:8:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
vmhba2    fnic          link-up     fc.20000025b5020110:20000025b502a120   (0:9:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
VMware ESX/ESXi 2.1.0 - 4.1.x:
~ # esxcfg-scsidevs -a
vmhba0  megaraid_sas  link-n/a  unknown.vmhba0                         (0:1:0.0) LSI / Symbios Logic MegaRAID SAS SKINNY Controller
vmhba1  fnic          link-up   fc.20000025b5020110:20000025b502a121   (0:8:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
vmhba2  fnic          link-up   fc.20000025b5020110:20000025b502a120   (0:9:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
OR
- Connect to the ESXi shell either via putty/SSH or the DCUI (Direct Console User Interface) / server console.
- Run ls /proc/scsi/ and check the folder names:
~ # ls /proc/scsi/
mptsas  qla2xxx
- Look for a folder such as qla2xxx (QLogic HBA), lpfc820 (Emulex HBA), or bnx2i (Broadcom HBA).
- Run ls /proc/scsi/qla2xxx. You will get a list of files, named by a number. Each file contains information about one HBA:
~ # ls /proc/scsi/qla2xxx/
6  7
- Now run cat /proc/scsi/qla2xxx/6 to get full info on the HBA. Alternatively, run the following commands:
  - Run cat /proc/scsi/qla2xxx/6 | grep -A3 'SCSI Device Information:' to get the WWNN and WWPNs:
~ # cat /proc/scsi/qla2xxx/6 | grep -A3 'SCSI Device Information:'
SCSI Device Information:
scsi-qla0-adapter-node=20000024ff31f0c8:000000:0;
scsi-qla0-adapter-port=21000024ff31f0c8:000000:0;
  - Run cat /proc/scsi/qla2xxx/6 | grep 'Host Device Name' to get the vmhba number:
~ # cat /proc/scsi/qla2xxx/6 | grep 'Host Device Name'
Host Device Name vmhba3
3. Powershell script to list the host name, vmhba number, HBA model / driver, and World Wide Port Name (WWPN):

$scope = Get-VMHost # All hosts connected in vCenter
#$scope = Get-Cluster -Name 'MyCluster' | Get-VMHost # All hosts in a specific cluster
foreach ($esx in $scope){
    Write-Host "Host:", $esx
    $hbas = Get-VMHostHba -VMHost $esx -Type FibreChannel
    foreach ($hba in $hbas){
        $wwpn = "{0:x}" -f $hba.PortWorldWideName
        Write-Host `t $hba.Device, "|", $hba.model, "|", "World Wide Port Name:" $wwpn
    }
}

Result:
Host: ESXi5-001.vstrong.info
vmhba1 | Cisco VIC FCoE HBA Driver | World Wide Port Name: 20000025b502a101
vmhba2 | Cisco VIC FCoE HBA Driver | World Wide Port Name: 20000025b502a100
What is CHAP authentication?
ESXi supports unidirectional CHAP for all types of iSCSI initiators, and bidirectional CHAP for software and dependent hardware iSCSI.
Before configuring CHAP, check whether CHAP is enabled at the iSCSI storage system and check the CHAP authentication method the system supports. If CHAP is enabled, enable it for your initiators, making sure that the CHAP authentication credentials match the credentials on the iSCSI storage.
ESXi supports the following CHAP authentication methods:
- Unidirectional CHAP: The target authenticates the initiator, but the initiator does not authenticate the target.
- Bidirectional CHAP: An additional level of security enables the initiator to authenticate the target. VMware supports this method for software and dependent hardware iSCSI adapters only.
For software and dependent hardware iSCSI adapters, you can set unidirectional CHAP and bidirectional CHAP for each adapter or at the target level. Independent hardware iSCSI supports CHAP only at the adapter level.
When you set the CHAP parameters, specify a security level for CHAP.
Note: When you specify the CHAP security level, how the storage array responds depends on the array's CHAP implementation and is vendor specific. For information on CHAP authentication behavior in different initiator and target configurations, consult the array documentation.
CHAP Security Levels

None
  Description: The host does not use CHAP authentication. Select this option to disable authentication if it is currently enabled.
  Supported: Software iSCSI, Dependent hardware iSCSI, Independent hardware iSCSI

Use unidirectional CHAP if required by target
  Description: The host prefers a non-CHAP connection, but can use a CHAP connection if required by the target.
  Supported: Software iSCSI, Dependent hardware iSCSI

Use unidirectional CHAP unless prohibited by target
  Description: The host prefers CHAP, but can use non-CHAP connections if the target does not support CHAP.
  Supported: Software iSCSI, Dependent hardware iSCSI, Independent hardware iSCSI

Use unidirectional CHAP
  Description: The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails.
  Supported: Software iSCSI, Dependent hardware iSCSI, Independent hardware iSCSI

Use bidirectional CHAP
  Description: The host and the target support bidirectional CHAP.
  Supported: Software iSCSI, Dependent hardware iSCSI
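For reference, CHAP can also be configured per adapter from the ESXi 5.x shell; a hedged sketch (vmhba33, the user name, and the secret are placeholders, and the exact option set of esxcli iscsi adapter auth chap set should be verified against your build):

~ # esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=iscsi-user --secret=MySecret123
~ # esxcli iscsi adapter auth chap get --adapter=vmhba33   # verify the configured setting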
How to discover a LUN?
There are two methods used to obtain the multipath information from the ESX host:
- ESX command line: use the command line to obtain the multipath information when performing troubleshooting procedures.
- VMware Infrastructure/vSphere Client: use this option when you are performing system maintenance.
ESXi 5.x / ESXi 5.5
Command line
To obtain LUN multipathing information from the ESXi host command line:
1. Log in to the ESXi host console.
2. Type esxcli storage core path list to get detailed information regarding the paths. For example:
fc.5001438005685fb7:5001438005685fb6-fc.5006048c536915af:5006048c536915af-naa.60060480000290301014533030303130
   UID: fc.5001438005685fb7:5001438005685fb6-fc.5006048c536915af:5006048c536915af-naa.60060480000290301014533030303130
   Runtime Name: vmhba1:C0:T0:L0
   Device: naa.60060480000290301014533030303130
   Device Display Name: EMC Fibre Channel Disk (naa.60060480000290301014533030303130)
   Adapter: vmhba1
   Channel: 0
   Target: 0
   LUN: 0
   Plugin: NMP
   State: active
   Transport: fc
   Adapter Identifier: fc.5001438005685fb7:5001438005685fb6
   Target Identifier: fc.5006048c536915af:5006048c536915af
   Adapter Transport Details: WWNN: 50:01:43:80:05:68:5f:b7 WWPN: 50:01:43:80:05:68:5f:b6
   Target Transport Details: WWNN: 50:06:04:8c:53:69:15:af WWPN: 50:06:04:8c:53:69:15:af

3. Type esxcli storage core path list -d <naaID> to list the detailed information of the
corresponding paths for a specific device.

The command esxcli storage nmp device list lists of LUN multipathing information:
naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk
(naa.60060480000290301014533030303130)
Storage Array Type: VMW_SATP_SYMM
Storage Array Type Device Config: SATP VMW_SATP_SYMM does not support device
configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config:
{preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T1:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T1:L0
Notes:

For information on multipathing and path selection options, see Multipathing policies
in ESX/ESXi 4.x and ESXi 5.x (1011340).

If a Connect to localhost failed: Connection failure message is received, the hostd


management agent process may not be running, which is required to use esxcli. In
this situation, you can use localcli instead of esxcli

For more information, see the 5.5 Command Line Reference Guide

vSphere Client
To obtain multipath settings for your storage in the vSphere Client:
1. Select an ESX/ESXi host, and click the Configuration tab.
2. Click Storage.
3. Select a datastore or mapped LUN.
4. Click Properties.
5. In the Properties dialog, select the desired extent, if necessary.
6. Click Extent Device > Manage Paths and obtain the paths in the Manage Paths dialog.
For information on multipathing options, see Multipathing policies in ESXi 5.x and ESXi/ESX 4.x (1011340).
How to set up maintenance mode? And the procedure?
First make sure there is no removable media attached to any VMs running on that ESXi host (e.g. an ISO image mounted from a datastore or a shared CD-ROM).
Then right-click the host and enter maintenance mode, check the box, and vMotion will take care of the rest. You do not have to disable HA and DRS.
I have my DRS set to fully automated, applying 1 - 4 recommendations. It has been that way for years. I'm running about 50 guests on 3 ESXi 5.5 hosts.
On a side note, if you are doing this to patch a system you should remediate it from the Update Manager tab. This pops up a wizard that will detach all removable media from guests if required, then put the host in maintenance mode, install patches, reboot, re-attach, and exit maintenance mode all on its own.
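If you prefer the command line, the same action is available via esxcli on ESXi 5.x; a minimal sketch (note that when the request does not go through vCenter, DRS will not evacuate the VMs for you, so they must be migrated or powered off first):

~ # esxcli system maintenanceMode set --enable true    # enter maintenance mode
~ # esxcli system maintenanceMode get                  # should report: Enabled
~ # esxcli system maintenanceMode set --enable false   # exit maintenance mode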
What is a resource pool?
A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources.
Each standalone host and each DRS cluster has an (invisible) root resource pool that groups the resources of that host or cluster. The root resource pool does not appear because the resources of the host (or cluster) and the root resource pool are always the same.
Users can create child resource pools of the root resource pool or of any user-created child resource pool. Each child resource pool owns some of the parent's resources and can, in turn, have a hierarchy of child resource pools to represent successively smaller units of computational capability.
A resource pool can contain child resource pools, virtual machines, or both. You can create a hierarchy of shared resources. The resource pools at a higher level are called parent resource pools. Resource pools and virtual machines that are at the same level are called siblings. The cluster itself represents the root resource pool. If you do not create child resource pools, only the root resource pools exist.
In the following example, RP-QA is the parent resource pool for RP-QA-UI. RP-Marketing and RP-QA are siblings. The three virtual machines immediately below RP-Marketing are also siblings.
[Figure: Parents, Children, and Siblings in Resource Pool Hierarchy]
For each resource pool, you specify reservation, limit, shares, and whether the reservation should be expandable. The resource pool resources are then available to child resource pools and virtual machines.
VMware user and admin rights?
How to create a non-root account with Administrator capabilities on ESX
As per the ESX Server Configuration Guide:
1. To add a user to the Users table:
a. Log in to the host using the vSphere Client, using the root userid.
b. Click the Local Users & Groups tab and click Users.
c. Right-click anywhere in the Users table and click Add to open the Add New User dialog.
d. Enter a login name, a user name, and a password.
Note: The vSphere Client automatically assigns the next available UID to the user on the ESX host. You can overwrite the populated field.
e. Create a password that meets the length and complexity requirements. However, the ESX host checks for password compliance only if you have switched to the pam_passwdqc.so plug-in for authentication; the password settings in the default authentication plug-in, pam_cracklib.so, are not enforced.
f. To allow a user to access the ESX host through a command shell, select Grant shell access to this user. In general, do not grant shell access unless the user has a justifiable need. Users that access the host only through the vSphere Client do not need shell access.
g. To add the user to a group, select the group name from the Group drop-down menu and click Add.
h. Click OK.
2. To assign the Administrator role, select the Permissions tab, also in the local host vSphere Client session, and then:
a. Right-click and select Add Permission.
b. Select Administrator from the Assigned Role drop-down box.
c. Click Add to bring up a list of available users.
d. Select the user you added in Step 1 and click Add, then OK.
e. Click OK.
At this point, you should be able to log in to the ESX host with that user using the vSphere Client.
https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-670B9B8C-3810-4790-AC83-57142A9FE16F.html
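On classic ESX (which has a service console), the same local user can alternatively be created from the shell; a minimal sketch with a placeholder user name (the Administrator role must still be granted through the vSphere Client Permissions tab as in step 2 above):

# useradd -m jdoe     # create the local account on the ESX host
# passwd jdoe         # set a compliant password for the new user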
ESX and ESXi architecture?
Feature Summary
VMware vSphere 5.5 is the latest release of the flagship virtualization platform from VMware. VMware vSphere, known in many circles as "ESXi" for the name of the underlying hypervisor architecture, is a bare-metal hypervisor that installs directly on top of your physical server and partitions it into multiple virtual machines. Each virtual machine shares the same physical resources as the other virtual machines and they can all run at the same time. Unlike other hypervisors, all management functionality of vSphere is possible through remote management tools. There is no underlying operating system, reducing the install footprint to less than 150 MB.
Improved Reliability and Security
The ESXi bare-metal hypervisor's management functionality is in the VMkernel, reducing the footprint to 150 MB. This gives it a very small attack surface for malware and over-the-network threats, improving reliability and security.
Streamlined Deployment and Configuration
With few configuration options and simple deployment and configuration, the ESXi architecture makes it easy to maintain a consistent virtual infrastructure.
Reduced Management Overhead
vSphere ESXi uses an agentless approach to hardware monitoring and system management with an API-based partner integration model. Management tasks are performed on remote command lines with the vSphere Command Line Interface (vCLI) and PowerCLI, which uses Windows PowerShell cmdlets and scripts for automated management.
Simplified Hypervisor Patching and Updating
Fewer patches mean smaller maintenance windows and fewer scheduled maintenance windows.
What is the service console?
It's time for another post in my all-new back to basics series. That's my term for wiping down my lab environment, deploying vSphere 5.5, and trying to reacquaint myself with all that vSphere knowledge that was once at my fingertips. This time it's the turn of the DCUI.
The Direct Console User Interface (DCUI) is the front-end management system that allows for some basic configuration changes and troubleshooting options should the VMware ESXi host become unmanageable via conventional tools such as the vSphere Client or vCenter.
Typical administration tasks include:
- Reset root password
- Configure Lockdown mode
- Configure, Restart, Test and Restore the VMware ESX Management Network
- Restart Management Agents
- Configure Keyboard
- Troubleshoot
- View System Logs
- Reset System Configuration (Factory Reset)
- Shutdown/Restart the VMware ESX Host
Most actions are carried out by using [F2] on the keyboard or [F11] to confirm changes, along with typical options such as [Y] and [N] to various system prompts. Before carrying out any task you will be required to supply the root password. However, the first law of security is to secure the physical server, so take care to ensure your access to ILO/RAC/BMC interfaces is properly secured. Although the VMware ESX host can be rebooted from the DCUI, this is regarded as an action of last resort. If the VMware ESX host has running VMs these will crash, and may or may not be restarted on other hosts depending on whether they are part of a cluster.
http://blogs.vmware.com/smb/2013/12/back-to-basics-managing-vmware-esxi-5-5-direct-user-interface-dcui.html
https://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vcli.migration.doc%2Fcos_upgrade_technote.1.1.html
What is the VMkernel?
How to upgrade the ESX server?
Purpose
This article provides best practice information about installing or upgrading to ESXi 5.5.
Notes:
- This article assumes that you have read the vSphere Installation and Setup Guide for ESXi 5.5 installation or the vSphere Upgrade Guide for ESXi 5.5 upgrades. These guides contain definitive information. If there is a discrepancy between the guide and this article, assume that the guide is correct.
- vCenter Server is upgraded to version 5.5 before upgrading ESXi to version 5.5. See:
  - Upgrading to vCenter Server 5.5 best practices (2053132)
  - Installing vCenter Server 5.5 best practices (2052334)
- VMware provides several ways to install or upgrade to ESXi 5.5 hosts. For more information, see:
  - Methods of installing ESXi 5.5 (2052439)
  - Methods of upgrading to ESXi 5.5 (2058352)
Note: These methods include Interactive ESXi Installation, Scripted ESXi Installation, and Customizing Installations with ESXi Image Builder CLI.
ESXi 5.5 System Requirements
When installing or upgrading to ESXi 5.5, ensure that the host meets these minimum hardware configurations supported by ESXi 5.5:
1. Your hardware is compliant on the VMware Compatibility Guide. This includes:
  - System compatibility
  - I/O compatibility (Network and HBA cards)
  - Storage compatibility
  - Backup software compatibility
2. You have a 64-bit processor. VMware ESXi 5.5 only installs and runs on servers with 64-bit x86 CPUs, and it requires support for the LAHF and SAHF CPU instructions.
3. You have an ESXi 5.5 host machine with at least two cores.
4. The NX/XD bit is enabled for the CPU in the BIOS.
5. Your processor is supported. ESXi supports a broad range of x64 multicore processors. For a complete list of supported processors, see the VMware Compatibility Guide.
6. You have 4 GB RAM. This is the minimum required to install ESXi 5.5. Provide at least 8 GB of RAM to take full advantage of ESXi features and run virtual machines in typical production environments.
7. Support for hardware virtualization (Intel VT-x or AMD RVI) is enabled on x64 CPUs (to support 64-bit virtual machines). For a complete list of operating systems supported with ESXi, see the VMware Compatibility Guide. Hosts running virtual machines with 64-bit guest operating systems have these hardware requirements:
  - For AMD Opteron-based systems, the processors must be Opteron Rev E or later.
  - For Intel Xeon-based systems, the processors must include support for Intel Virtualization Technology (VT). Many servers that include CPUs with VT support might have VT disabled by default, so you must enable VT manually. If your CPUs support VT but you do not see this option in the BIOS, contact your vendor to request a BIOS version that lets you enable VT support.
  Note: To determine whether your server has 64-bit VMware support, download the CPU Identification Utility from the VMware Website.
8. You have one or more Gigabit or 10 GB Ethernet controllers. For a list of supported network adapter models, see the VMware Compatibility Guide.
9. You have storage controllers with any combination of one or more of:
  - Basic SCSI controllers: Adaptec Ultra-160 or Ultra-320, LSI Logic Fusion-MPT, or most NCR/Symbios SCSI.
  - RAID controllers: Dell PERC (Adaptec RAID or LSI MegaRAID), HP Smart Array RAID, or IBM (Adaptec) ServeRAID controllers.
10. You have a SCSI disk or a local, non-network RAID LUN with unpartitioned space for the virtual machines.
11. For Serial ATA (SATA), a disk connected through supported SAS controllers or supported on-board SATA controllers. SATA disks are considered to be remote, not local. These disks are not used as a scratch partition by default because they are seen as remote.
Note: You cannot connect a SATA CD-ROM device to a virtual machine on an ESXi 5.5 host. To use the SATA CD-ROM device, you must use IDE emulation mode.
12. You are using a supported storage system. ESXi 5.5 supports installing on and booting from these storage systems:
  - SATA disk drives connected behind supported SAS controllers:
    - LSI1068E (LSISAS3442E)
    - LSI1068 (SAS 5)
    - IBM ServeRAID 8K SAS controller
    - Smart Array P400/256 controller
    - Dell PERC 5.0.1 controller
  - SATA disk drives on supported on-board SATA controllers:
    - Intel ICH9
    - NVIDIA MCP55
    - ServerWorks HT1000
  Note: ESXi does not support using local, internal SATA drives on the host server to create VMFS datastores that are shared across multiple ESXi hosts.
13. You have Serial Attached SCSI (SAS) disk drives supported for installing ESXi 5.5 and for storing virtual machines on VMFS partitions.
14. You have a dedicated SAN disk on Fibre Channel or iSCSI.
15. You have USB devices that are supported for installing ESXi.
16. You can install and boot ESXi from an FCoE LUN using VMware software FCoE adapters and network adapters with FCoE offload capabilities. See the vSphere Storage documentation for information about installing and booting ESXi with software FCoE.
ESXi booting requirements
vSphere 5.5 supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI). With UEFI you can boot systems from hard drives, CD-ROM drives, or USB media. Network booting or provisioning with VMware Auto Deploy requires the legacy BIOS firmware and is not available with UEFI.
ESXi can boot from a disk larger than 2 TB, provided that the system firmware and the firmware on any add-in card that you are using support it. For more information, see the vendor documentation.
Note: Changing the boot type from legacy BIOS to UEFI after you install ESXi 5.5 may cause the host to fail to boot. In this case, the host reports an error similar to: Not a VMware boot bank. Changing the host boot type between legacy BIOS and UEFI is not supported after you install ESXi 5.5.
Storage requirements
ESXi 5.5 has these storage requirements:
- Installing ESXi 5.5 requires a boot device that is a minimum of 1 GB in size. When booting from a local disk or SAN/iSCSI LUN, a 5.2 GB disk is required to allow the creation of the VMFS volume and a 4 GB scratch partition on the boot device. If a smaller disk or LUN is used, the installer attempts to allocate a scratch region on a separate local disk. If a local disk cannot be found, the scratch partition (/scratch) is located on the ESXi host ramdisk, linked to /tmp/scratch. You can reconfigure /scratch to use a separate disk or LUN. For best performance and memory optimization, VMware recommends that you do not leave /scratch on the ESXi host ramdisk.
- To reconfigure /scratch, see Set the Scratch Partition from the vSphere Client in the vSphere Installation and Setup Guide.
- When installing ESXi onto a USB flash drive or SD flash card, if the drive is less than 8 GB, this prevents the allocation of a scratch partition onto the flash device. VMware recommends using a retail-purchased USB flash drive of 16 GB or larger so that the "extra" flash cells can prolong the life of the boot media, but high quality parts of 4 GB or larger are sufficient to hold the extended coredump partition.
- Due to the I/O sensitivity of USB and SD devices, the installer does not create a scratch partition on these devices. When installing on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation.
- In Auto Deploy installations, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, the /scratch directory is placed on ramdisk. Reconfigure the /scratch directory to use a persistent datastore following the installation.
- For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN should be weighed against the LUN size and the I/O behavior of the virtual machines.
Best practices for upgrading or migrating ESXi hosts


For a successful upgrade or migration, follow these best practices:
1. If your vSphere system includes VMware solutions or plug-ins, ensure that they are
compatible with the vCenter Server version that you are upgrading to. For more
information, see the VMware Product Interoperability Matrix.
2. Read Preparing to Upgrade Hosts in the vSphere Upgrade Guide to understand the
changes in configuration and partitioning between ESX/ESXi 4.x and ESXi 5.x, the
upgrade and migration scenarios that are supported, and the options and tools
available to perform the upgrade or migration.
3. Read the VMware vSphere Release Notes for known installation issues.
4. If your vSphere installation is in a VMware View environment, see Upgrading vSphere
Components Separately in a Horizon View Environment in the vSphere Upgrade
Guide.
To prepare your system for the upgrade:
1. Ensure that your current ESX/ESXi version is supported for migration or upgrade. For
more information, see Supported Upgrades to ESXi 5.5 in the vSphere Upgrade
Guide.
2. Ensure that your system hardware complies with the ESXi requirements above. For more
information, see the System Requirements section in the vSphere Upgrade Guide and
the VMware Compatibility Guide. Check for system compatibility, I/O compatibility
(network and HBA cards), storage compatibility, and backup software compatibility.
3. Ensure that sufficient disk space is available on the host for the upgrade or migration.
Migrating from ESX 4.x to ESXi 5.x requires 50MB of free space on your VMFS
datastore (a scripted check appears after this list).
4. If a SAN is connected to the host, detach the Fibre Channel connections before
continuing with the upgrade or migration. Do not disable HBA cards in the BIOS.
Note: This step does not apply to ESX hosts that boot from the SAN and have the
Service Console on the SAN LUNs. You can disconnect LUNs that contain the VMFS
datastore and do not contain the Service Console.
• VMware strongly recommends that you back up your host before performing an
upgrade or migration, so that, if the upgrade fails, you can restore your host.
Important: After upgrading or migrating your host to ESXi 5.x, you cannot roll back
to the earlier version.
• Depending on the upgrade or migration method you choose, you may have to
migrate or power off all virtual machines on the host.
• After the upgrade or migration, test the system to ensure that the upgrade or
migration completed successfully.
• Reapply your host licenses. For more information, see the Reapplying Licenses After
Upgrading to ESXi 5.5 section in the vSphere Upgrade Guide.
• Consider setting up a syslog server for remote logging, to ensure sufficient disk
storage for log files. Setting up logging on a remote host is especially important for
hosts with limited local storage. Optionally, you can install the vSphere Syslog
Collector to collect logs from all hosts. See Providing Sufficient Space for System
Logging. For information about setting up and configuring syslog and a syslog server,
setting up syslog from the host profiles interface, and installing vSphere Syslog
Collector, see the vSphere Installation and Setup Guide.
• If the upgrade or migration was unsuccessful, you can restore your host if you have a
valid backup.
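
The 50MB free-space requirement from step 3 above can be verified across all datastores
before starting. A minimal pyVmomi sketch, assuming placeholder vCenter credentials:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

REQUIRED_BYTES = 50 * 1024 * 1024   # 50MB, per the migration requirement above

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        ok = ds.summary.freeSpace >= REQUIRED_BYTES
        print(f"{ds.summary.name}: {ds.summary.freeSpace / 1024**2:.0f}MB free "
              f"-> {'OK' if ok else 'TOO SMALL'}")
    view.Destroy()
finally:
    Disconnect(si)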
Difference between patches and upgrades?
A patch delivers targeted fixes (security updates or bug fixes) for the release you are
already running, without changing the version, whereas an upgrade moves the host to a
newer release, adding features and potentially changing partitioning and configuration,
as described in the best practices above.
What is EVC?
Enhanced vMotion Compatibility (EVC) simplifies vMotion compatibility issues across CPU
generations. EVC automatically configures server CPUs with Intel FlexMigration or AMD-V
Extended Migration technologies to be compatible with older servers.
After EVC is enabled for a cluster in the vCenter Server inventory, all hosts in that cluster are
configured to present identical CPU features and ensure CPU compatibility for vMotion. The
features presented by each host are determined by selecting a predefined EVC baseline.
vCenter Server does not permit the addition of hosts that cannot be automatically
configured to be compatible with the EVC baseline.
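
In vSphere 5.5 itself, EVC is enabled from the cluster settings in the vSphere Web Client.
Later vSphere releases also expose this through the ClusterEVCManager API; the following
pyVmomi sketch assumes that API is available, and the cluster name and mode key
("intel-westmere") are placeholders. The set of valid keys for a cluster can be read from
the manager's evcState.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Prod-Cluster")
    view.Destroy()

    evc_mgr = cluster.EvcManager()
    print("Current EVC mode:", evc_mgr.evcState.currentEVCModeKey)
    print("Supported modes:", [m.key for m in evc_mgr.evcState.supportedEVCMode])

    # vCenter validates the request and fails the task if any host or
    # powered-on VM in the cluster is incompatible with the baseline.
    task = evc_mgr.ConfigureEvcMode_Task("intel-westmere")
    # ... wait on the task before relying on the new baseline ...
finally:
    Disconnect(si)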
Table 1.1: Description of Intel EVC Baselines

Level L0: Intel "Merom" Gen. (Intel Xeon Core 2)
Applies the baseline feature set of Intel "Merom" Generation (Intel Xeon Core 2)
processors to all hosts in the cluster.

Level L1: Intel "Penryn" Gen. (formerly Intel Xeon 45nm Core 2)
Applies the baseline feature set of Intel "Penryn" Generation (Intel Xeon 45nm Core 2)
processors to all hosts in the cluster. Compared to the Intel "Merom" Generation EVC
mode, this EVC mode exposes additional CPU features including SSE4.1.

Level L2: Intel "Nehalem" Gen. (formerly Intel Xeon Core i7)
Applies the baseline feature set of Intel "Nehalem" Generation (Intel Xeon Core i7)
processors to all hosts in the cluster. Compared to the Intel "Penryn" Generation EVC
mode, this EVC mode exposes additional CPU features including SSE4.2 and POPCOUNT.

Level L3: Intel "Westmere" Gen. (formerly Intel Xeon 32nm Core i7)
Applies the baseline feature set of Intel "Westmere" Generation (Intel Xeon 32nm Core i7)
processors to all hosts in the cluster. Compared to the Intel "Nehalem" Generation mode,
this EVC mode exposes additional CPU features including AES and PCLMULQDQ.
Note: Intel i3/i5 Xeon Clarkdale Series processors that do not support AESNI and
PCLMULQDQ cannot be admitted to EVC modes higher than the Intel "Nehalem" Generation mode.
Note: Intel Atom C2300-C2700 processors support the Intel "Westmere" Gen. EVC baseline
although their architecture is different from the architecture of the Intel "Westmere"
Generation processors.

Level L4: Intel "Sandy Bridge" Generation
Applies the baseline feature set of Intel "Sandy Bridge" Generation processors to all
hosts in the cluster. Compared to the Intel "Westmere" Generation mode, this EVC mode
exposes additional CPU features including AVX and XSAVE.
Note: Intel "Sandy Bridge" processors that do not support AESNI and PCLMULQDQ cannot be
admitted to EVC modes higher than the Intel "Nehalem" Generation mode.

Level L5: Intel "Ivy Bridge" Generation
Applies the baseline feature set of Intel "Ivy Bridge" Generation processors to all hosts
in the cluster. Compared to the Intel "Sandy Bridge" Generation EVC mode, this EVC mode
exposes additional CPU features including RDRAND, ENFSTRG, FSGSBASE, SMEP, and F16C.
Note: Some Intel "Ivy Bridge" processors do not provide the full "Ivy Bridge" feature
set. Such processors cannot be admitted to EVC modes higher than the Intel "Nehalem"
Generation mode.
Note: In vCenter Server 5.1 and 5.5, the Intel "Ivy Bridge" Generation option is only
displayed in the Web Client.
Table 1.2: Description of AMD EVC Baselines

Level A0: AMD Opteron Generation 1
Applies the baseline feature set of AMD Opteron Generation 1 (Rev. E) processors to all
hosts in the cluster.

Level A1: AMD Opteron Generation 2
Applies the baseline feature set of AMD Opteron Generation 2 (Rev. F) processors to all
hosts in the cluster. Compared to the AMD Opteron Generation 1 EVC mode, this EVC mode
exposes additional CPU features including CMPXCHG16B and RDTSCP.

Level A3: AMD Opteron Generation 3
Applies the baseline feature set of AMD Opteron Generation 3 (Greyhound) processors to
all hosts in the cluster. Compared to the AMD Opteron Generation 2 EVC mode, this EVC
mode exposes additional CPU features including SSE4A, MisAlignSSE, POPCOUNT, and
ABM (LZCNT).
Note: Because 3DNow! support was removed from AMD processors after mid 2010, use AMD
Opteron Generation 3 (no 3DNow!) when possible to avoid compatibility issues with future
processor generations.

Levels A2, B0: AMD Opteron Generation 3 (no 3DNow!)
Applies the baseline feature set of AMD Opteron Generation 3 (Greyhound) processors,
with 3DNow! support removed, to all hosts in the cluster. This mode allows you to prepare
clusters containing AMD hosts to accept AMD processors without 3DNow! support.

Level B1: AMD Opteron Generation 4
Applies the baseline feature set of AMD Opteron Generation 4 (Bulldozer) processors to
all hosts in the cluster. Compared to the AMD Opteron Generation 3 (no 3DNow!) EVC mode,
this EVC mode exposes additional CPU features including SSSE3, SSE4.1, AES, AVX, XSAVE,
XOP, and FMA4.

Level B2: AMD Opteron "Piledriver" Generation
Applies the baseline feature set of AMD Opteron "Piledriver" Generation processors to
all hosts in the cluster. Compared to the AMD Opteron Generation 4 EVC mode, this EVC
mode exposes additional CPU features including FMA, TBM, BMI1, and F16C.
Note: In vCenter Server 5.1 and 5.5, the AMD Opteron "Piledriver" Generation option is
only displayed in the Web Client.
It is often the case that an older release of vSphere supports a new processor but not the
corresponding new EVC baseline that exposes the maximum guest-visible features of that
processor. A newer vSphere release usually supports both the processor and the new EVC
baseline. This is because the older release can only support those features of the new
processor that are in common with older processors. Therefore, support of an EVC baseline is
not identical to the support of the corresponding processor. Tables 2.1 and 2.2 indicate the
earliest vSphere release that supports each EVC baseline.
As an example, consider the Intel Sandy Bridge Generation EVC baseline and the Intel
Xeon E5-2400 (a processor based on the Intel Sandy Bridge architecture). The processor
is supported by both vSphere 4.1 Update 2 (and later) and vSphere 5.0 (and later). But
because vSphere 4.1 Update 2 lacks support for advanced Sandy Bridge features such as
AVX, the Intel Sandy Bridge Generation EVC baseline is only supported starting with the
vSphere 5.0 release. However, vSphere 4.1 Update 2 does support lower-level EVC baselines
on the Intel Xeon E5-2400, such as Intel Westmere Generation and Intel Merom
Generation.
Not all members of a given processor generation can support the same maximum EVC
baseline. Because of BIOS configuration or branding decisions made by OEMs or CPU
vendors, some members of that generation may lack a feature required to participate at
the maximum EVC baseline. For example, some Intel i3/i5 Clarkdale processors (based
on the Intel Westmere processor architecture) do not have AESNI capability, which is
required for the Intel Westmere Generation EVC baseline. Therefore, these processors
cannot support that EVC baseline and must use lower levels of EVC baselines. Another
example is where AESNI has been disabled by BIOS in an Intel Xeon 5600 processor (also
based on the Intel Westmere processor architecture); as a result, this processor also
cannot support the Intel Westmere EVC baseline and must use lower levels of EVC
baselines.
The VMware Compatibility Guide always correctly lists the maximum EVC baseline for a
processor assuming that no BIOS disablement of features has been enforced. Since disabling
of features by BIOS is OEM and customer specific, the guide cannot address these cases.
Table 2.1: AMD EVC Baselines supported in vCenter Server releases
(All columns are AMD Opteron EVC baselines.)

vCenter Server Release           Gen. 1  Gen. 2  Gen. 3  Gen. 3 (no 3DNow!)  Gen. 4  "Piledriver" Gen.
VirtualCenter 2.5 U2 and later   Yes     No      No      No                  No      No
vCenter Server 4.0               Yes     Yes     Yes     No                  No      No
vCenter Server 4.0 U1 and later  Yes     Yes     Yes     No                  No      No
vCenter Server 4.1               Yes     Yes     Yes     Yes                 No      No
vCenter Server 5.0               Yes     Yes     Yes     Yes                 Yes     No
vCenter Server 5.1               Yes     Yes     Yes     Yes                 Yes     Yes
vCenter Server 5.5               Yes     Yes     Yes     Yes                 Yes     Yes
Table 2.2: Intel EVC Baselines supported in vCenter Server releases
(All columns are Intel EVC Generation baselines.)

vCenter Server Release           "Merom"  "Penryn"  "Nehalem"  "Westmere"  "Sandy Bridge"  "Ivy Bridge"
VirtualCenter 2.5 U2 and later   Yes      No        No         No          No              No
vCenter Server 4.0               Yes      Yes       Yes        No          No              No
vCenter Server 4.0 U1 and later  Yes      Yes       Yes        Yes         No              No
vCenter Server 4.1               Yes      Yes       Yes        Yes         No              No
vCenter Server 5.0               Yes      Yes       Yes        Yes         Yes             No
vCenter Server 5.1               Yes      Yes       Yes        Yes         Yes             Yes
vCenter Server 5.5               Yes      Yes       Yes        Yes         Yes             Yes
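
Condensing the two tables, the newest EVC baseline each vCenter Server release supports
(reading each row's rightmost Yes) can be captured in a small lookup, which is handy when
planning mixed-version environments. A plain Python sketch of the table data above:

# Newest supported EVC baseline per vCenter Server release,
# transcribed from Tables 2.1 and 2.2 above.
MAX_INTEL_BASELINE = {
    "VirtualCenter 2.5 U2": "Merom",
    "vCenter 4.0":          "Nehalem",
    "vCenter 4.0 U1":       "Westmere",
    "vCenter 4.1":          "Westmere",
    "vCenter 5.0":          "Sandy Bridge",
    "vCenter 5.1":          "Ivy Bridge",
    "vCenter 5.5":          "Ivy Bridge",
}
MAX_AMD_BASELINE = {
    "VirtualCenter 2.5 U2": "Opteron Gen. 1",
    "vCenter 4.0":          "Opteron Gen. 3",
    "vCenter 4.0 U1":       "Opteron Gen. 3",
    "vCenter 4.1":          "Opteron Gen. 3 (no 3DNow!)",
    "vCenter 5.0":          "Opteron Gen. 4",
    "vCenter 5.1":          'Opteron "Piledriver"',
    "vCenter 5.5":          'Opteron "Piledriver"',
}

print(MAX_INTEL_BASELINE["vCenter 5.0"])   # -> Sandy Bridge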
How do you export syslog data to VMware customer care?
See: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003322
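
The KB article above covers collecting and exporting log bundles. For the ongoing remote
logging recommended in the upgrade best practices earlier, the standard
Syslog.global.logHost advanced option can be set on every host. A pyVmomi sketch with
placeholder names:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Forward host logs to a remote syslog collector over UDP 514.
        opt = vim.option.OptionValue(key="Syslog.global.logHost",
                                     value="udp://syslog.example.com:514")
        host.configManager.advancedOption.UpdateOptions(changedValue=[opt])
        print("Remote syslog set on", host.name)
    view.Destroy()
finally:
    Disconnect(si)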
What backup mechanisms are available?
What VMware backup types (agents) are there?
What is a template?
What is the difference between a template and a clone?
What is the difference between a hot clone and a cold clone?
How is VMware Consolidated Backup Used?
VMware Consolidated Backup enables:
• Full and incremental file backup of virtual machines for recovery of individual files
and directories
• Full image backup of virtual machines
How do you configure VCB (VMware Consolidated Backup), which port does it use, and how
does it work?
VMware Consolidated Backup enables off-host backup of virtual machines running any
supported operating system from a centralized backup proxy server using backup software
from VMware partners that is already in the environment.
1. A backup job is created for each virtual machine and that job is dispatched to the backup
proxy server.
2. VMware Consolidated Backup takes a virtual machine snapshot and mounts the snapshot
to the backup proxy server. As part of this process, the virtual machine is quiesced to ensure
that the entire state of the virtual machine is captured at the point in time the snapshot is
created.
3. The third-party backup agent, already in place on the backup proxy server, then backs up
the contents of the virtual machine as a virtual disk image or as a set of files and directories
depending on the operating system.
4. Finally, VMware Consolidated Backup tears down the mount and takes the virtual disk out
of snapshot mode.
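
VCB drives steps 2 and 4 through its own tooling, but the underlying snapshot operations
are ordinary vSphere API calls. Below is a pyVmomi sketch of the quiesced-snapshot and
tear-down steps, with placeholder names; real code should wait for each task to complete
before continuing.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-server-01")
    view.Destroy()

    # Step 2 equivalent: quiesce=True asks VMware Tools (VSS on supported
    # Windows guests) to flush in-guest state so the snapshot is consistent;
    # memory=False keeps it a disk-only snapshot, as backup snapshots are.
    vm.CreateSnapshot_Task(name="backup-temp",
                           description="temporary backup snapshot",
                           memory=False, quiesce=True)

    # ... the backup agent would copy the virtual disk contents here ...

    # Step 4 equivalent: delete the snapshot so the disks leave snapshot mode.
    vm.snapshot.currentSnapshot.RemoveSnapshot_Task(removeChildren=False)
finally:
    Disconnect(si)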
Key Features of VMware Consolidated Backup

Integration with leading backup products.
Leverage existing investment in backup agents to move virtual machine data from the
backup proxy server to tape or disk. VMware Consolidated Backup is supported with backup
software from leading backup vendors.
Backup proxy server.
Remove load from VMware ESX hosts by using an off-host backup server. In addition to
running as a standalone server, the backup proxy server can be run inside a virtual machine
for enhanced flexibility.
Image level backup.
Back up and recover entire virtual machine images for virtual machines running any
operating system.
File level full and incremental backup.
Recover individual files and directories with file level full and incremental backups for virtual
machines running supported Windows operating systems.
Broad storage support.
Support for backup of virtual machines residing on Fibre Channel and iSCSI SAN, NAS or local
storage.
Support for Microsoft Volume Snapshot Copy Service (VSS).
Consolidated Backup supports use of VSS for application quiescing on supported platforms
when taking snapshots of virtual machines for backup.
Optimized I/O path for backup.
VMware Consolidated Backup uses a hot add transport mode that delivers optimized
performance for backups done using a virtual machine as a backup proxy server.
How do you install a new VM step by step?
Software iSCSI Adapter
A software iSCSI adapter is VMware code built into the VMkernel. It allows your host to
connect to the iSCSI storage device through standard network adapters. The software iSCSI
adapter handles iSCSI processing while communicating with the network adapter. With the
software iSCSI adapter, you can use iSCSI technology without purchasing specialized
hardware.
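
Enabling the software iSCSI adapter is a one-call operation against the host's storage
system. A minimal pyVmomi sketch, with placeholder names:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")
    view.Destroy()

    # Turns on the VMkernel software iSCSI initiator; the new vmhba
    # then appears under the host's storage adapters.
    host.configManager.storageSystem.UpdateSoftwareInternetScsiEnabled(True)
finally:
    Disconnect(si)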
Hardware iSCSI Adapter
A hardware iSCSI adapter is a third-party adapter that offloads iSCSI and network
processing from your host. Hardware iSCSI adapters are divided into two categories:
dependent and independent.
Dependent Hardware iSCSI Adapter
Depends on VMware networking, and on iSCSI configuration and management interfaces
provided by VMware.
This type of adapter can be a card that presents a standard network adapter and iSCSI
offload functionality for the same port. The iSCSI offload functionality depends on the
host's network configuration to obtain the IP, MAC, and other parameters used for iSCSI
sessions. An example of a dependent adapter is the iSCSI licensed Broadcom 5709 NIC.

Independent Hardware iSCSI Adapter
Implements its own networking and iSCSI configuration and management interfaces.
An example of an independent hardware iSCSI adapter is a card that either presents only
iSCSI offload functionality, or presents both iSCSI offload functionality and standard
NIC functionality. The iSCSI offload functionality has independent configuration
management that assigns the IP, MAC, and other parameters used for the iSCSI sessions.
An example of an independent adapter is the QLogic QLA4052 adapter.
Hardware iSCSI adapters might need to be licensed. Otherwise, they will not appear in the
vSphere Client or vSphere CLI. Contact your vendor for licensing information.
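
One quick way to confirm which iSCSI adapters (software, dependent, or independent
hardware) a host actually exposes is to enumerate its host bus adapters; unlicensed
hardware adapters simply do not appear. A pyVmomi sketch with placeholder credentials:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for hba in host.config.storageDevice.hostBusAdapter:
            # InternetScsiHba covers software and hardware iSCSI adapters.
            if isinstance(hba, vim.host.InternetScsiHba):
                print(host.name, hba.device, hba.model, hba.driver)
    view.Destroy()
finally:
    Disconnect(si)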