Type 1 Hypervisor
A Type 1 hypervisor runs directly on the hardware of the host and can monitor the operating systems that run above it.
The hypervisor is small, as its main task is sharing and managing hardware resources between the different operating systems.
A major advantage is that any problems in one virtual machine or guest operating system do not affect the other guest operating systems running on the hypervisor.
Type 2 Hypervisor
In this case, the hypervisor is installed on an operating system and then supports other operating systems above it.
While having a base operating system allows better specification of policies, any problems in the base operating system affect the entire system as well, even if the hypervisor running above the base OS is secure.
ESX vs ESXi
VMware ESX and ESXi are both bare metal hypervisor architectures that install directly on
the server hardware.
Here I have summarized the comparison with a list of selected features between vSphere ESXi 5.0, 5.1 and 5.5. I have skipped some of the features; for a detailed overview of all the comparison factors, please visit VMware's official website.
ESXi Virtual Machine Maximums

Item                                    vSphere 5.0   vSphere 5.1   vSphere 5.5
Virtual CPUs per VM                     32            64            64
RAM per VM                              1 TB          1 TB          1 TB
Virtual SCSI targets per VM             60            60            60
Virtual disks per VM                    60            60            60
Virtual NICs per VM                     10            10            10
Concurrent remote console connections   40            40            40
Virtual disk size                       2 TB          2 TB          62 TB
Video memory per VM                     128 MB        128 MB        512 MB
ESXi Host Maximums

Item                        vSphere 5.0   vSphere 5.1   vSphere 5.5
Logical CPUs per host       160           160           320
Virtual machines per host   512           512           512
Virtual CPUs per host       2048          2048          4096
Virtual CPUs per core       25            25            32
FT virtual disks            16            16            16
RAM per FT VM (GB)          64            64            64
Memory Maximums

Item                        vSphere 5.0   vSphere 5.1   vSphere 5.5
RAM per host                2 TB          2 TB          4 TB
—                           1 TB          1 TB          NA
Storage Maximums

Item                        vSphere 5.0   vSphere 5.1   vSphere 5.5
Virtual disks per host      2048          2048          2048
LUNs per host               256           256           256
Total paths per host        1024          1024          1024
—                           256           256           256
—                           256           256           256
LUN size                    NA            64 TB         64 TB
Paths to a LUN              32            32            32
—                           1024          1024          1024
HBA ports                   16            16            16
Targets per HBA             256           256           256
—                           128           128           128
Cluster and Resource Pool Maximums

Item                           vSphere 5.0   vSphere 5.1   vSphere 5.5
Hosts per cluster              32            32            32
Virtual machines per cluster   3000          4000          4000
Virtual machines per host      512           512           512
Resource pools per host        1600          1600          1600
Children per resource pool     1024          1024          1024
Resource pools per cluster     1600          1600          1600
vCenter Server Maximums

Item                                     vSphere 5.0   vSphere 5.1   vSphere 5.5
Hosts per vCenter Server                 1000          1000          1000
Powered-on VMs per vCenter Server        10000         10000         10000
—                                        1500          1500          1500
Linked vCenter Server systems            10            10            10
Hosts in linked mode                     3000          3000          3000
Concurrent vSphere Client connections    100           100           100
Hosts per datacenter                     500           500           500
Registered VMs in linked mode            50000         50000         50000
Storage DRS Maximums

Item                                    vSphere 5.0   vSphere 5.1   vSphere 5.5
Virtual disks per datastore cluster     9000          9000          9000
Datastores per datastore cluster        32            32            32
Datastore clusters per vCenter Server   256           256           256
*.vmdk file - This is no longer the file containing the raw data. Instead, it is the disk descriptor file, which describes the size and geometry of the virtual disk. This file is in text format and contains the name of the -flat.vmdk file with which it is associated, as well as the hard drive adapter type, drive sectors, heads, cylinders, etc. One of these files will exist for each virtual hard drive that is assigned to your virtual machine. You can tell which -flat.vmdk file it is associated with by opening the file and looking at the Extent Description field.
*delta.vmdk file - This is the differential file created when you take a snapshot of a VM (also known as a REDO log). When you snapshot a VM, it stops writing to the base vmdk and starts writing changes to the snapshot delta file. The snapshot delta is initially small and then grows as changes are made to the base vmdk file. The delta file records the changes to the base vmdk, so it can never grow larger than the base vmdk. A delta file is created for each snapshot that you create for a VM. These files are automatically deleted when the snapshot is deleted or reverted in Snapshot Manager.
*.vmx file - This is the primary configuration file for a virtual machine. When you create a new virtual machine and configure its hardware settings, that information is stored in this file. The file is in text format and contains entries for the hard disk, network adapters, memory, CPU, ports, power options, etc. You can either edit the file directly, if you know what to add, or use the VMware GUI (Edit Settings on the VM), which updates the file automatically.
*.vswp file - This is the VM swap file (earlier ESX versions had a per-host swap file) and is created to allow for memory overcommitment on an ESX server. The file is created when a VM is powered on and deleted when it is powered off. By default, when you create a VM, the memory reservation is set to zero, meaning no memory is reserved for the VM and it can potentially be 100% overcommitted. As a result, a vswp file is created equal to the amount of memory that the VM is assigned minus the memory reservation that is configured for the VM. So a VM that is configured with 2GB of memory will create a 2GB vswp file when it is powered on; if you set a memory reservation of 1GB, it will only create a 1GB vswp file. If you specify a 2GB reservation, it creates a 0-byte file that it does not use. When you do specify a memory reservation, physical RAM from the host will be reserved for the VM and not usable by any other VMs on that host. A VM will not use its vswp file as long as physical RAM is available on the host. Once all physical RAM on the host is used by its VMs and the host becomes overcommitted, VMs start to use their vswp files instead of physical memory. Since the vswp file is a disk file, it will affect the performance of the VM when this happens. If you specify a reservation and the host does not have enough physical RAM when the VM is powered on, the VM will not start.
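The sizing rule described above (swap file equals configured memory minus reservation) can be sketched as a quick calculation; this illustrates the arithmetic only and is not a VMware API:

```python
def vswp_size_mb(configured_mb: int, reservation_mb: int) -> int:
    """Size of the .vswp file: only the unreserved part of the VM's
    configured memory needs to be backed by swap on disk."""
    if reservation_mb > configured_mb:
        raise ValueError("reservation cannot exceed configured memory")
    return configured_mb - reservation_mb

# A 2 GB VM: no reservation -> 2 GB swap file; 1 GB reservation -> 1 GB;
# a full 2 GB reservation -> a 0-byte file that is never used.
print(vswp_size_mb(2048, 0))     # 2048
print(vswp_size_mb(2048, 1024))  # 1024
print(vswp_size_mb(2048, 2048))  # 0
```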
*.vmss file - This file is created when a VM is put into suspend (pause) mode and is used to save the suspend state. It is basically a copy of the VM's RAM and will be a few megabytes larger than the maximum RAM memory allocated to the VM. If you delete this file while the VM is in a suspended state, the VM will start from a normal boot-up instead of resuming from the state it was in when it was suspended. This file is not automatically deleted when the VM is brought out of suspend mode. Like the vswp file, this file will only be deleted when the VM is powered off (not rebooted). If a vmss file exists from a previous suspend and the VM is suspended again, the previous file is re-used for subsequent suspensions. Also note that if a vswp file is present, it is deleted when a VM is suspended and then re-created when the VM is powered on again. The reason for this is that the VM is essentially powered off in the suspend state; its RAM contents are just preserved in the vmss file so it can be quickly powered back on.
*.log file - This is the file that keeps a log of the virtual machine's activity and is useful in troubleshooting virtual machine problems. Every time a VM is powered off and then back on, a new log file is created. The current log file for the VM is always vmware.log. The older log files are incremented with a -# in the filename, and up to 6 of them will be retained (e.g. vmware-4.log). The older .log files can always be deleted at will; the latest .log file can be deleted when the VM is powered off. As the log files do not take much disk space, most administrators let them be.
*.vmxf file - This is a supplemental configuration file in text format for virtual machines that are in a team. Note that the .vmxf file remains if a virtual machine is removed from the team. Teaming virtual machines is a VMware Workstation feature that lets administrators designate multiple virtual machines as a team, which they can then power on and off, suspend and resume as a single object, making it particularly useful for testing client-server environments. This file still exists for ESX Server virtual machines, but only for compatibility purposes with Workstation.
*.vmsd file - This file is used to store metadata and information about snapshots. It is in text format and contains information such as the snapshot display name, uid, disk file name, etc. It is initially a 0-byte file until you create your first snapshot of a VM; from that point on it is populated and continues to be updated whenever new snapshots are taken. This file does not clean up completely after snapshots are taken. Once you delete a snapshot, the fields for each snapshot remain in the file; the uid is incremented and the name is set to Consolidate Helper, presumably to be used with Consolidated Backups.
*.vmsn file - This is the snapshot state file, which stores the exact running state of a virtual machine at the time you take the snapshot. This file will be small or large depending on whether you choose to preserve the VM's memory as part of the snapshot. If you do preserve the VM's memory, this file will be a few megabytes larger than the maximum RAM memory allocated to the VM. This file is similar to the vmss (suspend) file. A vmsn file is created for each snapshot taken on the VM; these files are automatically deleted when the snapshot is removed.
Snapshot Files
When you take a snapshot, you capture the state of the virtual machine settings and the
virtual disk. If you are taking a memory snapshot, you also capture the memory state of the
virtual machine. These states are saved to files that reside with the virtual machine's base
files.
A snapshot consists of files that are stored on a supported storage device. A Take Snapshot operation creates .vmdk, -delta.vmdk, .vmsd, and .vmsn files. By default, the first and all delta disks are stored with the base .vmdk file. The .vmsd and .vmsn files are stored in the virtual machine directory.
Delta disk files
A .vmdk file to which the guest operating system can write. The delta disk
represents the difference between the current state of the virtual disk and the
state that existed at the time that the previous snapshot was taken. When you
take a snapshot, the state of the virtual disk is preserved, which prevents the
guest operating system from writing to it, and a delta or child disk is created.
A delta disk has two files, including a descriptor file that is small and contains
information about the virtual disk, such as geometry and child-parent
relationship information, and a corresponding file that contains the raw data.
Note
If you are looking at a datastore with the Datastore Browser in the vSphere
Client, you see only one entry to represent both files.
The files that make up the delta disk are referred to as child disks or redo logs. A
child disk is a sparse disk. Sparse disks use the copy-on-write mechanism, in
which the virtual disk contains no data in places, until copied there by a write
operation. This optimization saves storage space. A grain is the unit of measure
in which the sparse disk uses the copy-on-write mechanism. Each grain is a
block of sectors that contain virtual disk data. The default size is 128 sectors or
64KB.
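The copy-on-write behavior described above can be sketched with a toy model (this is an illustration, not VMware's on-disk format): grains of 128 sectors of 512 bytes each, i.e. 64 KB, are allocated only on first write.

```python
SECTOR_BYTES = 512
SECTORS_PER_GRAIN = 128
GRAIN_BYTES = SECTOR_BYTES * SECTORS_PER_GRAIN  # 65536 bytes = 64 KB

class SparseDisk:
    """Toy sparse disk: a grain is materialized only when first written."""

    def __init__(self):
        self._grains = {}  # grain index -> bytearray

    def write(self, offset: int, data: bytes) -> None:
        for i, byte in enumerate(data):
            pos = offset + i
            grain = pos // GRAIN_BYTES
            if grain not in self._grains:
                # copy-on-write: allocate the grain on first touch
                self._grains[grain] = bytearray(GRAIN_BYTES)
            self._grains[grain][pos % GRAIN_BYTES] = byte

    @property
    def allocated_bytes(self) -> int:
        return len(self._grains) * GRAIN_BYTES

disk = SparseDisk()
disk.write(0, b"hello")      # touches only the first grain
print(disk.allocated_bytes)  # 65536 (one 64 KB grain)
```

A five-byte write costs a whole grain, which is the space/overhead trade-off grains exist to balance.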
Flat file
A -flat.vmdk file that is one of the two files that comprise the base disk. The flat disk contains the raw data for the base disk. This file does not appear as a separate file in the Datastore Browser.
Database file
A .vmsd file that contains the virtual machine's snapshot information and is the
primary source of information for the Snapshot Manager. This file contains line
entries, which define the relationships between snapshots and between child
disks for each snapshot.
Memory file
A .vmsn file that includes the active state of the virtual machine. Capturing the
memory state of the virtual machine lets you revert to a turned on virtual
machine state. With nonmemory snapshots, you can only revert to a turned off
virtual machine state. Memory snapshots take longer to create than nonmemory
snapshots. The time the ESX host takes to write the memory onto the disk is
relative to the amount of memory the virtual machine is configured to use.
A Take Snapshot operation creates .vmdk, -delta.vmdk, .vmsd, and .vmsn files. The snapshot database file is named vmname.vmsd, and each snapshot state file is named vmname.Snapshotnumber.vmsn.
vSphere App HA
MSCS Updates
Traffic Filtering
machine anti-affinity rule. So now, before attempting to move any VM during a resource outage, this rule plays its part and the VM is migrated accordingly.
VMware vSphere Data Protection Enhancements
New enhanced features have been added to VMware vSphere Data Protection, a backup and recovery solution for VMware virtual machines. The following enhancements were made:
Backup and restore of individual virtual machine hard disks (.vmdk files): Individual .vmdk files can be selected for backup and restore operations.
Scheduling granularity
Automatically scales
In vSphere 5.5, VMware supports the following features related to Microsoft Cluster Service
(MSCS):
Traffic Filtering
Traffic filtering is the ability to filter packets based on the various parameters of the packet
header. This capability is also referred to as access control lists (ACLs), and it is used to
provide port-level security.
Quality of Service tagging
QoS is responsible for differentiating traffic importance and helps reserve bandwidth accordingly. VMware has supported 802.1p tagging on VDS since vSphere 5.1. The 802.1p tag is inserted in the Ethernet header before the packet is sent out on the physical network. In vSphere 5.5, DSCP marking support enables users to insert tags in the IP header. IP header-level tagging helps in layer 3 environments, where physical routers function better with an IP header tag than with an Ethernet header tag.
40Gb NIC Support
Support for 40Gb NICs on the vSphere platform enables users to take advantage of higher-bandwidth pipes to the servers.
Summary
So this was a brief article on a few of the enhancements and new features added in vSphere 5.5 as compared to earlier versions. To summarize, I have separated the points into two subsections, as shown below:
New features in vSphere 5.5
vSphere App HA
MSCS updates
PDL AutoRemove
Traffic filtering
Comparison of vSphere 5.1 and vSphere 5.5 (selected items)

Item                                vSphere 5.1   vSphere 5.5
Logical CPUs per host               160           320
RAM per host                        2 TB          4 TB
Virtual CPUs per host               2048          4096
VMDK size                           2 TB          62 TB
Virtual RDM size                    2 TB          62 TB
VM hardware version                 9             10
40 GBps physical adapter support    No            Yes
ESXi free version RAM limit         32 GB         Unlimited
ESXi free version maximum vSMP      8-way         8-way
App HA                              No            Yes
vGPU support                        Only NVIDIA   NVIDIA, AMD and Intel
Host power management in vSphere 5.1 leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage.
vSphere Replication in vSphere 5.1 kept only the most recent copy of a virtual machine.
Clone
A clone is an exact copy of a running virtual machine at the time of the cloning process.
Template
Template acts as a baseline image with the
predefined configuration as per organization
standards
The state includes the virtual machine's power state (for example, powered-on, powered-off, suspended).
The data includes all of the files that make up the virtual machine. This includes
disks, memory, and other devices, such as virtual network interface cards.
How to create a new VM template on VMware vSphere?
A VM template is a master copy of a virtual machine which can be used to create new virtual machines in a few clicks. Normally a template is used to create machines of a similar type. For example, to build a web server on Red Hat Linux:
1.
2.
3.
4.
Install Apache.
You have to set things up only for the first VM if you are going to use a template. Using that newly created VM, you can create a template which will act as the master copy for future provisioning. So the bottom line is that a template is a virtual machine with the operating system installed, plus a set of applications installed on that VM.
We can create a new virtual machine template from an existing virtual machine, or you can convert the virtual machine into a template. Here we will see how to create a new template from an existing VM.
1. Log in to the vSphere Client and select the VM from which you want to generate a new template.
Select the VM
2. Right-click the VM and select Clone to Template. If you select Convert to Template, the VM will be converted to a template permanently.
VM template clone
8. Once the clone is completed, click on the VM & Template tab. Here you can see the
template details which you have created by cloning the existing VM.
Template Details
Using the VM template, you can create new virtual machines in a few clicks. But configuring the IP address, setting a unique host name and configuring the applications need to be done manually after creating the new VM.
What is VMware vMotion and what are its requirements?
VMware vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime.
vMotion lets you automatically optimize and allocate entire pools of resources for maximum hardware utilization and availability.
vMotion migrates a VM from one host to another, which is only possible when both hosts share common storage, or storage accessible by both the source and target hosts.
A shared storage can be on a Fibre Channel storage area network (SAN), or can be
implemented using iSCSI SAN and NAS.
If you use vMotion to migrate virtual machines with raw device mapping (RDM) files,
make sure to maintain consistent LUN IDs for RDMs across all participating hosts.
Use at least one 10 GigE adapter if you migrate workloads that have many memory
operations.
Ensure that jumbo frames are enabled on all network devices that are on the vMotion
path including physical NICs, physical switches and virtual switches.
VLAN Tagging in ESX (VST,EST & VGT)
ESX/ESXi host
1.3 Port groups on the virtual switch of the ESX server should be configured with a VLAN ID (1-4094).
1.4 The vSwitch's responsibility is to strip off the VLAN tag and send the packet to the virtual machine in the corresponding port group.
1.5 This reduces the number of physical NICs on the server by running all the VLANs over one physical NIC. A better solution is to keep 2 NICs for redundancy.
1.6 It reduces the number of cables from the ESX server to the physical switch.
1.7 The physical switch port connecting the uplink from the ESX server should be configured as a trunk port.
1.8 A virtual machine network packet is delivered to the vSwitch and, before it is sent to the physical switch, the packet is tagged with the VLAN ID according to the port group membership of the originating virtual machine.
Below is a comparison table for those who want the comparison in a single table.
Thick Provision Lazy Zeroed
Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time, on first write from the virtual machine.
Using the default flat virtual disk format does not zero out or eliminate the possibility of recovering deleted files or restoring old data that might be present on this allocated space. You cannot convert a flat disk to a thin disk.
Thick Provision Eager Zeroed
A type of thick virtual disk that supports clustering features such as Fault
Tolerance. Space required for the virtual disk is allocated at creation time. In
contrast to the flat format, the data remaining on the physical device is
zeroed out when the virtual disk is created. It might take much longer to
create disks in this format than to create other types of disks.
Thin Provision
Use this format to save storage space. For the thin disk, you provision as much datastore space as the disk would require, based on the value that you enter for the disk size. However, the thin disk starts small and, at first, uses only as much datastore space as the disk needs for its initial operations.
What is HA?
VMware HA (High Availability) works at the host level and is configured on the cluster.
A cluster configured with HA will, in case of any host-level failure, automatically restart all the VMs that were running under the failed host on another host in the same cluster.
VMware HA continuously monitors all ESX Server hosts in a cluster and detects
failures.
A VMware HA agent placed on each host maintains a heartbeat with the other hosts in the cluster using the service console network. Each server sends heartbeats to the other servers in the cluster at five-second intervals. If any server loses heartbeats over three consecutive heartbeat intervals, VMware HA initiates the failover action of restarting all affected virtual machines on other hosts.
You can set the virtual machine restart priority in case of a host failure, depending upon the critical nature of the VM.
NOTE: In case of a host failure, HA will RESTART the VMs on a different host, so the VMs' state is interrupted; it is not a live migration.
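The failure-detection timing above (heartbeats every five seconds, failover after three consecutive missed intervals) can be sketched as follows; the numbers come from the text, the code itself is only an illustration:

```python
HEARTBEAT_INTERVAL_S = 5
MISSED_INTERVALS_BEFORE_FAILOVER = 3

def should_failover(last_heartbeat_s: float, now_s: float) -> bool:
    """Declare a host failed once no heartbeat has been received for
    three consecutive five-second intervals."""
    silence = now_s - last_heartbeat_s
    return silence > HEARTBEAT_INTERVAL_S * MISSED_INTERVALS_BEFORE_FAILOVER

print(should_failover(0, 12))  # False: only about two intervals have elapsed
print(should_failover(0, 16))  # True: more than three intervals missed
```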
How does HA work?
How do you set up HA for a VM?
vCenter Server options (VC) -- these options are configured at the vCenter Server level and apply to all HA clusters unless overridden by cluster-specific options in cases where such options exist. If the vCenter Server options are configured using the vCenter Server options manager, a vCenter Server restart may not be required; see the specific options for details. But if these options are configured by adding the option string to the vpxd.cfg file (as a child of the config/vpxd/das tag), a restart is required.
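As a sketch of that vpxd.cfg layout (the option and value below are illustrative; the nesting follows the config/vpxd/das path described above):

```xml
<config>
  <vpxd>
    <das>
      <!-- example only: sets vpxd.das.electionWaitTimeSec -->
      <electionWaitTimeSec>120</electionWaitTimeSec>
    </das>
  </vpxd>
</config>
```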
Cluster options (cluster) -- these options are configured for an individual cluster and if
they impact the behavior of the HA Agent (FDM), they apply to all instances of FDM in
that cluster. These options are configured by using the HA cluster-level advanced
options mechanism, either via the UI or the API. Options with names starting with
"das.config." can also be applied using the "fdm options" mechanism below, but this
is not recommended because the options should be equally applied to all FDM
instances.
fdm options (fdm) -- these options are configured for an individual FDM instance on a
host. They are configured by adding the option to
the /etc/opt/vmware/fdm/fdm.cfg file of the host as a child of the config/fdm tag.
Options set in this way are lost when fdm is uninstalled (for example if the host is
removed from vCenter Server and then re-added) or if the host is managed by Auto
Deploy and is rebooted.
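Analogously, an fdm option added to /etc/opt/vmware/fdm/fdm.cfg as a child of the config/fdm tag would be sketched like this (option name and value are illustrative):

```xml
<config>
  <fdm>
    <!-- example only: overrides this host's election goodness value -->
    <nodeGoodness>100000</nodeGoodness>
  </fdm>
</config>
```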
Common Options
Version | Name | Description | Reconfiguration required | Type of Option

Cluster Configuration

Version: 5.0, 5.1, 5.5
Name: das.allowNetworkX
Description: Allows you to specify the specific management networks used by HA, where X is a number between 0 and 9. For example, if you set a value to Management Network, only the networks associated with port groups having this name are used. Ensure that all hosts are configured with the named port group and that the networks are compatible. In 5.5, this option is ignored if vSAN is enabled for the cluster.
Reconfiguration: Yes. Reconfigure HA on all hosts to have the specification take effect.
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: —
Reconfiguration: Yes. Reconfigure HA on a host to have the config issue for that host cleared.
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: das.heartbeatDsPerHost
Description: HA chooses by default 2 heartbeat datastores for each host in an HA cluster. This option can be used to increase the number to a value in the range of 2 to 5 inclusive.
Reconfiguration: Yes. Reconfigure HA on all hosts in the cluster.
Type of option: Cluster
Admission Control

Version: 5.0, 5.1, 5.5 | Name: das.vmMemoryMinMB | Reconfiguration: No | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.vmCpuMinMHz | Reconfiguration: No | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.slotCpuInMHz | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.slotMemInMB | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.maxvmrestartcount | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.maxvmrestartperiod | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.maxftvmrestartcount | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.isolationAddressX | Type of option: Cluster

Isolation Response

Version: 5.0, 5.1, 5.5 | Name: das.isolationShutdownTimeout | Reconfiguration: No | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.iostatsInterval | Type of option: Cluster

Fault Tolerance

Version: 5.0, 5.1, 5.5 | Name: das.maxFtVmsPerHost | Type of option: Cluster

Logging

Version: 5.0, 5.1, 5.5 | Name: das.config.log.maxFileNum | Type of option: Cluster
Version: 5.0, 5.1, 5.5 | Name: das.config.log.maxFileSize | Description: See das.config.log.maxFileNum. | Reconfiguration: Yes | Type of option: Cluster
Version | Name | Description | Reconfiguration required | Type of Option

Cluster Configuration

Version: 5.0, 5.1, 5.5
Name: vpxd.das.aamMemoryLimit
Reconfiguration: Yes. HA must be reconfigured on all hosts for which the change is required.
Type of option: VC

Version: 5.0, 5.1, 5.5
Name: vpxd.das.electionWaitTimeSec
Reconfiguration: No. Applied the next time an FDM is configured.
Type of option: VC
Name: fdm.nodeGoodness
Description: When a master election is held, the FDMs exchange a goodness value, and the FDM with the largest goodness value is elected master. Ties are broken using the host IDs assigned by vCenter Server. This parameter can be used to override the computed goodness value for a given FDM. To force a specific host to be elected master each time an election is held and the host is active, set this option to a large positive value. This option should not be specified at the cluster level.

Version: 5.0, 5.1, 5.5
Name: vpxd.das.sendProtectListIntervalSec
Reconfiguration: Yes. vCenter Server needs to be restarted after setting this option.
Type of option: VC

Version: 5.5
Name: fdm.cluster.vsanDatastoreLockDelay
Reconfiguration: No. The value is read when the master is elected.
Type of option: fdm
Name: vpxd.das.slotMemMinMB
Description: vCenter Server-wide default value in MB to use for memory reservation if no memory reservation is specified for a virtual machine. Setting the cluster option das.vmMemoryMinMB for a cluster will override this value for that cluster. If this option is not set, a value of zero is assumed unless overridden by das.vmMemoryMinMB.
Type of option: VC

Version: 5.0, 5.1, 5.5
Name: vpxd.das.slotCpuMinMHz
Description: vCenter Server-wide default value in MHz to use for CPU reservation if no CPU reservation is specified for a virtual machine. Setting the cluster option das.vmCPUinMHz for a cluster will override this value for that cluster. If this option is not set, a value of 32 is assumed unless overridden by das.vmCPUinMHz.
Type of option: VC

Detecting Failures

Version: 5.0, 5.1, 5.5
Name: das.config.fdm.hostTimeout
Reconfiguration: Yes. Reconfigure HA on all hosts.
Type of option: Cluster
Version: 5.0, 5.1, 5.5
Name: fdm.deadIcmpPingInterval
Reconfiguration: In ESXi 5.0, after making a change, HA must be reconfigured on all hosts in the cluster. In 5.1 and later, no.
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: das.config.fdm.icmpPingTimeout
Reconfiguration: In ESXi 5.0, after making a change, HA must be reconfigured on all hosts in the cluster. In 5.1 and later, no.
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: vpxd.das.heartbeatPanicMaxTimeout
Reconfiguration: Yes. After setting the option, HA needs to be reconfigured on all hosts in all HA clusters.
Type of option: VC

Version: 5.0, 5.1, 5.5
Name: das.perHostConcurrentFailoversLimit
Description: The number of concurrent failovers a given FDM will have in progress at one time. Setting a larger value will allow more virtual machines to be restarted concurrently, but will also increase the average latency to power each one on, since a greater number adds more stress on the hosts and storage. The default value is 32.
Reconfiguration: No
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: das.config.fdm.ft.cleanupTimeout
Type of option: Cluster
Reporting

Version: 5.0, 5.1, 5.5
Name: das.config.log.outputToFiles
Reconfiguration: Yes
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: das.config.log.directory
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: das.config.fdm.stateLogInterval
Description: Frequency in seconds at which an FDM logs a summary of the cluster state. If not specified, 600s (10 min) is used.
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: das.config.fdm.event.maxMasterEvents
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: das.config.fdm.event.maxSlaveEvents
Reconfiguration: In ESXi 5.0, yes; HA must be reconfigured on all hosts. In 5.1 and later, no.
Type of option: Cluster

Version: 5.0, 5.1, 5.5
Name: vpxd.das.reportNoMasterSec
Description: A vCenter Server parameter that determines how long to wait in seconds before issuing a cluster config issue to report that vCenter Server was unable to locate the HA master agent for the corresponding cluster. If not specified, 120s is used.
Reconfiguration: Yes. vCenter Server needs to be restarted.
Type of option: VC
6. Click OK.
Admission control is enabled and the policy that you chose takes effect.
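A simplified sketch of the slot-based admission-control arithmetic (not VMware's exact algorithm, which also accounts for memory overhead): the slot size is driven by the largest CPU and memory reservations among powered-on VMs, with defaults of 32 MHz and 0 MB when no reservations are set.

```python
def slot_size(vms, default_cpu_mhz=32, default_mem_mb=0):
    """vms: list of (cpu_reservation_mhz, mem_reservation_mb) tuples.
    The slot is sized by the largest reservation in each dimension."""
    cpu = max([r for r, _ in vms] + [default_cpu_mhz])
    mem = max([m for _, m in vms] + [default_mem_mb])
    return cpu, mem

def slots_per_host(host_cpu_mhz, host_mem_mb, slot):
    """A host's capacity expressed as whole slots: the tighter of the
    CPU and memory dimensions wins."""
    cpu, mem = slot
    n_cpu = host_cpu_mhz // cpu
    # a zero memory slot means memory does not constrain the count
    n_mem = host_mem_mb // mem if mem else n_cpu
    return min(n_cpu, n_mem)

# Two VMs with reservations of (500 MHz, 1024 MB) and (1000 MHz, 512 MB):
slot = slot_size([(500, 1024), (1000, 512)])
print(slot)                                # (1000, 1024)
print(slots_per_host(12000, 32768, slot))  # min(12, 32) = 12
```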
FDM (Fault Domain Manager) is responsible for communication between the hosts that are part of the cluster, informing the other members about available resources and VMs, and it manages the election of the master server. vCenter Server is used to deploy and configure FDM agents inside the cluster. The management services on an ESXi host can be restarted with:
./sbin/services.sh restart
To be able to create a cluster with ESXi hosts, a vCenter Server is needed. The most basic package, vSphere Essentials, cannot be used, since its limited licensing does not allow you to create an HA cluster; only vSphere Essentials Plus allows you to do that. The Essentials package basically allows you to manage three hosts from a central location, which is the vCenter Server for Essentials.
Essentials Plus not only allows you to create an HA cluster, but also provides vMotion, Enhanced vMotion, and many other products which are part of the Essentials Plus bundle:
vSphere Replication (VR), which can replicate VMs to another host for DR scenarios
A hardware failure of a physical host triggers the automatic restart of its VMs on another host in the cluster.
VMware vSphere High Availability Cluster Requirements
There are many requirements for VMware HA. The first of them is to have the right VMware vSphere license, as I mentioned above. Here are the other requirements:
Shared Storage - you'll need some kind of shared storage. I say "some kind" since you can use a dedicated storage device (NAS, SAN), or you can use other (software-based) products which emulate shared storage, like StorMagic's SvSAN or VMware's vSphere Storage Appliance for Essentials Plus, or you can also transform
compatibility.
Once you have installed vCenter Server and configured the network on each of your ESXi hosts, you can start creating your cluster. Each of your hosts should have redundancy assured by using at least two physical NICs for each network:
management network
VM network
vMotion network
To make this article shorter, I'm skipping the network configuration now. The installation of vCenter Server on a Windows Server OS is another piece which is not covered in my article, as you can simply use the easy install, or you can deploy the vCenter Server Appliance (vCSA) - read my detailed article: How-to install vCenter Server Appliance (vCSA) and possibly save on Microsoft's licensing.
The vCSA has the advantage that it's an all-in-one prepackaged product, part of the bundle, so there is no need to install the individual components one by one.
Another requirement for creating a VMware HA cluster is a solid DNS architecture, with forward and reverse zones created and working. If not already done, create the necessary records on your DNS server now.
Let's create the datacenter and cluster now.
To do so, fire up the vSphere Client and go to Hosts and Clusters.
Then, position yourself on the Manage tab > right-click the vCenter server > New datacenter.
Once done, you should see a new icon appear. I called my datacenter vladan. -:) Then again, right-click the datacenter you just created, and create a new cluster.
While going through the assistant, you're asked if you want to Turn On DRS and Turn On HA. If you're on the Essentials Plus licensing, you'll get a pop-up saying that the feature isn't available in Essentials Plus, or something like this, as DRS is available only in Enterprise and Enterprise Plus.
If you don't want to activate those options now, you can leave them unchecked and continue the assistant.
You can do exactly the same steps by using the vSphere Windows Client, as configuration of
VMware vSphere (HA) cluster is still the base element of VMware, and the new vSphere Web
Client only brings new functions like vSphere Enhanced vMotion or deployment and
management of vSphere Replication.
So we have a datacenter, and we have a cluster. Now we need to add our ESXi hosts to the cluster. To do so, just follow these steps: right-click (I like right-clicking) the HA cluster we just created > Add Host.
As you can see, my host's FQDN (fully qualified domain name) is esxi5-01.vladan-fr.local.
Also, you'll receive a security prompt before validating the assistant.
The last point is to attach a license. In my case, the license has already been entered in vCenter Server, so I can assign that license to the host. When you first install your hosts and vCenter Server, you have 60 days to enter your license, and here, through this assistant, you can use the Evaluation Mode license. But after 60 days, the VMs will get disconnected from vCenter and HA won't function.
Optionally, to reinforce your company's security, you can prevent logins directly to the host by checking Enable lockdown mode. Users will then be forced to log in only through vCenter.
vMotion is used for migration purposes between multiple hosts: it can migrate any VM to any
host inside the cluster without interrupting its state.
What is storage vMotion?
Storage vMotion is similar to vMotion in the sense that something related to the VM
is moved and there is no downtime for the VM guest and end users. However, with
Storage vMotion the VM guest stays on the server it resides on; it is the virtual disk
for that VM that moves.
With Storage vMotion, you can migrate a virtual machine and its disk files from one
datastore to another while the virtual machine is running.
You can choose to place the virtual machine and all its disks in a single location, or
select separate locations for the virtual machine configuration file and each virtual
disk.
During a migration with Storage vMotion, you can transform virtual disks from
Thick-Provisioned Lazy Zeroed or Thick-Provisioned Eager Zeroed to Thin-Provisioned, or the
reverse.
You can perform live migration of virtual machine disk files across any Fibre Channel,
iSCSI, FCoE or NFS storage.
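The provisioning choices above can be sketched as a tiny helper that builds a relocation request for one disk. This is an illustrative Python sketch, not the vSphere API; the function and field names are invented for clarity.

```python
# Hypothetical helper illustrating the per-disk format choices Storage vMotion
# offers during migration; names are illustrative, not real vSphere API calls.

VALID_FORMATS = {"thin", "thick_lazy_zeroed", "thick_eager_zeroed"}

def relocate_disk_spec(disk_name, target_datastore, target_format="thin"):
    """Build a simple relocation request for one virtual disk."""
    if target_format not in VALID_FORMATS:
        raise ValueError("unknown provisioning format: %s" % target_format)
    return {
        "disk": disk_name,
        "datastore": target_datastore,
        "format": target_format,  # transform applied while the VM keeps running
    }

spec = relocate_disk_spec("vm_name.vmdk", "datastore2", "thick_eager_zeroed")
print(spec["format"])  # thick_eager_zeroed
```

A real implementation would hand an equivalent spec to the vSphere relocate call; the point here is that the target format is chosen per disk and applied during the live migration.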
-c clustername
Prints information only about objects in the cluster clustername.
-help
Here, DRS stands for Distributed Resource Scheduler, which dynamically balances
resources across the various hosts in a cluster or resource pool.
VMware DRS allows users to define the rules and policies that decide how virtual
machines share resources and how these resources are prioritized among multiple
virtual machines.
7. Click OK.
Licensing vCenter Server
To license vCenter Server 5.x:
1. Log in to the vSphere Client.
2. Click Home.
3. Under the Administration section, click vCenter Server settings.
4. Select Assign a new license key to this vCenter Server and click OK.
5. Enter the license key for the vCenter Server and, if necessary, include labels.
6. Click Next and Finish.
How to configure a cluster?
How to configure network load balancing?
http://kb.vmware.com/selfservice/microsites/search.do?
language=en_US&cmd=displayKC&externalId=1006778
NIC teaming in ESXi and ESX
http://kb.vmware.com/selfservice/microsites/search.do?
language=en_US&cmd=displayKC&externalId=1004088
VMware path selection?
To change the default path selection policy for any new storage for a Storage Array Type
Plug-in (SATP):
In ESXi 5.x, to view the current default path selection policy, run one of these commands:
# esxcfg-info | grep -A1 "Default Path Selection Policy"
# esxcli storage nmp satp list
Then, to change the default PSP for a given SATP:
# esxcli storage nmp satp set --default-psp=policy --satp=your_SATP_name
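If you capture the output of the list command, a small parser can map each SATP to its default PSP. This sketch assumes the typical two leading columns (SATP name, default PSP); the exact column layout can vary by build, so treat it as illustrative, and the sample text below is invented for the example.

```python
# Hypothetical parser for `esxcli storage nmp satp list` output, assuming the
# usual two leading columns (SATP name, default PSP).

def default_psps(satp_list_output):
    """Map SATP name -> default PSP, skipping the header and separator rows."""
    psps = {}
    for line in satp_list_output.splitlines()[2:]:
        parts = line.split()
        if len(parts) >= 2:
            psps[parts[0]] = parts[1]
    return psps

# Illustrative sample, shaped like typical esxcli table output:
sample = """Name           Default PSP   Description
-------------  ------------  -----------
VMW_SATP_ALUA  VMW_PSP_MRU   Supports ALUA arrays
VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices"""

print(default_psps(sample)["VMW_SATP_ALUA"])  # VMW_PSP_MRU
```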
Page Sharing: ESXi is able to share memory pages between virtual machines,
eliminating redundant pages.
Ballooning: ESXi can use ballooning to force a VM to give up memory pages that the
guest OS considers least valuable. VMtools is required as it includes the vmmemctl
module which makes ballooning possible. The guest OS must also be configured with
sufficient swap space.
Memory Compression: If there is a danger of host level swapping, then ESXi will
use memory compression to reduce the number of pages that it needs to swap out.
Swap to Host Cache: If compression doesn't reduce memory usage sufficiently,
ESXi will reclaim memory by swapping memory pages to the host cache. Host cache is
stored on SSD, so this is faster than regular swapping (where the files are generally
stored on non-SSD devices).
Regular Swapping: If there is no host cache configured, or it is full, ESXi will swap
out pages to the virtual machine swap file. If this occurs, there is likely to be severe
performance degradation.
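The escalation above can be summarized as a toy decision ladder. This is a simplification for illustration, not VMkernel logic; the thresholds are the approximate free-memory levels at which each technique kicks in.

```python
# Sketch of the reclamation order described above (hypothetical simplification):
# page sharing runs continuously; the remaining techniques escalate as free
# memory drops.

def reclaim_technique(free_pct, ballooning_ok=True, host_cache=True):
    """Return the technique ESXi would lean on at a given free-memory level."""
    if free_pct >= 6:
        return "page sharing only"
    if free_pct >= 4 and ballooning_ok:
        return "ballooning"          # needs vmmemctl from VMware Tools
    if free_pct >= 2:
        return "memory compression"
    if host_cache:
        return "swap to host cache"  # SSD-backed, faster than regular swap
    return "regular swapping"        # likely severe performance degradation

print(reclaim_technique(5))  # ballooning
```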
Bear in mind that if you choose to store the swapfiles in a specified datastore, rather than
with the virtual machines, then vMotion performance can be degraded. This applies, for
example, if you choose to store swapfiles on local datastores.
After updating the cluster settings you need to configure the host setting. This is found
under the Configuration tab for the host. Click on Virtual Machine Swapfile Location:
Again, by default the swap files are created in the virtual machine's working directory. To
configure a specific swapfile location, click Edit:
After choosing the datastore to store swap files on, click OK. This setting is specific to
the host, so you will need to make the change on all hosts in the cluster.
The swap file location can be overridden on a per-virtual-machine basis by setting the
option in the virtual machine's settings:
After clicking OK and refreshing the storage, a number of .vswp files will be created on the
datastore:
Mem.ShareScanTime
Mem.ShareScanGHz
You can also disable memory sharing on a per-virtual machine basis by setting the following
setting to false:
sched.mem.pshare.enable
This setting is found under Configuration Parameters in the advanced settings for a virtual
machine:
Prior to vSphere 5 this value was always set at 6%; however, on hosts with a large amount
of memory this meant that a lot of RAM was being unnecessarily reserved for VMkernel
tasks. For example, on a host with 256 GB of RAM, more than 15 GB would be reserved.
With vSphere 5, this value is more of a sliding scale. For example, rather than always
keeping 6% free, for hosts with 4-12 gigabytes of RAM, 4% is kept free for the VMkernel,
and for hosts with more than 12 GB of RAM, 2% of RAM is kept available.
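As a rough model of that sliding scale (simplified to the breakpoints mentioned above; the real Mem.MinFreePct calculation in vSphere 5 is applied per memory range, not per host size):

```python
# Simplified model of the vSphere 5 free-memory percentage described above.

def min_free_pct(host_ram_gb):
    """Approximate percentage of RAM kept free for the VMkernel."""
    if host_ram_gb <= 4:
        return 6.0
    if host_ram_gb <= 12:
        return 4.0
    return 2.0

for gb in (4, 8, 256):
    reserved = gb * min_free_pct(gb) / 100
    print("%d GB host -> %.1f%% kept free (%.2f GB)" % (gb, min_free_pct(gb), reserved))
```

For a 256 GB host this keeps about 5 GB free, versus over 15 GB under the old flat 6% rule.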
Whatever the value is set as defines when the host begins to reclaim memory using
ballooning or swapping. Within the free memory, there are a number of thresholds at which
the host will use different methods to reclaim memory. Using figures from vSphere 4 the
following will take place:
6 percent free (High state)
4 percent free (Soft state)
2 percent free (Hard state)
1 percent free (Low state)
Thin provisioning?
Installing ESXi server in a blade server?
Installing ESXi - boot from SAN
http://kb.vmware.com/selfservice/microsites/search.do?
language=en_US&cmd=displayKC&externalId=2052329
What is SRM?
VMware vCenter Site Recovery Manager 5.5 adds the following new features and
improvements.
Use Storage DRS and Storage vMotion on sites that SRM protects:
o vSphere Replication supports movement of virtual machines by Storage DRS
and Storage vMotion on the protected site. See Using SRM with vSphere
Replication on Sites with Storage DRS or Storage vMotion.
o Array-based replication supports movement of virtual machines by Storage
DRS and Storage vMotion within a consistency group. See Using SRM with
Array-Based Replication on Sites with Storage DRS or Storage vMotion.
Preserve multiple point-in-time (PIT) images of virtual machines that are protected
with vSphere Replication. See Replicating a Virtual Machine and Enabling Multiple
Point in Time Instances.
Protect virtual machines that reside on VMware vSphere Flash Read Cache storage.
vSphere Flash Read Cache is disabled on virtual machines after recovery.
VMware Fault Tolerance, when enabled for a virtual machine, creates a live shadow
instance of the primary, running on another physical server.
The two instances are kept in virtual lockstep with each other using VMware
vLockstep technology
The two virtual machines play the exact same set of events, because they get the
exact same set of inputs at any given time.
The two virtual machines constantly heartbeat against each other and if either virtual
machine instance loses the heartbeat, the other takes over immediately. The
heartbeats are very frequent, with millisecond intervals, making the failover
instantaneous with no loss of data or state.
VMware Fault Tolerance requires a dedicated network connection, separate from the
VMware VMotion network, between the two physical servers.
VMware FT instantly moves VMs to a new host via vLockstep, which keeps a secondary
VM in sync with the primary, ready to take over at any second, like a Broadway
understudy. The VM's instructions and instruction sequence are the actor's lines, which
pass to the understudy on a dedicated server backbone network. Heartbeats ping
between the star and understudy on this backbone as well, for instantaneous detection of
a failure.
How and when to use VMware FT
So your company's IT resources are mission-critical, and unplanned downtime is out of
the question. Ramp up fault tolerance tools and you're done right? Not so fast. VMware FT
has stringent hardware requirements to take into account when requisitioning server
hardware. Before you plan a fault-tolerant virtualized environment, check out your
options for when and where to use FT.
How do I turn on FT?
The feature is enabled on a per virtual machine basis. Instructions for enabling Fault
Tolerance can be found in the Turning on Fault Tolerance for Virtual Machines section of
the vSphere Availability Guide for your version of ESXi/ESX.
What happens when I turn on Fault Tolerance?
In very general terms, a second virtual machine is created to work in tandem with the virtual
machine you have enabled Fault Tolerance on. This virtual machine resides on a different
host in the cluster, and runs in virtual lockstep with the primary virtual machine. When a
failure is detected, the second virtual machine takes the place of the first one with the least
possible interruption of service. More specific information about how this is achieved can be
found in the Protecting Mission-Critical Workloads with VMware Fault Tolerance whitepaper.
Why can't I turn Fault Tolerance on?
VMware Fault Tolerance can be enabled on any virtual machine that resides in a cluster that
meets the necessary requirements. If you have difficulty enabling Fault Tolerance for a
specific virtual machine, see The Turn on Fault Tolerance option is disabled (1010631).
How do I turn Fault Tolerance off?
For instructions on disabling Fault Tolerance, see Disabling or Turning Off VMware FT
(1008026).
How do I tell if my environment is ready for Fault Tolerance?
The VMware SiteSurvey Tool is used to check your environment for compliance with VMware
Fault Tolerance. It can be downloaded from the VMware Shared Utilities page.
Where do I find the product's website?
VMware has a website for Fault Tolerance on the VMware vSphere page.
What happens during a failure?
When a host running the Primary virtual machine fails, a transparent failover occurs to the
corresponding Secondary virtual machine. During this failover, there is no data loss or
noticeable service interruption. In addition, VMware HA automatically restores redundancy
by restarting a new Secondary virtual machine on another host. Similarly, if the host running
the Secondary virtual machine fails, VMware HA starts a new Secondary virtual machine on
a different host. In either case there is no noticeable outage.
What is the logging time delay between the Primary and Secondary Fault Tolerance virtual
machines?
The actual delay is based on the network latency between the Primary and
Secondary. vLockstep executes the same instructions on the Primary and Secondary, but
because this happens on different hosts, there could be a small latency, but no loss of state.
This is typically less than 1 millisecond (ms). Fault Tolerance includes synchronization to
ensure that the Primary and Secondary are synchronized.
In a cluster with more than 3 hosts, can you tell Fault Tolerance where to put the Fault
Tolerance virtual machine, or does it choose on its own?
You can place the original (or Primary virtual machine). You have full control with DRS or
vMotion to assign it to any node. The placement of the Secondary, when created, is
automatic based on the available hosts. But when the Secondary is created and placed, you
can vMotion it to the preferred host.
What happens if the host containing the Primary virtual machine comes back online (after a
node failure)?
This node is put back in the pool of available hosts. There is no attempt to start or migrate
the Primary to that host.
Is the failover from the Primary virtual machine to the Secondary virtual machine dynamic or
does Fault Tolerance restart a virtual machine?
The failover from the Primary to Secondary virtual machine is dynamic with the Secondary
continuing execution from the exact point where the Primary left off. It happens
automatically with no data loss, no downtime, and little delay. Clients see no interruption.
After the dynamic failover to the Secondary virtual machine, it becomes the new Primary
virtual machine. A new Secondary virtual machine is spawned automatically.
Where are Fault Tolerance failover events logged?
All failover events are logged by vCenter Server.
I encountered an error message that I can't find in the Knowledge Base. Where else should I
check?
The vSphere Availability Guide contains a list of known errors in the Fault
Tolerance Error Messages.
Does Fault Tolerance support Intel Hyper-Threading Technology?
Yes, Fault Tolerance does support Intel Hyper-Threading Technology on systems that have it
enabled. Enabling or disabling Hyper-Threading has no impact on Fault Tolerance.
What happens if vCenter Server is offline when a failover event occurs?
When Fault Tolerance is configured for a virtual machine, vCenter Server need not be online
for FT to work. Even if vCenter Server is offline, failover still occurs from the Primary to the
Secondary virtual machine. Additionally, the spawning of a new Secondary virtual machine
also occurs without vCenter Server.
How many virtual CPUs can I use on a Fault Tolerant virtual machine ?
vCenter Server 4.x and vCenter Server 5.x support 1 virtual CPU per protected virtual
machine.
New product features in VMware FT and HA
With a major overhaul to HA in vSphere 5 and murmurs of a soon-to-be-released new
feature, we share some key points to know about VMware FT and HA road maps.
What's new? Faster failover in VMware HA, but no FT for SMP
VMware is planning a new high-availability design for release in 2013, called Virtual Machine
Component Protection. It improves failover by choosing a VM within a host to vMotion
according to specific failover conditions.
Unlike HA, VMware FT uses synchronous replication to prevent any service interruption in the
event of a VM failure. Mission-critical applications need fault tolerance, but despite user
interest, FT for symmetric multiprocessing systems (SMP) seems stuck in a VMware preview
purgatory.
High availability in a heartbeat
VMware instituted new intelligence for High Availability in vSphere 5. If the master cluster
becomes unavailable or orphaned from the network, an election process takes over to
prevent false-positive failovers. If a host becomes orphaned from the cluster in vSphere 5's
HA, the storage network is available as backup. The admin can choose their heartbeat data
stores in the HA Clustering dialog boxes.
Goodbye Legato, hello Fault Domain Manager
VMware also revamped the HA architecture in vSphere 5. Fault Domain Manager (FDM) took
over for Legato Automated Availability Manager software, which was frustratingly complex.
Now, admins have one master server, with all other servers in the HA cluster waiting in the
wings to help in the event of a failure. If you're switching to vSphere 5 from an older version,
make sure you have at least two shared data stores between all hosts in the HA cluster.
What other changes can you expect? Heartbeats, simpler log and configuration files, and
installs in under a minute, thanks to FDM.
The nitty gritty of vSphere 5's HA and FT setup
With the move from Legato to FDM comes major HA architecture changes, even if the "look
and feel" will be familiar to legacy users. Learn the responsibilities of Master and Slave hosts
in a cluster. This tip also covers important tips for using FT now that it is properly compatible
with VMware's Distributed Resource Scheduler (DRS).
Difference between HA and DRS?
Difference between FT and SRM ?
Cluster maximum?
What is a snapshot?
A snapshot preserves the state and data of a virtual machine at a specific point in time.
The state includes the virtual machine's power state (for example, powered on,
powered off, suspended).
The data includes all of the files that make up the virtual machine, including
disks, memory, and other devices such as virtual network interface cards.
A virtual machine provides several operations for creating and managing snapshots and
snapshot chains. These operations let you create snapshots, revert to any snapshot in the
chain, and remove snapshots. You can create extensive snapshot trees.
In VMware Infrastructure 3 and vSphere 4.x, the virtual machine snapshot delete
operation combines the consolidation of the data and the deletion of the file. This caused
issues when the snapshot files were removed from the Snapshot Manager but the
consolidation failed. This left the VM still running on snapshots, and the user might not
notice until the datastore was full.
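One quick way to spot this condition off-host is to look for leftover delta files in the VM's directory, the same check an administrator would run with ls on the datastore. A minimal sketch:

```python
# Sketch: detect a VM still running on snapshots by looking for delta files in
# its directory, mirroring the `ls *delta*` check used on the host.
import os

def has_orphaned_snapshots(vm_dir):
    """True if any -delta.vmdk files are present in the VM's directory."""
    return any(name.endswith("-delta.vmdk") for name in os.listdir(vm_dir))
```

For example, running it against /vmfs/volumes/datastore_name/vm_name (a placeholder path) would flag a VM whose snapshots failed to consolidate.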
In vSphere 4.x, an alarm can be created to indicate if a virtual machine was running in
snapshot mode. For more information, see Configuring VMware vCenter Server to send
alarms when virtual machines are running from snapshots (1018029).
In vSphere 5.0, enhancements have been made to the snapshot removal. In vSphere 5.0,
you are informed via the UI if the consolidation part of a RemoveSnapshot or
RemoveAllSnapshots operation has failed. A new option, Consolidate, is available via the
Snapshot menu to restart the consolidation.
http://kb.vmware.com/selfservice/microsites/search.do?
language=en_US&cmd=displayKC&externalId=1015180
How to change the default snapshot location in VMware ESXi 5?
By default, the snapshots taken for any VM are stored with their parent in the
same directory or storage. Sometimes you may run out of space and not be able
to take any more snapshots, in which case you can use some other location for
storing snapshots.
But how will you change the default location of all the snapshots that will be taken for
any VM?
These are the required steps:
NOTE: Please ensure that the VM you are working on is powered OFF.
Right-click the VM and select Edit Settings.
Click Options from the top tab, select General, and open the Configuration
Parameters.
From the Direct Console User Interface (DCUI). For more information, see About the
Direct Console ESXi Interface in the vSphere 5.5 Installation and Setup Guide.
From the ESXi Shell. For more information, see the Log In to the ESXi Shell section in
the vSphere 5.5 Installation and Setup Guide.
Within an extracted vm-support log bundle. For more information, see Export System
Log Files in the vSphere Monitoring and Performance Guide or Collecting diagnostic
information for VMware ESX/ESXi using the vm-support command (1010705).
From the vSphere Web Client. For more information, see Viewing Log Files with the
Log Browser in the vSphere Web Client in the vSphere Monitoring and Performance
Guide.
/var/log/hostd.log: Host management service logs, including virtual machine and host
Task and Events, communication with the vSphere Client and vCenter Server vpxa
agent, and SDK connections.
/var/log/shell.log: ESXi Shell usage logs, including enable/disable and every command
entered. For more information, see vSphere 5.5 Command-Line
Documentation and Auditing ESXi Shell logins and commands in ESXi 5.x (2004810).
/var/log/boot.gz: A compressed file that contains boot log information; it can be
read using zcat /var/log/boot.gz | more.
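If you have copied boot.gz off the host, the same zcat-style read can be done in a few lines of Python (illustrative; any gzip-capable tool works):

```python
# Python analogue of `zcat /var/log/boot.gz | more`: read the first lines of a
# gzip-compressed log file.
import gzip
import itertools

def read_gz_log(path, max_lines=20):
    """Return up to max_lines decoded lines from a gzip-compressed log."""
    with gzip.open(path, "rt", errors="replace") as f:
        return [line.rstrip("\n") for line in itertools.islice(f, max_lines)]
```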
Note: For information on sending logs to another location (such as a datastore or remote
syslog server), see Configuring syslog on ESXi 5.0 (2003322).
Logs from vCenter Server Components on ESXi 5.1 and 5.5
When an ESXi 5.1 / 5.5 host is managed by vCenter Server 5.1 and 5.5, two components are
installed, each with its own logs:
/var/log/fdm.log: vSphere High Availability logs, produced by the fdm service. For
more information, see the vSphere HA Security section of the vSphere Availability
Guide.
Check if LACP is enabled on DVS for version 5.x and above. For more
information, see vSphere 5.0 Networking Guide.
If the issue is not resolved and you have to restart all the services that are
part of the services.sh script, schedule downtime before proceeding with the script.
Note: For more information about restarting the management service on an
ESXi host, see Service mgmt-vmware restart may not restart hostd in ESX/ESXi
(1005566).
1. Log in to your ESX host as root from either an SSH session or directly from the
console.
2. Run this command:
service mgmt-vmware restart
Caution: Ensure Automatic Startup/Shutdown of virtual machines is disabled before
running this command or you risk rebooting the virtual machines. For more
information, see Restarting hostd (mgmt-vmware) on ESX hosts restarts hosted
3. Verify that VMkernel networking configuration is valid. For more information, see
ESX/ESXi power on error: Unable to set VMkernel gateway as there are no VMkernel
interfaces on the same network (1002662) .
4. Verify that the virtual machine is not configured to use a device that is not valid on
the target host. For more information, see
Troubleshooting migration compatibility
error: Device is a connected device with a remote backing (1003780).
5. If Jumbo Frames are enabled (MTU of 9000), ensure that vmkping is run with an
8972-byte payload (9000 - 8 bytes ICMP header - 20 bytes IP header = 8972), like
vmkping -d -s 8972 <destinationIPaddress>. You may experience problems if the trunk
between two physical switches has been misconfigured to an MTU of 1500.
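The 8972-byte figure falls out of simple header arithmetic, which is easy to verify:

```python
# The arithmetic behind `vmkping -d -s 8972`: the ICMP payload must leave room
# for the IP (20-byte) and ICMP (8-byte) headers inside the MTU.

IP_HEADER = 20
ICMP_HEADER = 8

def vmkping_payload(mtu):
    """Largest non-fragmenting ping payload for a given MTU."""
    return mtu - IP_HEADER - ICMP_HEADER

print(vmkping_payload(9000))  # 8972
print(vmkping_payload(1500))  # 1472
```

The same formula gives 1472 for a standard 1500-byte MTU, which is the classic non-fragmenting ping size on ordinary networks.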
6. Verify that Name Resolution is valid on the host. For more information, see Identifying
issues with and setting up name resolution on ESX/ESXi Server (1003735).
7. Verify that Console OS network connectivity exists. For more information, see Testing
network connectivity with the ping command (1003486).
8. Verify if the ESXi/ESX host can be reconnected or if reconnecting the ESX/ESXi host
resolves the issue. For more information, see KB article Changing an ESXi or ESX
host's connection status in vCenter Server (1003480)
9. Verify that the required disk space is available. For more information,
see Investigating disk space on an ESX or ESXi host (1003564) .
10. Verify that time is synchronized across environment. For more information,
see Verifying time synchronization across an ESX/ESXi host environment (1003736).
11. Verify that valid limits are set for the virtual machine being vMotioned. For more
information, see VMware vMotion fails if target host does not meet reservation
requirements (1003791).
12. Verify that hostd is not spiking the console. For more information, see Checking for
resource starvation of the ESX Service Console (1003496).
13. This issue may be caused by SAN configuration. Specifically, this issue may occur if
zoning is set up differently on different servers in the same cluster.
14. Verify and ensure that the log.rotateSize parameter in the virtual machine's
configuration file is not set to a very low value. For more information, see vMotion
fails at 10% with the error: Operation timed out (2007343).
Note: If the issue still exists after trying the steps in this article:
Gather the VMware Support Script Data. For more information, see Collecting
diagnostic information in a VMware Virtual Infrastructure Environment (1003689).
File a support request with VMware Support and note this KB Article ID in the problem
description. For more information, see How to Submit a Support Request.
Static Binding
Dynamic Binding
Ephemeral Binding
Static binding
When you connect a virtual machine to a port group configured with static binding, a port is
immediately assigned and reserved for it, guaranteeing connectivity at all times. The port is
disconnected only when the virtual machine is removed from the port group. You can
connect a virtual machine to a static-binding port group only through vCenter Server.
Note: Static binding is the default setting, recommended for general use.
Dynamic binding
In a port group configured with dynamic binding, a port is assigned to a virtual machine only
when the virtual machine is powered on and its NIC is in a connected state. The port is
disconnected when the virtual machine is powered off or the virtual machine's NIC is
disconnected. Virtual machines connected to a port group configured with dynamic binding
must be powered on and off through vCenter.
Dynamic binding can be used in environments where you have more virtual machines than
available ports, but do not plan to have a greater number of virtual machines active than
you have available ports. For example, if you have 300 virtual machines and 100 ports, but
never have more than 90 virtual machines active at one time, dynamic binding would be
appropriate for your port group.
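That sizing rule is easy to express as a quick check (illustrative only):

```python
# Capacity rule from the example above: dynamic binding fits when the peak
# number of powered-on VMs stays within the port count, even though the total
# number of VMs exceeds it.

def dynamic_binding_ok(total_vms, ports, peak_active):
    """True when dynamic binding suits this port group's sizing."""
    return total_vms > ports and peak_active <= ports

print(dynamic_binding_ok(300, 100, 90))  # True
```

With 300 VMs, 100 ports, and never more than 90 active, the check passes, matching the worked example; if the total VM count fits within the port count, static binding is the simpler choice anyway.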
Note: Dynamic binding is deprecated from ESXi 5.0, but this option is still available in
vSphere Client. It is strongly recommended to use Static Binding for better performance.
Ephemeral binding
In a port group configured with ephemeral binding, a port is created and assigned to a
virtual machine by the host when the virtual machine is powered on and its NIC is in a
connected state. The port is deleted when the virtual machine is powered off or the virtual
machine's NIC is disconnected.
You can assign a virtual machine to a distributed port group with ephemeral port binding on
ESX/ESXi and vCenter, giving you the flexibility to manage virtual machine connections
through the host when vCenter is down. Although only ephemeral binding allows you to
modify virtual machine network connections when vCenter is down, network traffic is
unaffected by vCenter failure regardless of port binding type.
http://kb.vmware.com/selfservice/microsites/search.do?
language=en_US&cmd=displayKC&externalId=1022312
How to change the root password on an ESXi host?
1. Log into the ESXi/ESX host service console, either via SSH or the physical console.
2. If you did not log in as root, you must acquire root privileges by running the
command:
su
Enter the current root password when prompted.
3. Change the root password by executing:
passwd root
4. Enter the new root password, and press Enter. Enter the password a second time to
verify. You are warned about, but not prevented from using, bad passwords.
If you make a mistake when typing or retyping the new root password, you must start
over. For example:
# passwd root
Changing password for user root.
New UNIX password:
Retype new UNIX password:
Sorry, passwords do not match
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
#
Not able to access the server via PuTTY?
Symptoms
When connecting to the ESXi host using PuTTY, you see the error:
Network error: Connection Refused
You have enabled Tech Support Mode. For more information, see Using Tech Support
Mode in ESXi 4.1 and ESXi 5.0 (1017910).
Cause
This issue may occur if the /etc/inetd.conf file is empty or does not contain the correct
settings for remote shell access and VMware authentication daemon.
Note: In ESXi 5.0, the inetd.conf file is located at /var/run/.
Resolution
To resolve this issue:
1. Connect to the ESXi console directly using a remote console (KVM) such as iLO,
iLOM, DRAC, RSA, or IP KVM, press ALT+F1, and then log in as root.
2. Open the inetd.conf file using a text editor.
To open the file using the vi editor, run this command:
# vi /etc/inetd.conf
3. Ensure that the contents of the /etc/inetd.conf file are similar to:
# Internet server configuration database
# Remote shell access
ssh stream tcp nowait root /sbin/dropbearmulti dropbear ++min=0,swap,group=shell -i -K60
ssh stream tcp6 nowait root /sbin/dropbearmulti dropbear ++min=0,swap,group=shell -i -K60
In ESXi 5.0, the contents under Remote shell access appear similar to:
ssh stream tcp nowait root /usr/lib/vmware/openssh/bin/sshd sshd ++swap,group=host/vim/vimuser/terminal/ssh -i
ssh stream tcp6 nowait root /usr/lib/vmware/openssh/bin/sshd sshd ++swap,group=host/vim/vimuser/terminal/ssh -i
# VMware authentication daemon
authd stream tcp nowait root /sbin/authd authd
authd stream tcp6 nowait root /sbin/authd authd
5. Run this command to restart the SSH daemon for the changes to take effect:
# /etc/init.d/TSM-SSH restart
Note: In ESXi 5.x, run this command:
/etc/init.d/SSH restart
VMware Update Manager process?
How does P2V work?
http://www.experts-exchange.com/Software/VMWare/A_12358-HOW-TO-P2V-V2V-for-FREEVMware-vCenter-Converter-Standalone-5-5.html
VMware Converter port numbers?
VMware vCenter Converter fails if one or more required ports are blocked. Follow the section
that matches your conversion scenario.
In this article, the following terms are used:
Source computer
Converter server
Converter client
VirtualCenter
Fileshare path
Standalone virtual machine
Helper virtual machine: When converting a powered-on Linux operating system (P2V), this is
the target virtual machine that is used temporarily for the purpose of copying files from
the source computer. It uses the TCP/IP information that is entered in the Converter wizard
for the target virtual machine. Make sure that this IP address can communicate directly
with the source computer.
Notes:
If you perform a corrective action, determine whether the problems initially encountered
are still being experienced.
To test TCP port connectivity use the telnet command. For more information,
see Testing port connectivity with Telnet (1003487).
To test UDP port connectivity from Linux or Mac OS, use the traceroute command. For
more information, see the traceroute man page.
To test UDP port connectivity from Windows use the Portqry utility. For more
information, see the Microsoft Knowledge Base article 310099.
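As a scriptable stand-in for the telnet test, a plain TCP connect tells you whether a port is reachable. A minimal sketch (the host name in the usage example is hypothetical):

```python
# Minimal stand-in for the telnet port test mentioned above: attempt a TCP
# connection and report whether the port accepted it.
import socket

def tcp_port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, tcp_port_open("converter-server.example.com", 443) would verify the Converter client to Converter server port from the table below.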
Note: These links were correct as of March 15, 2009. If you find a link is broken,
provide feedback and a VMware employee will update the link.
Source -> Destination: Port(s)
Converter server -> Source computer: 445, 139, 9089 or 9090
Converter client -> Converter server: 443
Source computer -> ESX/ESXi: 443, 902
Converter server -> Source computer (Linux P2V): 22
Converter server -> VirtualCenter: 443, 902
Converter server -> ESX/ESXi: 443, 902, 903
Converter server -> Helper virtual machine: 443
Helper virtual machine -> Source computer: 22
Converter server -> Fileshare path: 445, 139
1. Log into the VMware ESX/ESXi host as the root user. Verify that the virtual machine
does not have any snapshots by going into the virtual machine's directory and
looking for Delta files. Run the command:
#ls -lah /vmfs/volumes/datastore_name/vm_name/*delta*
-rw------- 1 root root 1.8G Oct 10 10:58 vm_name-000001-delta.vmdk
Note: For more information on logging into the ESX/ESXi, see the following:
For more information on VMware ESXi Technical Support Mode, see Tech Support
Mode for Emergency Support (1003677).
For more information on VMware ESXi 4.1 and ESXi 5.0 Technical Support
Mode, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
2. If the virtual machine does have snapshots, commit them using these commands:
# vmware-cmd -l
/vmfs/volumes/datastore_name/vm_name/vm_name.vmx
# vmware-cmd /vmfs/volumes/datastore_name/vm_name/vm_name.vmx removesnapshots
removesnapshots() = 1
Note: For committing snapshots on an ESXi 5.1 or later host, see Committing
snapshots on ESXi host from command line (1026380).
To expand the VMDK using the vmkfstools -X command, run the command:
#vmkfstools -X <New Disk Size> <VMDK to extend>
#vmkfstools -X 30G /vmfs/volumes/datastore_name/vm_name/vm_name.vmdk
Note: Ensure that you point to the vm_name.vmdk, and not to the vm_name-flat.vmdk. Using vmkfstools -X is the only option to expand an IDE virtual disk.
6. To extend the C: partition, find a helper virtual machine and attach the disk from the
first virtual machine to the helper.
To add an existing virtual disk to the helper virtual machine:
a. Go to the Edit Settings menu of the virtual machine.
b. Click Add > Hard Disk > Use Existing Virtual Disk.
c. Navigate to the location of the disk and select to add it into the virtual
machine.
Note: A helper virtual machine is a virtual machine that runs the same
operating system as the virtual machine whose disk you attach.
3. In Windows 2008, click Start > Computer Management > Disk Manager, right-click
on the partition and select Extend Volume. For more information, see the
Microsoft Knowledge Base article 325590.
Note: The preceding links were correct as of March 14, 2013. If you find a link is
broken, provide feedback and a VMware employee will update the link.
5. Power off and detach the disk from the helper virtual machine. Keep all default
settings and do not delete the VMDK from the disk.
6. Power on the first virtual machine and verify the disk size change.
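The safety checks in the procedure above can be strung together as a small pre-flight sketch. The paths are hypothetical and the final vmkfstools command is only echoed, not executed:

```shell
#!/bin/bash
# Pre-flight sketch before growing a virtual disk with vmkfstools -X:
# refuse the -flat file, and refuse to grow while snapshot deltas exist.
preflight() {
  local vmdir="/vmfs/volumes/datastore_name/vm_name"   # hypothetical path
  local vmdk="$vmdir/vm_name.vmdk"
  case "$vmdk" in
    *-flat.vmdk)
      echo "ERROR: point vmkfstools -X at the descriptor, not the -flat file"
      return 1 ;;
  esac
  if ls "$vmdir"/*-delta.vmdk >/dev/null 2>&1; then
    echo "ERROR: snapshot delta files present; commit snapshots first"
    return 1
  fi
  echo "vmkfstools -X 30G $vmdk"   # the real expand step would run here
}
preflight
```

On an actual host you would replace the echo with the real vmkfstools invocation after the checks pass.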
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007266
Increase the datastore size?
Symptoms
When you try to grow or expand a VMFS volume, you experience these symptoms:
One or more storage devices have been increased in capacity from the storage array.
When you click Increase, a device is listed but does not have Expandable = Yes.
When you select the volume and click Next, you see the error:
failed to update disk partition information
Purpose
This article provides the steps to successfully increase the capacity of (grow or extend) a VMFS
datastore.
Note: Increasing the size of the backing storage device on the storage array is outside the
scope of this article, and is a prerequisite to modifying the size of the VMFS datastore
filesystem. VMware vSphere cannot modify the size of a LUN or other storage device on the
array. Modifying the size of an array device must be done using the storage array vendor's
management tools. For more information, contact the storage array vendor.
Note: This method only works for non-Local non-Boot devices. For Local VMFS datastores,
see Growing a local datastore from the command-line in vSphere ESX 4.x
(1009125) or Growing a local datastore from the command line in vSphere ESXi 4.x and 5.x
(2002461).
Resolution
To increase the capacity of a VMFS datastore:
1. In vCenter Server, select the Datastores view.
2. Select the datastore you want to grow and identify the host that has more virtual
machines running on it.
3. Open another vSphere client that connects directly to the ESX host.
4. Go to Configuration > Storage adapters and perform a rescan. For more
information, see Performing a rescan of the storage on an ESX/ESXi host (1003988).
5. Go to Configuration > Storage, click the datastore that you want to grow, and
click Properties.
6. Ensure that the new size of the device is listed in the Extent Device list. If the
increased size is not reflected, review the changes on the storage array and rescan
again.
7. Click Increase.
8. Select a device from the list of storage devices for which the Expandable column is
Yes and click Next.
9. Set the capacity for the extent. The default capacity for the extent is the entire free
space on the storage device. VMware recommends you to use the default setting.
10. Click Next.
11. After the process completes, go to vCenter Server, right-click the cluster that sees
the expanded datastore, and click Rescan for Datastores. For more information,
see Performing a rescan of the storage on an ESX/ESXi host (1003988).
12. If there are other hosts that see the expanded datastore, perform a rescan on these
hosts also.
Note: If the LUN experiences a high I/O throughput when growing the VMFS, the ESX host
may not be able to complete the operation. In such a case, repeat the process during
non-business hours, when backup operations are not running. If the problem persists, power
off some of the virtual machines residing on the LUN and then retry.
For more information, see Adding an extent to a VMFS volume fails after increasing local
storage space (1002821).
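The Expandable column in step 8 boils down to whether the device now reports more capacity than its VMFS extent occupies. A toy illustration with made-up numbers:

```shell
#!/bin/sh
# Toy model of the "Expandable" check: compare reported device capacity
# against the size of the existing VMFS extent. All numbers are made up.
device_capacity_gb=500   # LUN size after being grown on the array
extent_size_gb=300       # current VMFS extent size

if [ "$device_capacity_gb" -gt "$extent_size_gb" ]; then
  verdict="Expandable: Yes ($((device_capacity_gb - extent_size_gb)) GB free)"
else
  verdict="Expandable: No (rescan the host and re-check the array)"
fi
echo "$verdict"
```

If the grown size is not reflected after a rescan, the array-side resize has not taken effect yet, which matches step 6 of the procedure.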
Additional Information
Notes:
If a shared datastore has powered-on virtual machines and becomes 100% full, you
can increase the datastore's capacity only from the host on which the powered-on
virtual machines are registered.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017662
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2002461
.vmfs version?
cloning?
how to remove the host from the cluster?
To move an ESXi/ESX host from one VirtualCenter Server/vCenter Server to another, remove
the host from the original VirtualCenter Server/vCenter Server, then add it to the new
VirtualCenter Server/vCenter Server. This operation does not affect the state of any virtual
machines running on the ESXi/ESX host; however, the historical performance data of both the
host and its virtual machines is purged.
Removing the ESXi/ESX host from VirtualCenter Server/vCenter Server
To remove the ESXi/ESX host from VirtualCenter Server/vCenter Server:
1. If the managed host is in a cluster, right-click the cluster. Set the Distributed
Resource Scheduler (DRS) mode to manual and disable VMware High Availability by
deselecting Configure HA.
2. Click OK and wait for the reconfiguration to complete.
3. Click Inventory in the navigation bar, expand the inventory as needed, and click the
appropriate managed host.
4. Right-click the managed host icon in the inventory panel and choose Disconnect (wait
for the task to complete).
5. Right-click the managed host icon in the inventory panel and choose Remove.
6. Click Yes to confirm that you want to remove the managed host and all its associated
virtual machines.
Adding the ESXi/ESX host to a new VirtualCenter Server/vCenter Server
To add the ESXi/ESX host to a new VirtualCenter Server/vCenter Server:
1. Connect VMware Infrastructure Client/vSphere Client/vSphere Web Client to the new
VirtualCenter Server/vCenter Server.
2. Click Inventory in the navigation bar.
3. Expand the inventory as needed, and click the appropriate datacenter or cluster.
4. Click File > New > Add Host.
5. In the first page of the Add Host wizard, enter the name or IP address of the managed
host in the Host name field.
6. Enter the username and password for a user account that has administrative
privileges on the selected managed host.
7. Click Next.
There is a known issue in the HPSA (HP ProLiant Smart Array Controller) driver affecting
ESXi 5.5: a memory leak associated with device rescans that results in out-of-memory
conditions and a potential PSOD. HP has released HPSA driver version 5.5.0.60-1 for
vSphere 5.5.
The following issues were addressed in this HPSA driver version:
Fixed a memory leak associated with device rescans resulting in out-of-memory
conditions and a potential PSOD.
Fixed a null pointer dereference in error handling code that can cause a PSOD in rare
cases when device inquiries fail.
Restored the LUN numbering policy to start with 1 instead of 0, avoiding potential issues
with Raw Device Maps.
Improved null pointer checks in device rescanning code, avoiding a potential PSOD.
1. Connect to a server or vCenter, open the server's Configuration tab, and under Hardware
select Storage Adapters. From this view you can also copy the WWNN (World Wide Node
Name) and WWPN (World Wide Port Name).
2. How to find HBA WWN via ESXi Shell / CLI:
VMware vSphere ESXi 5.0+:
~ # esxcli storage core adapter list
HBA Name  Driver        Link State  UID                                   Description
--------  ------------  ----------  ------------------------------------  -----------
vmhba0    megaraid_sas  link-n/a    unknown.vmhba0                        (0:1:0.0) LSI / Symbios Logic MegaRAID SAS SKINNY Controller
vmhba1    fnic          link-up     fc.20000025b5020110:20000025b502a121  (0:8:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
vmhba2    fnic          link-up     fc.20000025b5020110:20000025b502a120  (0:9:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
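The fc. UIDs in this listing embed the WWNN:WWPN pair, so the WWPNs can be pulled out with awk. A self-contained sketch against a canned copy of the columns (HBA name, driver, link state, UID):

```shell
#!/bin/sh
# Extract adapter name and WWPN from `esxcli storage core adapter list`
# style output. The sample lines are canned so the sketch runs anywhere.
wwpns=$(awk '$4 ~ /^fc\./ { split(substr($4, 4), id, ":"); print $1, "WWPN:", id[2] }' <<'EOF'
vmhba0  megaraid_sas  link-n/a  unknown.vmhba0
vmhba1  fnic          link-up   fc.20000025b5020110:20000025b502a121
vmhba2  fnic          link-up   fc.20000025b5020110:20000025b502a120
EOF
)
echo "$wwpns"
```

On a live host you would pipe the real `esxcli storage core adapter list` output into the same awk program instead of the here-document.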
VMware ESX/ESXi 2.1.0 - 4.1.x:
~ # esxcfg-scsidevs -a
vmhba0  megaraid_sas  link-n/a  unknown.vmhba0                        (0:1:0.0) LSI / Symbios Logic MegaRAID SAS SKINNY Controller
vmhba1  fnic          link-up   fc.20000025b5020110:20000025b502a121  (0:8:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
vmhba2  fnic          link-up   fc.20000025b5020110:20000025b502a120  (0:9:0.0) Cisco Systems Inc Cisco VIC FCoE HBA Driver
OR
Connect to ESXi shell either via putty/SSH or DCUI (Direct Console User Interface) /
server console
~ # ls /proc/scsi/
mptsas qla2xxx
Look for a folder such as qla2xxx (QLogic HBA), lpfc820 (Emulex HBA), or bnx2i
(Broadcom iSCSI adapter);
Run ls /proc/scsi/qla2xxx. You will get a list of files, named by a number. Each file
contains information about one HBA;
~ # ls /proc/scsi/qla2xxx/
6 7
Now run cat /proc/scsi/qla2xxx/6 to get full information on the HBA. The output
includes lines such as:
scsi-qla0-adapter-node=20000024ff31f0c8:000000:0;
scsi-qla0-adapter-port=21000024ff31f0c8:000000:0;
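The bare WWNN and WWPN can be clipped out of those scsi-qla0-adapter lines with sed; a sketch using canned copies of the two lines as input:

```shell
#!/bin/sh
# Strip qla2xxx /proc lines down to "node <wwnn>" and "port <wwpn>".
ids=$(sed -nE 's/^scsi-qla0-adapter-(node|port)=([0-9a-f]+):.*/\1 \2/p' <<'EOF'
scsi-qla0-adapter-node=20000024ff31f0c8:000000:0;
scsi-qla0-adapter-port=21000024ff31f0c8:000000:0;
EOF
)
echo "$ids"
```

Against a real host you would run the same sed over `cat /proc/scsi/qla2xxx/<n>` instead of the here-document.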
3. PowerShell script to list host name, vmhba number, HBA model / driver and World Wide
Port Name (WWPN):
$scope = Get-VMHost
foreach ($esx in $scope) {
    Write-Host "Host:" $esx.Name
    foreach ($hba in ($esx | Get-VMHostHba -Type FibreChannel)) {
        $wwpn = "{0:x}" -f $hba.PortWorldWideName
        Write-Host `t $hba.Device, "|", $hba.Model, "|", "World Wide Port Name:" $wwpn }}
Result:
Host: ESXi5-001.vstrong.info
vmhba1 | Cisco VIC FCoE HBA Driver | World Wide Port Name: 20000025b502a101
vmhba2 | Cisco VIC FCoE HBA Driver | World Wide Port Name: 20000025b502a100
For software and dependent hardware iSCSI adapters, you can set unidirectional CHAP and
bidirectional CHAP for each adapter or at the target level. Independent hardware iSCSI
supports CHAP only at the adapter level.
When you set the CHAP parameters, specify a security level for CHAP.
Note: When you specify the CHAP security level, how the storage array responds depends on
the array's CHAP implementation and is vendor specific. For information on CHAP
authentication behavior in different initiator and target configurations, consult the array
documentation.
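The adapter-type restriction described above can be captured in a small lookup sketch (the type names are illustrative labels, not esxcli identifiers):

```shell
#!/bin/sh
# Summarize where CHAP can be configured for each iSCSI adapter type,
# per the rules stated in the text above.
chap_scope() {
  case "$1" in
    software|dependent-hardware)
      echo "unidirectional or bidirectional CHAP, per adapter or per target" ;;
    independent-hardware)
      echo "CHAP at the adapter level only" ;;
    *)
      echo "unknown adapter type" ;;
  esac
}
chap_scope independent-hardware
```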
The original table here listed each CHAP security level (from None up through bidirectional
CHAP) with a description and the adapter types that support it: software iSCSI, dependent
hardware iSCSI, and independent hardware iSCSI.
ESX command line: use the command line to obtain the multipath information when
performing troubleshooting procedures.
VMware Infrastructure/vSphere Client: use this option when you are performing
system maintenance.
naa.60060480000290301014533030303130
Runtime Name: vmhba1:C0:T0:L0
Device: naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk
(naa.60060480000290301014533030303130)
Adapter: vmhba1
Channel: 0
Target: 0
LUN: 0
Plugin: NMP
State: active
Transport: fc
Adapter Identifier: fc.5001438005685fb7:5001438005685fb6
Target Identifier: fc.5006048c536915af:5006048c536915af
Adapter Transport Details: WWNN: 50:01:43:80:05:68:5f:b7 WWPN:
50:01:43:80:05:68:5f:b6
Target Transport Details: WWNN: 50:06:04:8c:53:69:15:af WWPN:
50:06:04:8c:53:69:15:af
3. Type esxcli storage core path list -d <naaID> to list detailed information for the
paths to a specific device.
The command esxcli storage nmp device list lists LUN multipathing information:
naa.60060480000290301014533030303130
Device Display Name: EMC Fibre Channel Disk
(naa.60060480000290301014533030303130)
Storage Array Type: VMW_SATP_SYMM
Storage Array Type Device Config: SATP VMW_SATP_SYMM does not support device
configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config:
{preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T1:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T1:L0
Notes:
For information on multipathing and path selection options, see Multipathing policies
in ESX/ESXi 4.x and ESXi 5.x (1011340).
For more information, see the 5.5 Command Line Reference Guide
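For scripted checks, the interesting fields (the path selection policy and the working path) can be grepped out of that nmp output. A sketch against a canned copy of the listing above:

```shell
#!/bin/sh
# Pull the path selection policy and working path out of
# `esxcli storage nmp device list` style output (canned sample below).
summary=$(awk -F': ' '
  /Path Selection Policy:/ { print "PSP=" $2 }
  /Working Paths:/         { print "PATH=" $2 }
' <<'EOF'
naa.60060480000290301014533030303130
   Storage Array Type: VMW_SATP_SYMM
   Path Selection Policy: VMW_PSP_FIXED
   Working Paths: vmhba0:C0:T1:L0
EOF
)
echo "$summary"
```

On a live host, pipe the real `esxcli storage nmp device list` output into the same awk program.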
vSphere Client
To obtain multipath settings for your storage in vSphere Client:
1. Select an ESX/ESXi host, and click the Configuration tab.
2. Click Storage.
For each resource pool, you specify reservation, limit, shares, and whether the reservation
should be expandable. The resource pool resources are then available to child resource pools
and virtual machines.
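The interplay of reservation and the expandable flag amounts to a simple admission check; a toy sketch with made-up numbers:

```shell
#!/bin/sh
# Toy admission check for a child reservation against a parent resource pool:
# the request fits if unreserved capacity remains, or if the parent's
# reservation is expandable (it may then borrow from its own parent).
parent_reservation=4000   # MHz, made up
already_reserved=3200     # MHz, made up
child_request=1000        # MHz, made up
expandable=1              # 1 = expandable reservation

unreserved=$((parent_reservation - already_reserved))
if [ "$child_request" -le "$unreserved" ] || [ "$expandable" -eq 1 ]; then
  decision="admitted"
else
  decision="rejected"
fi
echo "$decision"
```

With expandable set to 0, the same request would be rejected, since only 800 MHz remains unreserved in the parent pool.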
VMware user and admin rights?
How to Create a non-root account with Administrator capabilities on ESX
As per the ESX Server Configuration Guide:
1. To add a user to the Users Table.
a. Log in to the host using the vSphere Client, using the root userid.
b. Click the Local Users & Groups tab and click Users.
c. Right-click anywhere in the Users table and click Add to open the Add New
User dialog.
d. Enter a login name, a user name, and a password.
Note: The vSphere Client automatically assigns the next available UID to the
user on the ESX host. You can overwrite the populated field.
e. Create a password that meets the length and complexity requirements.
However, the ESX host checks for password compliance only if you have
switched to the pam_passwdqc.so plug-in for authentication. The password
settings in the default authentication plug-in, pam_cracklib.so, are not
enforced. To allow a user to access the ESX host through a command shell,
select Grant shell access to this user.
f. In general, do not grant shell access unless the user has a justifiable need.
Users that access the host only through the vSphere Client do not need shell
access.
g. To add the user to a group, select the group name from the Group drop-down
menu and click Add.
h. Click OK
2. Select the Permissions tab, also in the local host vSphere Client session, and then:
a. Right-click and select Add Permission.
b. Select Administrator from the Assigned Role drop-down box.
At this point, you should now be able to login to the ESX host using that user,
and the vSphere client.
https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-670B9B8C-3810-4790-AC83-57142A9FE16F.html
ESX and ESXi architecture?
Feature Summary
VMware vSphere 5.5 is the latest release of the flagship virtualization platform from
VMware. VMware vSphere, known in many circles as "ESXi", for the name of the underlying
hypervisor architecture, is a bare-metal hypervisor that installs directly on top of your
physical server and partitions it into multiple virtual machines. Each virtual machine shares
the same physical resources as the other virtual machines and they can all run at the same
time. Unlike other hypervisors, all management functionality of vSphere is possible through
remote management tools. There is no underlying operating system, reducing the install
footprint to less than 150MB.
Improved Reliability and Security
The ESXi bare-metal hypervisor's management functionality is built into the VMkernel, reducing the
footprint to 150 MB. This gives it a very small attack surface for malware and over-the-network threats, improving reliability and security.
Streamlined Deployment and Configuration
With few configuration options and simple deployment and configuration, the ESXi
architecture makes it easy to maintain a consistent virtual infrastructure.
Reduced Management Overhead
vSphere ESXi uses an agentless approach to hardware monitoring and system management
with an API-based partner integration model. Management tasks are on remote command
lines with the vSphere Command Line Interface (vCLI) and Power CLI, which uses Windows
PowerShell cmdlets and scripts for automated management.
Simplified Hypervisor Patching and Updating
Fewer patches mean smaller and less frequent scheduled maintenance windows.
What is service console?
It's time for another post in my all-new back to basics series. That's my term for wiping
down my lab environment, deploying vSphere 5.5, and reacquainting myself with
all that vSphere knowledge that was once at my fingertips. This time it's the turn of the
DCUI.
The Direct Console User Interface (DCUI) is the front-end management system that allows
for some basic configuration changes and troubleshooting options should the VMware ESXi
host become unmanageable via conventional tools such as the vSphere Client or vCenter.
Typical administration tasks include:
Configure, Restart, Test and Restore the VMware ESX Management Network
Configure Keyboard
Troubleshoot
Most actions are carried out by using [F2] on the keyboard or [F11] to confirm changes, along
with typical options such as [Y] and [N] in response to various system prompts. Before carrying out any
task you will be required to supply the root password. However, the first law of security is
to secure the physical server, so take care to ensure your access to ILO/RAC/BMC interfaces
is properly secured. Although the VMware ESX host can be rebooted from the DCUI, this is
regarded as an action of last resort. If the VMware ESX host has running VMs, these will
crash, and may or may not be restarted on other hosts depending on whether they are part
of a cluster.
http://blogs.vmware.com/smb/2013/12/back-to-basics-managing-vmware-esxi-5-5-directuser-interface-dcui.html
https://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vcli.migration.doc%2Fcos_upgrade_technote.1.1.html
What is vmkernel ?
How to upgrade the ESX server?
Purpose
This article provides best practice information about installing or upgrading to ESXi 5.5.
Notes:
This article assumes that you have read the vSphere Installation and Setup Guide for
ESXi 5.5 Installation or the vSphere Upgrade Guide for ESXi 5.5 upgrades. These
guides contain definitive information. If there is a discrepancy between the guide and
this article, assume that the guide is correct.
Ensure that vCenter Server is upgraded to version 5.5 before upgrading ESXi to version 5.5.
VMware provides several ways to install or upgrade to ESXi 5.5 hosts. For more
information, see:
Note: These methods include Interactive ESXi Installation, Scripted ESXi Installation,
and Customizing Installations with ESXi Image Builder CLI.
ESXi 5.5 System Requirements
When installing or upgrading to ESXi 5.5, ensure that the host meets these minimum
hardware configurations supported by ESXi 5.5:
System compatibility
Storage compatibility
2. You have a 64-bit processor. VMware ESXi 5.5 installs and runs only on servers with
64-bit x86 CPUs, and it requires support for the LAHF and SAHF CPU instructions.
3. You have an ESXi 5.5 host machine with at least two cores.
4. The NX/XD bit is enabled for the CPU in the BIOS.
For Intel Xeon-based systems, the processors must include support for Intel
Virtualization Technology (VT). Many servers that include CPUs with VT
support might have VT disabled by default, so you must enable VT manually. If
your CPUs support VT, but you do not see this option in the BIOS, contact
your vendor to request a BIOS version that lets you enable VT support.
Note: To determine whether your server has 64-bit VMware support,
download the CPU Identification Utility from the VMware Website.
8. You have one or more Gigabit or 10 Gb Ethernet controllers. For a list of supported
network adapter models, see the VMware Compatibility Guide.
9. You have Storage controllers with any combination of one or more of:
RAID controllers. Dell PERC (Adaptec RAID or LSI MegaRAID), HP Smart Array
RAID, or IBM (Adaptec) ServeRAID controllers.
10. You have SCSI disk or a local, non-network, RAID LUN with unpartitioned space for the
virtual machines.
11. For Serial ATA (SATA), a disk connected through supported SAS controllers or
supported on-board SATA controllers. SATA disks are considered to be remote, not
local. These disks are not used as a scratch partition by default because they are
seen as remote.
Note: You cannot connect a SATA CD-ROM device to a virtual machine on an ESXi 5.5
host. To use the SATA CD-ROM device, you must use IDE emulation mode.
12. You are using a supported storage system. ESXi 5.5 supports installing on and
booting from these storage systems:
SATA disk drives. SATA disk drives connected behind supported SAS controllers
or supported on-board SATA controllers.
LSI1068E (LSISAS3442E)
LSI1068 (SAS 5)
Intel ICH9
NVIDIA MCP55
ServerWorks HT1000
Note: ESXi does not support using local, internal SATA drives on the host server to
create VMFS datastores that are shared across multiple ESXi hosts.
13. You have Serial Attached SCSI (SAS) disk drives supported for installing ESXi 5.5 and
for storing virtual machines on VMFS partitions.
14. You have dedicated SAN disk on Fibre Channel or iSCSI.
15. You have USB devices that are supported for installing ESXi.
16. You can install and boot ESXi from an FCoE LUN using VMware software FCoE
adapters and network adapters with FCoE offload capabilities. See the vSphere
Storage documentation for information about installing and booting ESXi with
software FCoE.
ESXi booting requirements
vSphere 5.5 supports booting ESXi hosts from the Unified Extensible Firmware Interface
(UEFI). With UEFI you can boot systems from hard drives, CD-ROM drives, or USB media.
Network booting or provisioning with VMware Auto Deploy requires the legacy BIOS firmware
and is not available with UEFI.
ESXi can boot from a disk larger than 2TB, provided that the system firmware and the
firmware on any add-in card that you are using support it. For more information, see the
vendor documentation.
Note: Changing the boot type from legacy BIOS to UEFI after you install ESXi 5.5 may cause
the host to fail to boot. In this case, the host reports an error similar to:
Not a VMware boot bank. Changing the host boot type between legacy BIOS and UEFI is not
supported after you install ESXi 5.5.
Storage requirements
ESXi 5.5 has these storage requirements:
Installing ESXi 5.5 requires a boot device that is at least 1 GB in size. When booting
from a local disk or SAN/iSCSI LUN, a 5.2GB disk is required to allow the creation of
the VMFS volume and a 4GB scratch partition on the boot device. If a smaller disk or
LUN is used, the installer attempts to allocate a scratch region on a separate local
disk. If a local disk cannot be found, the scratch partition (/scratch) is located on the
ESXi host ramdisk, linked to /tmp/scratch. You can reconfigure /scratch to use a
separate disk or LUN. For best performance and memory optimization, VMware
recommends that you do not leave /scratch on the ESXi host ramdisk.
To reconfigure /scratch, see Set the Scratch Partition from the vSphere Client in
the vSphere Installation and Setup Guide.
When installing ESXi onto a USB flash drive or SD flash card, a drive smaller than
8 GB prevents the allocation of a scratch partition onto the flash device. VMware
recommends using a retail-purchased USB flash drive of 16 GB or larger so that the
"extra" flash cells can prolong the life of the boot media, but high-quality parts of 4 GB
or larger are sufficient to hold the extended coredump partition.
Due to the I/O sensitivity of USB and SD devices, the installer does not create a
scratch partition on these devices. When installing on USB or SD devices, the installer
attempts to allocate a scratch region on an available local disk or datastore. If no
local disk or datastore is found, /scratch is placed on the ramdisk. You should
reconfigure /scratch to use a persistent datastore following the installation.
For environments that boot from a SAN or use Auto Deploy, it is not necessary to
allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for
many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN
should be weighed against the LUN size and the I/O behavior of the virtual machines.
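Co-locating scratch regions on one LUN usually means one uniquely named directory per host on the shared volume. A local sketch of the layout (a temp directory stands in for the shared VMFS volume, and the .locker-<hostname> naming is a common convention, not a requirement):

```shell
#!/bin/sh
# Sketch the per-host scratch layout on a shared LUN. A temp directory
# stands in for /vmfs/volumes/<shared-lun>; hostnames are made up.
lun=$(mktemp -d)
for host in esx01 esx02 esx03; do
  mkdir -p "$lun/.locker-$host"   # each host's scratch location would point here
done
layout=$(ls -A "$lun" | sort)
echo "$layout"
rm -rf "$lun"
```

Each host would then have its scratch location configured to its own directory, so hosts never share a scratch path.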
1. Ensure that your current ESX/ESXi version is supported for migration or upgrade. For
more information, see Supported Upgrades to ESXi 5.5 in the vSphere Upgrade
Guide.
2. Ensure that your system hardware complies with above ESXi requirements. For more
information, see the System Requirementssection in the vSphere Upgrade Guide and
the VMware Compatibility Guide. Check for system compatibility, I/O compatibility
(network and HBA cards), storage compatibility, and backup software compatibility.
3. Ensure that sufficient disk space is available on the host for the upgrade or migration.
Migrating from ESX 4.x to ESXi 5.x requires 50MB of free space on your VMFS
datastore.
4. If a SAN is connected to the host, detach the fibre before continuing with the upgrade
or migration. Do not disable HBA cards in the BIOS.
Note: This step does not apply to ESX hosts that boot from the SAN and have the
Service Console on the SAN LUNs. You can disconnect LUNs that contain the VMFS
datastore and do not contain the Service Console.
VMware strongly recommends that you back up your host before performing an
upgrade or migration, so that, if the upgrade fails, you can restore your host.
Important: After upgrading or migrating your host to ESXi 5.x, you cannot roll back
to the earlier version.
Depending on the upgrade or migration method you choose, you may have to
migrate or power off all virtual machines on the host.
After the upgrade or migration, test the system to ensure that the upgrade or
migration completed successfully.
Reapply your host licenses. For more information, see the Reapplying Licenses After
Upgrading to ESXi 5.5 section in the vSphere Upgrade Guide.
Consider setting up a syslog server for remote logging, to ensure sufficient disk
storage for log files. Setting up logging on a remote host is especially important for
hosts with limited local storage. Optionally, you can install the vSphere Syslog
Collector to collect logs from all hosts. See Providing Sufficient Space for System
Logging. For information about setting up and configuring syslog and a syslog server,
setting up syslog from the host profiles interface, and installing vSphere Syslog
Collector, see the vSphere Installation and Setup Guide.
If the upgrade or migration was unsuccessful, you can restore your host if you have a
valid backup.
vCenter Server does not permit the addition of hosts that cannot be automatically
configured to be compatible with the EVC baseline.
Table 1.1: Description of Intel EVC Baselines
L0: Intel "Merom" Generation (Intel Xeon Core 2). Applies the baseline feature set of Intel
"Merom" Generation (Intel Xeon Core 2) processors to all hosts in the cluster.
L1: Intel "Penryn" Generation (formerly Intel Xeon 45nm Core 2)
L2: Intel "Nehalem" Generation (formerly Intel Xeon Core i7)
L3: Intel "Westmere" Generation (formerly Intel Xeon 32nm Core i7)
L4: Intel "Sandy Bridge" Generation
L5: Intel "Ivy Bridge" Generation
Note: In vCenter Server 5.1 and 5.5, the Intel "Ivy Bridge" Generation option is only
displayed in the Web Client.
Table 1.2: Description of AMD EVC Baselines
A0: AMD Opteron Generation 1
A1: AMD Opteron Generation 2
A3: AMD Opteron Generation 3
A2, B0: AMD Opteron Generation 3 (no 3DNow!)
B1: AMD Opteron Generation 4
B2: AMD Opteron "Piledriver" Generation
Note: In vCenter Server 5.1 and 5.5, the AMD Opteron "Piledriver" Generation option is
only displayed in the Web Client.
It is often the case that an older release of vSphere supports a new processor but not the
corresponding new EVC baseline that exposes the maximum guest-visible features of that
processor. A newer vSphere release usually supports both the processor and the new EVC
baseline. This is because the older release can only support those features of the new
processor that are in common with older processors. Therefore, support of an EVC baseline is
not identical to the support of the corresponding processor. Tables 2.1 and 2.2 indicate the
earliest vSphere release that supports each EVC baseline.
As an example, consider the Intel Sandy Bridge Generation EVC baseline and the Intel
Xeon E5-2400 (a processor based on the Intel Sandy Bridge architecture). The processor
is supported by both vSphere 4.1 Update 2 (and later) and vSphere 5.0 (and later). But
because vSphere 4.1 Update 2 lacks support for advanced Sandy Bridge features such as
AVX, the Intel Sandy Bridge Generation EVC baseline is only supported starting with the
vSphere 5.0 release. However, vSphere 4.1 Update 2 does support lower-level EVC baselines
on the Intel Xeon E5-2400, such as Intel Westmere Generation and Intel Merom
Generation.
Not all members of a given processor generation can support the same maximum EVC
baseline. Either because of BIOS configuration or branding decisions made by OEM or CPU
vendors, some members of that generation may lack a feature required to participate at the
maximum EVC baseline. For example, some Intel Xeon i3/i5 Clarkdale processors (based
on the Intel Westmere processor architecture) do not have AESNI capability, which is
required for the Intel Westmere Generation EVC baseline. Therefore, these processors
cannot support that EVC baseline and must use lower levels of EVC baselines. Another
example is where AESNI has been disabled by BIOS in an Intel Xeon 5600 processor (also
based on the Intel Westmere processor architecture); as a result, this processor also
cannot support the Intel Westmere EVC baseline and must use lower levels of EVC
baselines.
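Whether a host can join a given baseline therefore reduces to a feature-flag check. A sketch using a canned flag string (on a live Linux guest the flags would come from /proc/cpuinfo):

```shell
#!/bin/sh
# Canned CPU flag string for a Clarkdale-like part without AES-NI;
# real flags would be read from /proc/cpuinfo.
cpu_flags="fpu vme ssse3 sse4_1 sse4_2 popcnt"

if echo "$cpu_flags" | grep -qw aes; then
  baseline="Intel Westmere Generation baseline possible"
else
  baseline="capped below Westmere (no AES-NI)"
fi
echo "$baseline"
```

This mirrors the Clarkdale example: a host missing the aes flag (whether by design or by BIOS disablement) must join the cluster at a lower EVC baseline.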
The VMware Compatibility Guide always correctly lists the maximum EVC baseline for a
processor assuming that no BIOS disablement of features has been enforced. Since disabling
of features by BIOS is OEM and customer specific, the guide cannot address these cases.
Table 2.1: AMD EVC Baselines supported in vCenter Server releases

vCenter Server Release                  | Opteron Gen. 1 | Opteron Gen. 2 | Opteron Gen. 3 | Opteron Gen. 3 (no 3DNow!) | Opteron Gen. 4 | Opteron "Piledriver" Gen.
VirtualCenter 2.5 U2 and later updates  | Yes | No  | No  | No  | No  | No
vCenter Server 4.0                      | Yes | Yes | Yes | No  | No  | No
vCenter Server 4.0 U1 and later updates | Yes | Yes | Yes | No  | No  | No
vCenter Server 4.1                      | Yes | Yes | Yes | Yes | No  | No
vCenter Server 5.0                      | Yes | Yes | Yes | Yes | Yes | No
vCenter Server 5.1                      | Yes | Yes | Yes | Yes | Yes | Yes
vCenter Server 5.5                      | Yes | Yes | Yes | Yes | Yes | Yes

Table 2.2: Intel EVC Baselines supported in vCenter Server releases

vCenter Server Release                  | "Merom" Gen. | "Penryn" Gen. | "Nehalem" Gen. | "Westmere" Gen. | "Sandy Bridge" Gen. | "Ivy Bridge" Gen.
VirtualCenter 2.5 U2 and later updates  | Yes | No  | No  | No  | No  | No
vCenter Server 4.0                      | Yes | Yes | Yes | No  | No  | No
vCenter Server 4.0 U1 and later updates | Yes | Yes | Yes | Yes | No  | No
vCenter Server 4.1                      | Yes | Yes | Yes | Yes | No  | No
vCenter Server 5.0                      | Yes | Yes | Yes | Yes | Yes | No
vCenter Server 5.1                      | Yes | Yes | Yes | Yes | Yes | Yes
vCenter Server 5.5                      | Yes | Yes | Yes | Yes | Yes | Yes
Full and incremental file backup of virtual machines for recovery of individual files
and directories
Independent Hardware iSCSI Adapter: implements its own networking and iSCSI
configuration and management interfaces.
An example of an independent hardware iSCSI adapter is a card that either
presents only iSCSI offload functionality, or presents iSCSI offload functionality and
standard NIC functionality. The iSCSI offload functionality has independent
configuration management that assigns the IP, MAC, and other parameters
used for the iSCSI sessions. An example of an independent adapter is the
QLogic QLA4052 adapter.
Hardware iSCSI adapters might need to be licensed. Otherwise, they will not appear in the
vSphere Client or vSphere CLI. Contact your vendor for licensing information.