vExpert 2014

VMware vExpert 2014

I am very pleased to announce that I have been awarded the vExpert award from VMware for 2014.

The vExpert award is given to individuals who make a considerable effort within the community to share their expertise with others.

A vExpert is someone who is not necessarily a technical expert or even an expert in all things VMware, but rather someone who goes above and beyond their day job in the community to develop a platform of influence both publicly in books, blogs, online forums, and VMware User Groups; and privately inside customers and VMware partners.

I am proud that this blog, together with other efforts such as Experts Exchange, Cloud Cred and the StratoGen blog, has been considered a valuable part of the virtualisation community.

Congratulations to all my fellow vExperts; an amazing 754 people received the award this year and I am honoured to be counted amongst them.

VMware How To: Shut Down vCloud Director Cell Cleanly

To cleanly shut down a vCloud Director cell, run the following commands.

Display the current state of the cell to view any active jobs.
#  /opt/vmware/vcloud-director/bin/cell-management-tool -u <USERNAME> cell --status

Then quiesce the active jobs.
#  /opt/vmware/vcloud-director/bin/cell-management-tool -u <USERNAME> cell --quiesce true

Check the cell isn’t processing any active jobs.
#  /opt/vmware/vcloud-director/bin/cell-management-tool -u <USERNAME> cell --status

Shut the cell down to prevent any other jobs from becoming active on the cell.
#  /opt/vmware/vcloud-director/bin/cell-management-tool -u <USERNAME> cell --shutdown

Now run the status command again to check that the job count is zero.
#  /opt/vmware/vcloud-director/bin/cell-management-tool -u <USERNAME> cell --status

vCD cell status

Then stop the vCD service.
# service vmware-vcd stop

When you want to bring the cell back up again, start the service.
# service vmware-vcd start

A typical service start takes around 2-5 minutes.  You can monitor the progress of the restart by tailing the cell.log file.
# tail -f /opt/vmware/vcloud-director/logs/cell.log

cell starting

Once it says 100%, the start-up is done.
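For repeated maintenance, the sequence above can be wrapped in a small script. This is only a sketch: the CMT and VCD_USER variables are assumptions you should point at your own cell-management-tool path and vCD administrator account.

```shell
#!/bin/sh
# Sketch of the shutdown sequence above as a reusable function.
# CMT and VCD_USER are assumptions -- adjust for your environment.
CMT="${CMT:-/opt/vmware/vcloud-director/bin/cell-management-tool}"
VCD_USER="${VCD_USER:-administrator}"

quiesce_cell() {
    "$CMT" -u "$VCD_USER" cell --status        # view any active jobs
    "$CMT" -u "$VCD_USER" cell --quiesce true  # let active jobs finish, accept no new ones
    "$CMT" -u "$VCD_USER" cell --status        # confirm the job count is zero
    "$CMT" -u "$VCD_USER" cell --shutdown      # take the cell out of service
}

# Uncomment to run the full sequence and then stop the service:
# quiesce_cell && service vmware-vcd stop
```

Check the status output between steps before letting the service stop run.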

vCHS - VMware vCloud Hybrid Service Technical Overview

The vCloud Hybrid Service is a new public cloud offering from VMware, currently only available in the United States but coming to Europe early in 2014.
The vCloud Hybrid Service is composed of a vCloud Director and vSphere backed environment with a bespoke, customised web portal front-end that handles provisioning and basic configuration options, such as deploying workloads and changing virtual machine network settings.
Those familiar with vCloud Director will find it pretty easy to pick up.

Service Offerings

The service consists of two core models: a dedicated platform called a Dedicated Cloud and a shared platform called a Virtual Private Cloud.
The Dedicated Cloud provides a unique vCenter Server instance, a dedicated vCloud Director instance and dedicated compute resources, with shared storage and networking.  The Virtual Private Cloud is akin to an existing vCloud Director allocation-pool-backed organisation virtual datacenter (Org vDC).

Both offerings use the same storage: auto-tiering, SSD-cache-enabled arrays with appropriate reservations in place to ensure a good, reliable service.  VMware are being quite tight-lipped about the actual arrays being used so that they can change providers as new storage is developed, but they assure customers that it will always be fast, modern, high-end storage.

The two models are sized as shown in the image below:

Dedicated vs Virtual Private Cloud

All virtual machines within a Dedicated Cloud are contained in a reservation pool; 100% of the pool allocation is yours to configure as you see fit.  The hosts in the pool are dedicated to you, which means you can control the amount of over-provisioning being performed by adjusting virtual machine reservations and limits.

With the Virtual Private Cloud (VPC) you have a vCD allocation pool with a 100% reservation on memory and a 50% reservation on CPU (shown here as 5GHz burst to 10GHz).  What this means is that if you want enough RAM or CPU for 5 VMs then you pay for that amount of compute, and you cannot add additional VMs without increasing your allocated pool size (parting with some hard-earned cash!).  Having a 100% reservation on RAM and a 50% reservation on CPU should limit the 'noisy neighbour' issues sometimes experienced in over-provisioned virtual environments.

Note the figures shown above are minimums for the service offering; you can have any amount greater than this in your cloud as long as you purchase these minimums.  It is easy to add an additional GB of RAM as and when you like.
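As a back-of-the-envelope check, the reservation model above can be sketched in a few lines. The numbers are the illustrative figures from the example (100% RAM, 50% CPU, 5GHz bursting to 10GHz); the function name is my own.

```python
# Sketch of the Virtual Private Cloud reservation model: 100% of RAM and
# 50% of CPU are reserved for you. Figures match the example above.

def vpc_guarantees(cpu_ghz, ram_gb, cpu_reservation=0.5, ram_reservation=1.0):
    """Return (guaranteed_cpu_ghz, burst_cpu_ghz, guaranteed_ram_gb)."""
    return (cpu_ghz * cpu_reservation, cpu_ghz, ram_gb * ram_reservation)

cpu_min, cpu_burst, ram_min = vpc_guarantees(cpu_ghz=10, ram_gb=20)
print(f"{cpu_min}GHz guaranteed, bursting to {cpu_burst}GHz, {ram_min}GB RAM reserved")
```

Increasing either figure means buying a bigger pool, which is exactly the trade-off described above.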

One of the key differences between these two offerings is the ability to provision additional Edge Gateways with the Dedicated Cloud.
In vCloud Director, Edge Gateways can only be provisioned by a vCloud system administrator, not an organisation-level administrator.  (If you use a public cloud provider then you are at most an org-level administrator.)  With vCHS, however, you can provision them yourself.  This extra level of control gives you much greater flexibility when configuring your cloud platform.

The two offerings are available with the following minimum subscription terms, with pricing being lower for longer-term contracts.

Dedicated Cloud

1 month
12 months
24 months
36 months

Virtual Private Cloud

1 month
3 months
12 months

Those familiar with the vCloud Suite may be aware of vCloud Automation Center (vCAC) and may make the reasonable assumption that it is the engine behind the vCloud Hybrid Service (vCHS), but that is not the case.  The vCHS engine is a separate piece of software used exclusively by VMware and not available for customers to install.  It uses VMware 'secret sauce': code unique to vCHS that VMware are keeping close to their chest.

Using The vCloud Hybrid Service

The vCHS web portal itself is a fairly user-friendly interface with 'badges' showing the names of Organisation Virtual Datacenters (Org vDCs) that you can click on to manage the sub-components of the Org vDC, such as VM configuration and network settings.

vCHS Portal

vCHS talks to vCloud Director on the back-end to perform administration tasks, and it does so through the vCloud API.  That same API is available to you as a consumer of the cloud, allowing you to provision and manage workloads as you see fit.  The API connects directly to the Organisation Virtual Datacenters (Org vDCs) to manage your workloads.
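As a rough sketch of what talking to that API looks like, the snippet below builds a vCloud API login request (a POST to /api/sessions with basic authentication in user@org form and a versioned Accept header). The host, org and credentials are placeholders; a successful login returns an x-vcloud-authorization token for subsequent calls.

```python
# Sketch of building a vCloud API login request. The session endpoint,
# user@org basic-auth format and versioned Accept header are standard
# vCloud API behaviour; host, org and credentials here are placeholders.
import base64

def session_request(host, user, org, password, api_version="5.1"):
    """Build the URL and headers for a login (POST /api/sessions)."""
    credentials = base64.b64encode(f"{user}@{org}:{password}".encode()).decode()
    return {
        "url": f"https://{host}/api/sessions",
        "headers": {
            "Accept": f"application/*+xml;version={api_version}",
            "Authorization": f"Basic {credentials}",
        },
    }

req = session_request("vchs.example.com", "admin", "MyOrg", "secret")
print(req["url"])  # https://vchs.example.com/api/sessions
```

Pass the returned x-vcloud-authorization header value on every call after login.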

The Virtual Machines tab allows you to perform basic configuration of virtual machines, such as powering them on and off, reconfiguring RAM and, if you are using VMware's backup offering, registering them for backup.

Virtual Machines Tab

The backup is a daily backup that offers full VM restoration, not individual file-level restore, so you will still need your application-level backup tools for that level of granularity and for application consistency using VSS etc.

The last tab is the gateway tab.  This is where you configure the Edge Gateways in your organisation vDC.


The example here has two Org vDCs, Production and Development.  If you want to add an additional Gateway, and you have a Dedicated Cloud, you can click the Add a Gateway link and set it up.  You will need to make sure you have an additional public IP address available (speak to your reseller).

Within the Org vDC in vCHS you can manage and view additional components such as allocation of resources, virtual machines, users configured and networks.

On most pages within the web interface there is a link that takes you directly to vCloud Director to perform additional configuration that cannot be achieved in the vCHS portal.  In its current infancy there are a lot of settings that do need to be configured directly in vCloud Director, as the functionality is not there yet in vCHS.  Given time, everything you can do in vCD will be available in vCHS; in fact the idea is that eventually everything will have to be done in the vCHS portal and the vCD portal will no longer be available.

Every vCHS deployment is initially set up with an externally routed network connected to the Edge Gateway, plus an isolated network where any connected VMs can only communicate with each other.

I like the way the networks are shown 'at a glance' in vCHS, as getting at the same information involves quite a few clicks in vCD.

vCHS Networks

Also within the Org vDC are the links to purchase more resources, as well as the vCloud Director URL that you need to connect to in order to manage the environment at a more granular level, for example to configure NAT, static routes or firewall rules.   This same address is used when connecting to the vCloud API.  For further information on using the vCHS vCloud API I suggest reading Massimo's blog post on the subject.

VMware are also providing a marketplace for virtual machines packaged up as virtual appliances through their Solutions Exchange.  It has a section dedicated to the Hybrid Cloud service where you can download an OS and set it up with some support.  Certain appliances are billed according to the licence requirements of the OS and the level of support offered by VMware.

The Hybrid Bit

Migration Of Workloads

You can connect your vSphere or private VMware vCloud environment to the vCHS platform; after all, it is a hybrid service.  You achieve this using vCloud Connector, in the same way as with other service providers: install a vCloud Connector server, a node in your site and a node in the vCHS service, then register it in vCenter to push VMs between your site and the vCHS datacenters.
VMware have also added an option to move large workloads: they will send out a 12TB drive for you to copy your VMs onto and ship back, and VMware will then import and register them in vCHS for you.

Persistent Connectivity

You can also connect your site with an IPsec VPN to allow connectivity between your machines, as well as doing a 'datacenter extension' stretched-deployment setup, which means taking your existing IP addresses with you.  The example below shows that the green networks are on the same IP range and can communicate over the VPN via a double NAT on each end of the VPN connection.

Datacenter Extension
Chris Colotti has written several posts on how to do this; I suggest you have a look through them if this is something you would want to do.  If you decide to take your IPs with you, bear in mind that all traffic for the VMs in the vCHS platform will route through the VPN back to your site, including any internet requests.

All in all the vCloud Hybrid Service looks to be a very interesting offering from VMware.

PowerCLI: Remap vCD Network When Duplicate Exists

Ok so it’s a bit of a long title.

When you import a virtual machine into vCloud Director from the underlying vCenter Server you can remap the network as mentioned in my previous blog post Import VMs from vCenter to vCloud Director using PowerCLI.
This works great up to the point where you have more than one vApp network with the same name, something that becomes increasingly likely the bigger your vCD environment gets.

There is a way around this issue.  You can get PowerCLI to query the vApp that you are trying to connect the imported VM to and use the output of the vApp network name in the remap command.  Let me show you.

First connect PowerCLI to the vCD cell and the underlying vCenter.

$ciserver = ''
$viserver = ''

Connect-VIServer -Server $viserver
Connect-CIServer -Server $ciserver

Now specify the vApp name

$civapp = 'vApp_Name'

Then specify the vApp network name so you can call it when you get the VM to do the remap.

$cinetwork = get-civapp $civapp | Get-CIVAppNetwork 'Network_Name'

Finally run the remap command against the imported VM.

get-civm 'VM_Name' | Get-CINetworkAdapter | Set-CINetworkAdapter -VAppNetwork $cinetwork -IPAddressAllocationMode Pool -Connected $True

To simplify this further you can run this all as part of a script to query all the VMs that are added to the vApp and remap all their networks to use the same one.  To use this just copy and paste it into a text editor and save it as a .PS1 file.

$ciserver = ''
$viserver = ''

Connect-VIServer -Server $viserver
Connect-CIServer -Server $ciserver

$civapp = 'vApp_Name'
$civms = get-civapp $civapp | get-civm
$cinetwork = get-civapp $civapp | Get-CIVAppNetwork 'Network_Name'
foreach ($civm in $civms) {
	$civm | Get-CINetworkAdapter | Set-CINetworkAdapter -VAppNetwork $cinetwork -IPAddressAllocationMode Pool -Connected $True
}

You can add an extra layer to this by specifying a variable to query the Organisation Virtual Datacenter first, then adding it to the $cinetwork variable as follows.

$orgvdc = 'OrgvDC_Name'
$cinetwork = get-orgvdc $orgvdc | get-civapp $civapp | Get-CIVAppNetwork 'Network_Name'

Import VMs from vCenter to vCloud Director using PowerCLI

vCloud Director allows you to import virtual machines from the underlying vCenter Server which is perfect should you build a virtual machine in vCenter first, or perhaps have issues uploading VMs to vCloud Director such as when using the import OVF to catalog option.

This import is very simple to perform: select the 'Import from vCenter' icon and choose whether to copy or move the VM.  As the name suggests, a copy will clone the virtual machine in vCenter, creating a duplicate copy, and then migrate it into the relevant folder and resource pool.  A move will move it to the new location.

Import from vCenter to vCD

This works well if you are importing just one or two VMs, but if you want to bulk import a load of VMs you are better off scripting it, for example with PowerCLI.

PowerCLI has some useful commands you can use to import the VMs.
If you just want a quick script that you can use to import the VMs you can use the ones below.

Please note that to remap the network to the relevant vCloud Director backed network you will need to use the second script as well.  I find that running them immediately one after the other doesn't allow the import task to complete, as vCloud Director sees that a change is currently occurring to the vApp and won't allow the remap to work.
You can run the scripts one after the other, waiting for the first to finish, and it will work fine; don't forget to replace the variables in 'quotes' with the names that are relevant in your environment.
Alternatively read on below and I will show you how to run them together.

Script to import VMs

$ciserver = ''
$viserver = ''

Connect-VIServer -Server $viserver
Connect-CIServer -Server $ciserver

$orgvdc = 'OrgvDC_Name'
$vms = get-folder 'VM_Folder_Name' | get-vm
$civapp = 'vApp_Name'
$civms = get-civapp $civapp | get-civm

foreach ($vm in $vms) {
	get-civapp $civapp | Import-CIVApp $vm -NoCopy:$True -RunAsync #-Confirm:$false
}

Script to remap the network.  (Note it is assumed that you will run these scripts one after the other, waiting for the first one to finish.)

$cinetwork = 'Org_VDC_Network_Name'

foreach ($civm in $civms) {
	$civm | Get-CINetworkAdapter | Set-CINetworkAdapter -VAppNetwork $cinetwork -IPAddressAllocationMode Pool -Connected $True
}

Disconnect-VIServer -Server $viserver -Confirm:$false
Disconnect-CIServer -Server $ciserver -Confirm:$false

An important note for this script is the -RunAsync switch.  This tells the script to start processing the next VM in the folder without waiting for the last one to complete.  Without it you are forced to wait for each import to complete.

The script above tells the VM to connect to a vApp network and assigns an IP address from the static IP pool.

Another option is to run them together by making the script pause in between.  There are a few different ways to do this, but the simplest way I have found is using a sleep command.

Start-Sleep -s 30

The above command makes the script 'sleep' for 30 seconds; adjust it to suit your needs.  For this import I suggest starting with 30 seconds, as that should give the vApp time to finish importing the last VM.

To put this all into one script use the following.

$ciserver = ''
$viserver = ''

Connect-VIServer -Server $viserver
Connect-CIServer -Server $ciserver

$orgvdc = 'OrgvDC_Name'
$vms = get-folder 'VM_Folder_Name' | get-vm
$civapp = 'vApp_Name'
$civms = get-civapp $civapp | get-civm
$cinetwork = 'Org_VDC_Network_Name'

foreach ($vm in $vms) {
	get-civapp $civapp | Import-CIVApp $vm -NoCopy:$True -RunAsync #-Confirm:$false
}

Start-Sleep -s 30

foreach ($civm in $civms) {
	$civm | Get-CINetworkAdapter | Set-CINetworkAdapter -VAppNetwork $cinetwork -IPAddressAllocationMode Pool -Connected $True
}

Disconnect-VIServer -Server $viserver -Confirm:$false
Disconnect-CIServer -Server $ciserver -Confirm:$false

VMware Virtual Machine Memory Guide

Memory Virtualisation Basics

When an operating system is installed directly onto the physical hardware in a non-virtualised environment, it has direct access to the memory installed in the system, and memory requests, or pages, always have a 1:1 mapping to the physical RAM.  This means that if 4GB of RAM is installed, and the operating system supports that much memory, then the full 4GB is available to the operating system as soon as it is requested.  Most operating systems will support the full 4GB, especially 64-bit operating systems.
When an application within the operating system makes a memory page request, it requests the page from the operating system, which in turn passes a free page to the application, so it can perform its tasks.  This is performed seamlessly.

The hypervisor adds an extra level of indirection.  The hypervisor maps the guest physical memory addresses to the machine, or host physical memory addresses.  This gives the hypervisor memory management abilities that are transparent to the guest operating system. It is these memory management techniques that allow for memory overcommitment.

To get a good understanding of memory behaviour within a virtualised environment, let’s focus on three key areas.

  • Memory Terminology
  • Memory Management
  • Memory Reclamation

Memory Terminology

With an operating system running in a virtual machine, the memory requested by an application is called virtual memory, the memory presented to the virtual machine's operating system is called physical memory, and the hypervisor adds an additional layer called machine memory.

To help define the interoperability of memory between the physical RAM installed in the server and the individual applications within each virtual machine, the three key memory levels are also described as follows.
Host physical memory – Refers to the memory that is visible to the hypervisor as available on the system (machine memory).
Guest physical memory – Refers to the memory that is visible to the guest operating system running in the virtual machine.  Guest physical memory is backed by host physical memory, which means the hypervisor provides a mapping from the guest to the host memory.
Guest virtual memory – Refers to a contiguous virtual address space presented by the guest operating system to applications. It is the memory that is visible to the applications running inside the virtual machine.
To help understand how these layers inter-operate, look at the following diagram.

Host Physical to Guest Physical to Guest Virtual Memory

Virtual memory creates a uniform memory address space for operating systems, mapping application virtual memory addresses to physical memory addresses.  This gives the operating system memory management abilities that are transparent to the application.
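A toy model makes the two translation layers concrete. The page numbers below are arbitrary; the point is that an application address goes through the guest OS page table and then the hypervisor's mapping before landing on a machine page.

```python
# Toy model of the two mappings above: the guest OS maps application
# (virtual) pages to guest physical pages, and the hypervisor maps guest
# physical pages to host physical (machine) pages. Page numbers are arbitrary.

guest_page_table = {0: 7, 1: 3}    # guest virtual page  -> guest physical page
hypervisor_pmap = {7: 42, 3: 19}   # guest physical page -> host machine page

def translate(virtual_page):
    guest_physical = guest_page_table[virtual_page]  # first translation (guest OS)
    return hypervisor_pmap[guest_physical]           # second translation (hypervisor)

print(translate(0))  # virtual page 0 really lives on machine page 42
```

Because the second mapping belongs to the hypervisor, it can be changed without the guest knowing, which is what makes the reclamation techniques later in this post possible.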

Memory Management

Hypervisor Memory Management

Memory pages within a virtualised environment have to negotiate an additional layer, the hypervisor.  The hypervisor creates a contiguous addressable memory space for a virtual machine. This memory space has the same basic properties as the virtual address space that the guest operating system presents to the applications running on it. This allows the hypervisor to run multiple virtual machines simultaneously while protecting the memory of each virtual machine from being accessed by others.

The virtual machine monitor (VMM) controls each virtual machine’s memory allocation. The VMM does this using software-based memory virtualization.
The VMM for each virtual machine maintains a memory mapping from memory pages contained inside the guest operating system: from the guest physical pages to the host physical pages. The host physical memory pages are also called the machine pages.
This memory mapping technique is maintained through a Physical Memory Mapping Data (PMAP) structure.

Each virtual machine sees its memory as a contiguous addressable memory space. The underlying physical machine memory however may not be contiguous. This is because it may be running more than one virtual machine at any one time and it is sharing the memory out amongst the VMs.
The VMM sits between the guest physical memory and the Memory Management Unit (MMU) on the CPU so that the actual CPU cache on the processor is not updated directly by the virtual machine.
The hypervisor maintains the virtual-to-machine page mappings in a shadow page table. The shadow page table is responsible for maintaining consistency between the PMAP and the guest virtual to host physical machine mappings.

The shadow page table is also maintained by the virtual machine monitor (VMM).

Shadow Page Tables and PMAP

Each processor in the physical machine uses the Translation Lookaside Buffer (TLB) in the processor cache for the direct virtual-to-physical machine mapping updates. These updates come from the shadow page tables.

Some CPUs support hardware-assisted memory virtualisation, for example AMD SVM-V and the Intel Xeon 5500 series. These CPUs have two page tables: one for the virtual-to-physical translations and one for the physical-to-machine translations.

Hardware-assisted memory virtualisation eliminates the overhead associated with software virtualisation, namely the overhead of keeping shadow page tables synchronised with guest page tables, as it uses two layers of page tables in hardware that are synchronised by the processor.

One thing to note with hardware-assisted memory virtualisation is that the TLB miss latency is significantly higher. As a result, workloads with a small amount of page table activity will not suffer under software virtualisation, whereas workloads with a lot of page table activity are likely to benefit from hardware assistance.

Application Memory Management

An application starts with no memory; it allocates memory through a syscall to the operating system, and it voluntarily frees memory that is no longer in use through an explicit memory allocation interface with the operating system.

Operating System Memory Management

As far as the operating system is concerned it owns all the physical memory allocated to it, because it has no memory allocation interface with the hardware, only with the virtual machine monitor. It does not explicitly allocate or free physical memory; instead it tracks in-use and available memory by maintaining a free list and an allocated list of physical memory.  A page is either free or allocated depending on which list it resides on, and these lists themselves exist in memory.

Virtual Machine Memory Allocation

When a virtual machine starts up it has no physical memory allocated to it.  As it starts up it 'touches' memory space, and as it does so the hypervisor allocates it physical memory.  With some operating systems this can mean the entire amount of memory allocated to the virtual machine is called into active memory as soon as the operating system starts, as is typically seen with Microsoft Windows.

Memory Reclamation

Transparent Page Sharing (TPS)

Transparent Page Sharing is on-the-fly de-duplication of memory pages: the host looks for identical copies of memory and deletes all but one copy, giving the impression that more memory is available to the virtual machines.  This is performed when the host is idle.
Hosts that are configured with AMD-RVI or Intel-EPT hardware-assist CPUs are able to take advantage of large memory pages, where the host backs guest physical memory pages with host physical memory in 2MB pages rather than the standard 4KB pages used when large pages are not available. With fewer TLB misses this achieves better performance. There is a trade-off though: large memory pages will rarely be shared, as the chance of finding two identical 2MB pages is low and the overhead of a bit-by-bit comparison of 2MB pages is far greater than for 4KB pages.

Large memory pages may still be broken down into smaller 4KB pages during times of contention as the host will generate 4KB hashes for the 2MB large memory pages so that when the host is swapping memory it can use these hashes to share the memory.

You can configure advanced settings on the host to control page sharing: Mem.ShareScanTime sets the time to scan the virtual machines' memory, Mem.ShareScanGHz sets the maximum number of scanned pages per second on the host, and Mem.ShareRateMax sets the maximum number of scanned pages per virtual machine.
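The hash-then-compare behaviour that makes sharing safe (and makes 2MB pages expensive to share) can be sketched as follows; the function and page sizes are illustrative, not VMware internals.

```python
# Sketch of transparent page sharing: pages are hashed to find candidate
# duplicates, then compared byte-for-byte before being backed by one copy.
import hashlib

def share_pages(pages):
    """Return a deduplicated store and a mapping of page index -> store key."""
    store, backing = {}, {}
    for i, page in enumerate(pages):
        key = hashlib.sha1(page).hexdigest()
        if key in store and store[key] == page:  # hash hit, confirmed bit-for-bit
            backing[i] = key                     # back this page by the shared copy
        else:
            store[key] = page                    # first copy of this content
            backing[i] = key
    return store, backing

pages = [b"A" * 4096, b"B" * 4096, b"A" * 4096]  # three 4KB pages, two identical
store, backing = share_pages(pages)
print(len(pages), "guest pages backed by", len(store), "host copies")
```

The full comparison after the hash match is what becomes prohibitively expensive at 2MB page sizes.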

Use resxtop or esxtop and view the PSHARE field to monitor current transparent page sharing activity.  This is available in the memory view.  See below.


You can disable TPS on a particular VM by configuring the advanced setting Sched.mem.Pshare.enable=false.


Ballooning

An ESXi host has no idea how much memory is allocated within a virtual machine, only what the virtual machine has requested. As more virtual machines are added to a host there are subsequently more memory requests, and the amount of free memory may become low. This is where ballooning is used.
Provided VMware Tools is installed, the ESXi host will load the balloon driver (vmmemctl) inside the guest operating system as a custom device driver.
The balloon driver communicates directly with the hypervisor on the host and, during times of contention, creates memory pressure inside the virtual machine. The driver 'inflates like a balloon' by requesting memory from the guest; this memory is then 'pinned' by the hypervisor and mapped into host physical memory as free memory that is available for other guest operating systems to use.  The memory that is pinned within the guest OS is configured so that the guest OS will not swap the pinned pages out to disk.  If the guest OS requests access to the pinned memory pages, it will be allocated additional memory by the host as per a normal memory request. Only when the host 'deflates' the balloon will the guest physical memory pages become available to the guest again.

Ballooning is a good thing to have, as it allows the guest operating system to decide how much of its memory to free up, rather than the hypervisor, which doesn't understand when a guest OS has finished accessing memory.

Take a look at the figure below and you can see how ballooning works. The VM has one memory page in use by an application and two idle pages that have been pinned by the hypervisor so that they can be claimed by another operating system.

Balloon Driver In Action
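The inflate step can be sketched as follows. This is a toy model, not the vmmemctl driver; it just shows the idea of pinning pages from the guest's free list so the host can reuse them.

```python
# Toy model of the balloon driver: when the hypervisor needs memory it asks
# the in-guest driver to "inflate" by pinning pages from the guest's free
# list; the pinned pages can then be handed to other VMs. Names are mine.

def inflate_balloon(free_guest_pages, pages_needed):
    """Pin up to pages_needed free pages; return (pinned, still_free)."""
    pinned = free_guest_pages[:pages_needed]      # pages the guest won't touch or swap
    still_free = free_guest_pages[pages_needed:]  # what stays on the guest's free list
    return pinned, still_free

pinned, still_free = inflate_balloon(list(range(10)), 4)
print(len(pinned), "pages pinned for the host,", len(still_free), "still free in the guest")
```

Deflating the balloon is simply the reverse: the host releases the pinned pages back to the guest's free list.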


Hypervisor Swapping

The memory transfer between guest physical memory and the host swap device is referred to as hypervisor swapping and is driven by the hypervisor.
The memory transfer between the guest physical memory and the guest swap device is referred to as guest-level paging and is driven by the guest operating system.
Host-level swapping occurs when the host is under memory contention and is transparent to the virtual machine.
The hypervisor will swap random pieces of memory with no concern for what that memory is doing at the time; it can potentially swap out currently active memory. When swapping, all segments belonging to a process are moved to the swap area, and a process is chosen if it is not expected to run for a while.  Before the process can run again it must be copied back into host physical memory.


Memory Compression

Memory compression steps in and acts as a last line of defence against host swapping by taking the memory pages that would normally be swapped out to disk and compressing them into a cache in the host's local memory.  This means that rather than sending memory pages out to the comparatively slow disk, they are kept compressed in local memory within the host, which is significantly faster.

Only memory pages that are being sent for swapping, and that can be compressed by a factor of 50% or more, are compressed; otherwise they are written out to the host-level swap file.
Because of this, memory compression will only occur when the host is under contention and performing host-level swapping.
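The 50% rule can be illustrated with a few lines of Python, using zlib as a stand-in for the hypervisor's own (undocumented) compression algorithm:

```python
# Illustration of the 50% rule: a page on its way to being swapped is kept
# in the compression cache only if it compresses to half its size or less.
# zlib here is a stand-in for the hypervisor's compression algorithm.
import os
import zlib

PAGE_SIZE = 4096

def compress_or_swap(page):
    compressed = zlib.compress(page)
    if len(compressed) <= len(page) // 2:
        return ("compressed", compressed)  # kept in the in-memory cache
    return ("swapped", page)               # written to the host-level swap file

print(compress_or_swap(b"\x00" * PAGE_SIZE)[0])    # a zero page compresses easily
print(compress_or_swap(os.urandom(PAGE_SIZE))[0])  # random data does not
```

Pages full of repetitive data stay in fast local memory; incompressible pages take the slow path to disk.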

The memory compression cache is sized at 10% of a virtual machine's configured memory by default, to prevent excessive memory pressure on the host, as the compression cache needs to be accounted for on every VM.
You can configure a different value with the advanced setting Mem.MemZipMaxPct.

When the compression cache is full, the oldest compressed pages are decompressed and swapped out to the host-level swap file.

VMware Virtual Machine Snapshots Hidden

I thought I would just post a quick article to say I have written a guide on how to deal with hidden VMware snapshots.  It’s quite technical and hopefully should allow you to be able to resolve any hidden snapshot issues you may come across.

When I talk about hidden snapshots, I mean cases where the .vmx file of the virtual machine is referencing a snapshot disk, that is, one with a <VM_Name>-00001.vmdk disk associated with it, but as far as the snapshot manager is concerned there are no snapshots present, as shown below.

No Snapshots Present

For more information read the full article at the following link.




Update ESXi iSCSI Network Drivers - A Quick Guide

If you want to upgrade the network drivers within ESXi 5 follow these simple steps.

First off you need the name of the vmnic that you want to upgrade. Run this to display a list of installed network adapters currently in use.
# esxcfg-nics -l

This will return something similar to below.

You then need to discover which drivers you have installed.  You can find this out by running this command
# ethtool -i nameofnic

So, using my screen output above as an example:
# ethtool -i vmnic4





As you can see, my driver is the be2net version.
Now go and download the most recent supported driver version.  The VMware website is a good place to start; however, always check with your hardware manufacturer for their latest supported version.

Once the driver is downloaded you can either use esxcli (esxupdate/vihostupdate are available for pre-vSphere 5 only) or VMware Update Manager.
I would strongly recommend using Update Manager.  It makes patch management so simple.

To add drivers, open Update Manager in admin view by clicking Home > Update Manager > Patch Repository > Import Patches and follow the install wizard.

If however you don't have Update Manager installed in your environment (why not?), or you are eager to learn how to install drivers from the command line, then you can use the following.

  1. Upload the driver to a datastore accessible to the ESXi host, either using the datastore browser in the vSphere Client or using something like WinSCP to upload to the /vmfs/volumes/datastore (replacing datastore with the appropriate name of your datastore)
  2. Enter maintenance mode
  3. Run esxcli software vib install -d /vmfs/volumes/datastore/
  4. Once complete reboot the host and exit maintenance mode

If you would like to know which drivers you have installed already you can run
# esxcli software vib list – Lists the installed VIB packages
# esxcli software vib get – Displays detailed information about one or more installed VIBs.  You can also use --vibname to display information about just the specified VIB.

Additional Information
esxcli software Commands
ESXi update guide
Upgrade an ESXi 4.0 Host to 4.1 with the vihostupdate Utility
ESXi Upgrade Guide Using Putty
vSphere Update Manager Documentation

vSphere 5 - What's New (and relevant for the VCP 5) (Part 3)

This is part 3 of my guide; you can read part 1 here and part 2 here


High Availability (HA)
New features with HA include heartbeat datastores and support for IPv6.  A heartbeat datastore is great because it helps prevent situations where the management network drops out while the virtual machines continue running on the virtual machine network; without the management network in place, the HA cluster thinks that the ESXi host is isolated and starts the HA recovery options.  Previously, the only way to prevent this from happening was to configure a redundant management network on the storage network.

Datastore heartbeating

As you can see in the image, you can specify the preferred shared datastores to use, or choose any datastore.  Just make sure you choose a reliable one!

A new and improved master/slave cluster model has been created, rather than the multiple primary/secondary host model.
The server with the highest Managed Object ID (moid) is chosen as the master HA server.

Master election occurs when either:
vSphere HA is enabled
The master host encounters a system failure
The communication between master and slave hosts fails

The master host monitors the state of the slave hosts.  It also monitors the power state of all protected VMs, manages the list of hosts in the cluster, and acts as the vCenter Server management interface for the HA cluster.
The slave hosts monitor the health of the VMs running on them and forward virtual machine state changes to the master host.  They also participate in electing a new master host.

Additional HA improvements include management network partition support, improved host isolation response, improved vSphere HA admission control policy and enhanced vSphere HA security.

Enhanced HA security features
Auto port opening and closing on the firewalls
Protection of configuration files using file system permissions
Detailed logging, by default sent to the syslog server (once configured during initial setup)
Secure vSphere HA logins
Secure communication
Host SSL certificate verification required
HA now uses port 8182 for all network traffic

Fault Tolerance (FT)
FT improvements include additional CPU support, including Intel Westmere-EX, Sandy Bridge (SNB-DT, SNB-EP and SNB-EN) & AMD Bulldozer.

Additional VM guest OS support includes:
Windows 7, SP1 (All Versions)
Windows Vista SP2 (All Versions)
Windows 8 (32 bit or 64 bit)
Red Hat Enterprise Linux (RHEL) 4.9, 5.5, 6.0, 6.1
Suse Linux Enterprise Server (SLES) 10 SP4 and 11 SP1

Installation options have been increased to include a couple of new options to complement the existing ones.

ESXi installation options

  • Interactive install
  • Scripted install
  • Auto Deploy

The interactive install is the usual install we are all familiar with.  It has a couple of changes, allowing you to choose to upgrade an existing install of ESX/ESXi and keep the datastores, as well as a prompt during install to create a password for the root account.  A fresh install has a scratch partition size of 4GB.

The source for a scripted install can be FTP, HTTP or HTTPS, NFS, USB or CD-ROM.

A few commands have been deprecated:
autopart, esxlocation, serialnum and vmserialnum

A few more commands are no longer supported:
auth/authconfig, bootloader, firewall, firewallport, timezone, virtualdisk, zerombr, %packages

Auto Deploy
Auto Deploy is a cloud computing enabler, and being a cloud ready platform is what vSphere 5 is all about.  It allows for fast provisioning of hosts, which can then be customised with host profiles to set all the virtual infrastructure settings.
Auto Deploy uses rule sets defined by rule engines to determine what images are used based on the specific hardware configuration of each host.  Hardware vendors will be able to provide specific drivers for their hardware so all you need to do is download the appropriate boot images.
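The rule-matching idea can be sketched in a few lines of Python.  This is only an illustration of the concept, not VMware's implementation; the rule format, attribute names and image-profile names are all invented:

```python
# Toy illustration of Auto Deploy rule matching: each rule maps a host
# attribute pattern to an image profile.  All names here are invented.

def match_image_profile(host, rules):
    """Return the image profile of the first rule whose pattern matches the host."""
    for pattern_key, pattern_value, image_profile in rules:
        if host.get(pattern_key) == pattern_value:
            return image_profile
    return None  # no rule matched; the host gets no image

rules = [
    ("mac", "00:50:56:01:23:45", "ESXi-5.0-custom-hp"),
    ("vendor", "Dell Inc.", "ESXi-5.0-custom-dell"),
]

# The first matching rule wins:
print(match_image_profile({"mac": "00:50:56:01:23:45"}, rules))
```

In the real product the patterns can match MAC address, IP range, vendor string and other host attributes, as the deployment-rule examples later in this article show.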

The rule set is made up of the following components:

– active rule set
– working rule set

Auto Deploy is a Windows based application which is available from the vSphere 5 vCenter Server installer executable.
Auto Deploy uses PXE boot to install ESXi onto the required hosts.

Configure the following to use Auto Deploy:
Set up a DHCP server
– Option 66 – FQDN or IP address of the TFTP server
– Option 67 – undionly.kpxe.vmw-hardwired
Set up a TFTP server  (you can use the WinAgents TFTP server)
Identify an image profile to use (check the public image depot); it must include a base ESXi VIB
Specify the deployment rules
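As a sketch, options 66 and 67 could be served with an ISC dhcpd configuration along these lines (the subnet and addresses are examples only; adjust for your environment):

```
# Hypothetical dhcpd.conf fragment for Auto Deploy PXE booting
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;
  next-server 192.168.10.5;                 # option 66: TFTP server address
  filename "undionly.kpxe.vmw-hardwired";   # option 67: gPXE boot file
}
```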

Boot Process Overview
Stateless host PXE boots and is assigned an IP address by the DHCP server
DHCP uses option 66 to send the host to the TFTP server
The stateless host loads the gpxe configuration file as specified in option 67 (undionly.kpxe.vmw-hardwired)
The gpxe configuration file instructs the host to make an HTTP boot request to the Auto Deploy server
Auto Deploy queries the rules engine for information about the host
An image profile and a host profile is attached to the host based on a rule set
ESXi is installed into the host's memory and the host is added to vCenter
vCenter maintains the image profile and host profile for each host in its database


You can specify the folders or clusters to place hosts in; otherwise Auto Deploy will place the host in the first available location.

Setup Deployment Rules
You can use PowerCLI to create deployment rules.  Only build images from one software depot at a time.
Open PowerCLI and issue the following commands:

  1. Add-PSSnapin VMware.DeployAutomation
  2. Add-PSSnapin VMware.ImageBuilder
  3. Connect-VIServer "vcenter_server_name"
  4. Add-ESXSoftwareDepot "location_of_zip"
  5. Get-ESXImageProfile (to confirm it is loaded)
  6. New-DeployRule -Name "name_of_rule" -Item "ESXImageProfile", "name_of_location_in_vCenter" -Pattern "MAC=MAC_address_of_host"  (you can also use an IP address, which you can define in the DHCP reservation options)
  7. Add-DeployRule -DeployRule "name_of_rule"  (this adds the rule to the active rule set)

For a more detailed step-by-step guide check out Duncan Epping's guide


OK, so now you know what's new in vSphere 5, it is time to build this yourself so you can see the changes first hand.  Once you have had an opportunity to get to grips with it, go and book the VCP 5 exam.  Have a read of the VCP5 Exam Blueprint to ensure you are happy with everything covered, and I am sure you will pass the exam with no problems.  As I said at the beginning: build a lab and practice, practice and practice.  Personally I would recommend staying away from multiple-choice practice tests; if you want to know what the exam will be like then take the mock exam on the VMware website, but stay away from the others, as quite often the answers given on these sites are wrong!
Having said that, Simon Long does have some good questions related to vSphere 5 configuration maximums, which are worth checking out after you have had lots of practice in your home lab, just to make sure you are able to answer any questions on configuration maximums.

Best of luck with your VCP, if you found this article useful please leave a comment below.

vSphere 5 - What's New (and relevant for the VCP 5) (Part 2)

This is part 2 of my guide; you can read part 1 here


Quite a few improvements have been made to storage with vSphere 5.

Storage Profiles
Profile-driven storage allows SLAs to be set for certain storage types.  For example, together with Storage DRS (explained below) this can be used to automatically keep storage tiered so that high-I/O VMs remain on SSD drives.
Profile-driven storage uses VASA (the vSphere Storage APIs for Storage Awareness) and vCenter to continually monitor the storage and ensure that the SLAs for the storage profile are being met.

You can configure Storage Profiles from the Home screen in vCenter Server.

vSphere Storage Appliance
The vSphere Storage Appliance (VSA) is designed as a way of providing the cool features of vSphere, such as vMotion, HA and DRS without the requirement for an expensive SAN array.  It is made up of two or three ‘greenfield’ ESXi servers which act as dedicated storage devices.  It is not possible to have any virtual machines running on these servers, but the costs of such servers can be much cheaper than a dedicated SAN.

The VSA creates a VSA cluster.  It uses shared datastores for all hosts in the cluster and stores a replica of each shared datastore on another host in the cluster.  It presents this local storage as mirrored NFS datastores, whereby the mirror is the copy of the NFS datastore on one of the other ESXi hosts in the cluster.

                    Graphic courtesy of

The vSphere Storage Appliance is managed by VSA Manager through the vCenter Server.  VSA Manager allows replacement of a failed VSA cluster member and recovery of an existing VSA cluster.  To prevent split-brain scenarios a majority-node cluster is required.
With a three-node cluster a majority of two nodes is required, and with a two-node cluster a VSA Cluster Service acts as a tertiary node.  The VSA Cluster Service runs on vCenter Server.
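The majority requirement boils down to a simple quorum calculation; a minimal sketch (the helper function is mine, purely for illustration):

```python
def quorum(nodes):
    """Smallest majority for a cluster of the given size."""
    return nodes // 2 + 1

# Three-node VSA cluster: two members must agree.
print(quorum(3))

# A two-node cluster on its own cannot form a majority smaller than
# itself, which is why the VSA Cluster Service on vCenter Server acts
# as a tie-breaking third node, giving an effective cluster size of 3.
print(quorum(2 + 1))
```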

The VSA cluster enables vMotion and HA for the virtual machines running on it.

It is an important design consideration that the storage requirements for the hosts in a VSA cluster will at least double, to account for the volume replicas that are created.

A VSA cluster has the following requirements and restrictions:
No VMs on the ESXi hosts participating in the VSA cluster
No virtual vCenter Server anywhere on the cluster
One datastore on each server, and it must be local storage
Only a short list of servers officially supported, however it will probably work with most
ESXi hosts must be the same hardware configuration
Minimum 6GB RAM per host
RAID controller that supports RAID 10 per host

VSA network traffic is split into front-end and back-end traffic

Front End: enables communication between
Each VSA cluster member and VSA Manager
ESXi and the NFS volumes
Each VSA cluster member and the VSA Cluster Service

Back End: carries
Replication traffic between an NFS volume and its replica that resides on another host
Cluster communication between all VSA cluster members
vMotion and Storage vMotion traffic between the hosts

Further information is available in the vSphere Storage Appliance Technical Whitepaper & Evaluation Guide

VMFS5 uses a GUID Partition Table (GPT) rather than an MBR partition table, which allows for much larger partition sizes.  The maximum volume size is now 64TB.  In fact a single extent can be up to 64TB, and RDMs can be greater than 2TB, up to 64TB.
If upgrading to ESXi 5, the datastore keeps its existing MBR format and block sizes.  Once upgraded, go to the Configuration tab in Datastores and select Upgrade to VMFS-5; it will then support the larger sizes, however the underlying LUN will remain in MBR format until it is reformatted.

With GPT, all new datastores use a 1MB block size; however, if you do upgrade then the block sizes will remain the same for that upgraded host's datastores.  Not that this is an issue: as explained, you will still be able to support up to 64TB volumes.

The underlying sub-block size is now 8KB rather than 64KB, which means less space is wasted by small files stranded within larger blocks.  Once a file outgrows its sub-block allocation it moves to the standard 1MB block size.  If a file is less than 1KB in size then it is stored directly in the file descriptor within the VMFS metadata, rather than consuming a sub-block.
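To see why the smaller sub-block matters, here is a rough space calculation; the helper function is mine, purely for illustration:

```python
import math

def allocated(file_size_kb, sub_block_kb):
    """Space consumed when a small file is rounded up to whole sub-blocks."""
    return math.ceil(file_size_kb / sub_block_kb) * sub_block_kb

small_file = 10  # KB
print(allocated(small_file, 64))  # VMFS3-style 64KB sub-block: 64 KB allocated, 54 KB wasted
print(allocated(small_file, 8))   # VMFS5-style 8KB sub-block: 16 KB allocated, 6 KB wasted
```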

VMFS5 uses Atomic Test and Set (ATS) for all file locking.  ATS is an advanced form of file locking with smaller overheads when accessing the storage metadata than the SCSI reservations used with VMFS3.  ATS is part of the vSphere Storage APIs for Array Integration (VAAI) and was already available in vSphere 4.1.
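Conceptually, ATS works like a compare-and-swap on a single on-disk lock record, rather than reserving the whole LUN the way SCSI reservations do.  A toy Python analogy (not the actual SCSI command; the names are mine):

```python
def atomic_test_and_set(lock, expected, new_owner):
    """Toy compare-and-swap: take the lock only if it holds the expected value."""
    if lock["owner"] == expected:
        lock["owner"] = new_owner
        return True   # lock acquired without touching anything else on the LUN
    return False      # another host got there first; retry later

lock = {"owner": None}
print(atomic_test_and_set(lock, None, "esxi-01"))  # first host wins
print(atomic_test_and_set(lock, None, "esxi-02"))  # second host must retry
```

Because only the one lock record is touched, other hosts can keep doing I/O to the rest of the datastore, which is where the reduced metadata overhead comes from.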

vSphere Storage APIs for Array Integration (VAAI)
As mentioned, VAAI includes Atomic Test and Set (ATS) for file locking, as well as full copy, block zero and T10 compliance.  VAAI also reduces CPU overhead on the host.  Hardware acceleration for NAS has also been added, although VAAI doesn't use ATS for file locking with NFS datastores.

The following primitives are available for VAAI NAS.
Reserve space: enables storage arrays to allocate space for a VMDK in thick format
Full file clone: enables hardware-assisted offline cloning of offline virtual disk files
Fast file clone: Allows linked-clone creation to be offloaded to the array.  Currently only supported with VMware View.

This means that NAS storage devices will now support Thin, Eager Zeroed Thick and Lazy Zeroed Thick format disks, allowing disk preallocation.

SSD enhancements
The VMkernel can automatically detect, tag and enable an SSD LUN.  This is particularly useful with Storage Profiles.

Storage IO Control (SIOC) enhancements
SIOC is now supported on NFS datastores.  The behaviour of SIOC for NFS datastores (volumes) is similar to that for VMFS datastores (volumes).  Previously SIOC was only supported on Fibre Channel and iSCSI-connected storage.
If you want to set a limit based on MBps rather than IOPS, you can convert MBps to IOPS based on the typical I/O size for that virtual machine.  For example, to restrict a virtual machine issuing 64KB I/Os to 10MBps, set the limit to 160 IOPS (IOPS = (MBps throughput * 1024) / KB per I/O).
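The conversion above can be sketched as a one-line helper (the function name is mine):

```python
def mbps_to_iops(mbps, io_size_kb):
    """Convert a MBps throughput limit to an IOPS limit for a given I/O size."""
    return mbps * 1024 / io_size_kb

# Restrict a VM that issues 64KB I/Os to 10 MBps:
print(mbps_to_iops(10, 64))  # 160.0
```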

Storage vMotion
svMotion now supports snapshots.  It no longer uses Changed Block Tracking; instead it uses an I/O mirroring mechanism, a single-pass copy of the source disk to the destination, meaning shortened migration times.

Storage DRS
Storage DRS, or SDRS, is a cool new feature that, as the name suggests, allows for the distribution of resources across the storage LUNs.  There are two automation levels to choose from: No Automation (manual mode) and Fully Automated.  The automated load balancing is based on storage space utilization and disk latency.
At this time it is recommended that you configure Storage DRS for all LUNs/VMs but set it to manual mode.  This is recommended by VMware to allow you to double-check the recommendations made by SDRS and then either accept them or not.  The recommendations are based on disk usage and latency, and I expect that SDRS will soon prove itself a most valuable asset in cluster design, removing the need to work out how much space is required per LUN or where to place disk-intensive applications in the cluster.  It is possible to disable SDRS on a schedule, for instance when performing backups: with the increased load on the datastores, you don't want SDRS to start moving virtual disks around every time latency increases due to routine backups.  SDRS needs to run for 16 hours before it takes effect.
SDRS requires Enterprise Plus licensing.
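As a sketch, the per-datastore decision SDRS makes looks something like the following; the threshold values here are illustrative, not taken from the documentation:

```python
def needs_rebalance(space_used_pct, io_latency_ms,
                    space_threshold=80, latency_threshold=15):
    """Recommend a Storage vMotion when either threshold is exceeded.

    Threshold defaults are illustrative only.
    """
    return space_used_pct > space_threshold or io_latency_ms > latency_threshold

print(needs_rebalance(85, 5))   # datastore nearly full
print(needs_rebalance(50, 20))  # latency too high
print(needs_rebalance(50, 5))   # within both thresholds, leave it alone
```

In manual mode the output is just a recommendation for the administrator to accept or reject; in Fully Automated mode the migration happens on its own.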

An SDRS cluster is set up through Inventory > Datastores and Datastore Clusters.

Further information on VMware website

Software Fibre Channel over Ethernet (FCoE)
vSphere 4 introduced support for Fibre Channel over Ethernet (FCoE) with hardware adapters; software FCoE is now supported in vSphere 5, so it is possible to use NICs that support partial FCoE offload.  NICs with partial FCoE offload are hardware adapters that contain network and FC functionality on the same card.  Such a card is also referred to as a converged network adapter (CNA).

To configure software FCoE:
1. Connect the VMkernel to the physical FCoE NICs installed on the host
2. Activate the software FCoE adapters on the ESXi host so that the host can access the Fibre Channel storage

Only one VLAN is supported for software FCoE in vSphere, and you can have a maximum of four software FCoE adapters on one host.

Further information can be found in the What's New in vSphere 5 Technical Whitepaper

Coming Soon… Thin Provisioning
OK, not a new feature per se, but improvements have been made with hardware acceleration for thin provisioning; it helps in reclaiming space and also in monitoring usage of thin-provisioned arrays.  It works on VMFS3 and VMFS5 (providing you are using ESXi 5).
With the older form of thin provisioning, problems can occur with the accumulation of dead space.  VMFS5 is able to reclaim dead space by informing the array about the datastore space freed when files are deleted or removed by svMotion.  It also monitors space usage on thin-provisioned LUNs and helps administrators avoid out-of-space conditions with built-in alarms.

Continued in part 3