
vSphere 5 – What’s New (and relevant for the VCP 5) (Part 3)

This is part 3 of my guide, you can read part 1 here and part 2 here

Availability

High Availability (HA)
New features with HA include heartbeat datastores and support for IPv6.  A heartbeat datastore is great because it helps prevent situations where the management network drops out but the virtual machines continue running on the virtual machine network; without the management network in place the HA cluster thinks the ESX host is isolated and triggers the HA isolation response.  Previously the only way to prevent this was to configure a redundant management network on the storage network.

Datastore heartbeating

As you can see in the image, you can specify the preferred shared datastores to use or choose any datastore.  Just make sure you choose a reliable one!
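
If you prefer the command line, the number of heartbeat datastores HA selects can also be adjusted with an advanced cluster option.  A minimal PowerCLI sketch, assuming a cluster called "Production" (the name and the value of das.heartbeatDsPerHost are just examples; by default HA picks two heartbeat datastores per host and the valid range is 2 to 5):

  # Illustrative only - adjust the cluster name and value to suit your environment
  $cluster = Get-Cluster -Name "Production"
  New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.heartbeatDsPerHost" -Value 3 -Confirm:$false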

A new and improved master/slave cluster model has been created, rather than the multiple primary/secondary host model.
The server with the highest Managed Object ID (moid) is chosen as the master HA server.

Master election occurs when either:
vSphere HA is enabled
The master host encounters a system failure
The communication between master and slave hosts fails

The Master host monitors the state of slave hosts.  It also monitors the power state of all protected VMs and manages the list of hosts in the cluster.  It acts as the vCenter Server management interface for the HA cluster.
The slave members monitor the health of VMs running on them and they forward state changes in virtual machines to the master host.  They also participate in electing a new master host.

Additional HA improvements include management network partition support, improved host isolation response, improved vSphere HA admission control policy and enhanced vSphere HA security.
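
A quick way to review how these settings are applied across your clusters is to list the HA-related properties PowerCLI exposes.  A small sketch, purely illustrative, run from a session already connected to vCenter:

  # List the key HA (and DRS) settings for every cluster
  Get-Cluster | Format-Table Name, HAEnabled, HAAdmissionControlEnabled, HAFailoverLevel, HAIsolationResponse, DrsEnabled -AutoSize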

Enhanced HA security features
Auto port opening and closing on the firewalls
Protection of configuration files using file system permissions
Detailed logging, by default sent to the syslog server (once configured during initial setup)
Secure vSphere HA logins
Secure communication
Host SSL certificate verification required
HA now uses port 8182 for all network traffic

Fault Tolerance (FT)
FT improvements include additional CPU support, including Intel Westmere-EX, Sandy Bridge (SNB-DT, SNB-EP and SNB-EN) & AMD Bulldozer.

Additional VM guest OS support includes:
Windows 7, SP1 (All Versions)
Windows Vista SP3 (All Versions)
Windows 8 (32 bit or 64 bit)
Red Hat Enterprise Linux (RHEL) 4.9, 5.5, 6.0, 6.1
Suse Linux Enterprise Server (SLES) 10 SP4 and 11 SP1

Deployment
Installation options have been increased to include a couple of new options to complement the existing ones.

ESXi installation options

  • Interactive install
  • Scripted install
  • Auto Deploy

Interactive
This is the usual install we are all familiar with.  It has a couple of changes: you can choose to upgrade an existing install of ESX/ESXi and keep the datastores, and you are prompted during the install to create a password for the root account.  On a fresh install the scratch partition size is 4GB.

Scripted
The source for a scripted install can be FTP, HTTP or HTTPS, NFS, USB or CD-ROM.

A few commands have been deprecated:
autopart, esxlocation, serialnum and vmserialnum

A few more commands are no longer supported:
auth/authconfig, bootloader, firewall, firewallport, timezone, virtualdisk, zerombr, %packages
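
For reference, a minimal ESXi 5 kickstart script might look something like the sketch below; the password is obviously a placeholder and you should check the scripted install documentation for the full command set.

  # Sample ks.cfg - illustrative only
  vmaccepteula
  rootpw MySecretPa55word
  # Install to the first detected disk, overwriting any existing VMFS volume
  install --firstdisk --overwritevmfs
  network --bootproto=dhcp --device=vmnic0
  reboot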

Auto Deploy
Auto Deploy is a cloud computing enabler, and being a cloud ready platform is what vSphere 5 is all about.  It allows for fast provisioning of hosts, which can then be customised with host profiles to set all the virtual infrastructure settings.
Auto Deploy uses rules defined in the rules engine to determine which image is used, based on the specific hardware configuration of each host.  Hardware vendors will be able to provide specific drivers for their hardware, so all you need to do is download the appropriate boot images.

The rules engine is made up of the following components:

– rules
– the active rule set
– the working rule set

Auto Deploy is a Windows based application which is available from the vSphere 5 vCenter Server installer executable.
Auto Deploy uses PXE boot to install ESXi onto the required hosts.

Configure the following to use Auto Deploy:
Set up a DHCP server (see the example below)
– Option 66 – FQDN or IP address of the TFTP server
– Option 67 – undionly.kpxe.vmw-hardwired
Set up a TFTP server (WinAgents TFTP can be used)
Identify an image profile to use (check the public image depot) – it must include a base ESXi VIB
Specify the deployment rules
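
As an illustration of the DHCP side, here is a sketch of what the equivalent scope options could look like on an ISC dhcpd server; the subnet and addresses are placeholders, and if you are using a Windows DHCP server you would simply set options 66 and 67 in the scope options instead.

  # Example dhcpd.conf fragment - all addresses are placeholders
  subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.150;
    next-server 192.168.1.10;                 # option 66 - TFTP server
    filename "undionly.kpxe.vmw-hardwired";   # option 67 - gPXE boot file
  }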

Boot Process Overview
Stateless host PXE boots and is assigned an IP address by the DHCP server
DHCP uses option 66 to send the host to the TFTP server
The stateless host loads the gPXE configuration file specified in option 67 (undionly.kpxe.vmw-hardwired)
The gPXE configuration file instructs the host to make an HTTP boot request to the Auto Deploy server
Auto Deploy queries the rules engine for information about the host
An image profile and a host profile are attached to the host based on the rule set
ESXi is loaded into the host's memory and the host is added to vCenter
vCenter maintains the image profile and host profile for each host in its database

 

You can specify the folder or cluster to place the host in; otherwise Auto Deploy will add the host to the first datacenter.

Setup Deployment Rules
You can use PowerCLI to create deployment rules.  Only build images from one software depot at a time.
Open PowerCLI and issue the following commands:

  1. Add-PSSnapin VMware.DeployAutomation
  2. Add-PSSnapin VMware.ImageBuilder
  3. Connect-VIServer "vcenter_server_name"
  4. Add-EsxSoftwareDepot "location_of_zip"
  5. Get-EsxImageProfile  (to confirm it is loaded)
  6. New-DeployRule -Name "name_of_rule" -Item "ESXImageProfile", "name_of_location_in_vCenter" -Pattern "mac=MAC_address_of_host"  (you can also use an IP address, which you can define in the DHCP reservation options)
  7. Add-DeployRule -DeployRule "name_of_rule"  (this adds the rule to the active rule set – see the verification sketch below)
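
Once the rule has been added it is worth sanity-checking the active rule set from the same PowerCLI session.  A small sketch (the host name is a placeholder):

  # Show the rules currently in the active rule set
  Get-DeployRuleSet
  # After a host has PXE booted, check it against the active rule set
  Test-DeployRuleSetCompliance (Get-VMHost "name_of_host")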

For a more detailed step-by-step guide, check out Duncan Epping's guide.

 

Summary
OK, so now you know what's new in vSphere 5, it is time to build this yourself so you can see the changes first hand.  Once you have had an opportunity to get to grips with it, go and book the VCP 5 exam.  Have a read of the VCP5 Exam Blueprint to ensure you are happy with everything covered and I am sure you will pass the exam with no problems.  As I said at the beginning, build a lab and practice, practice and practice.  Personally I would recommend staying away from multiple-choice practice tests; if you want to know what the exam will be like then take the mock exam on the VMware website, but stay away from the others, as quite often the answers given on these sites are wrong!
Having said that, Simon Long does have some good questions related to vSphere 5 configuration maximums which are worth checking out after you have had lots of practice in your home lab, just to make sure you are able to answer any questions related to configuration maximums.

Best of luck with your VCP, if you found this article useful please leave a comment below.

vSphere 5 – What’s New (and relevant for the VCP 5) (Part 2)

This is part 2 of my guide, you can read part 1 here

Storage

Quite a few improvements have been made to storage with vSphere 5.

Storage Profiles
Profile-driven storage allows SLAs to be set for certain storage types.  For example, together with Storage DRS (explained below) this can be used to automatically keep storage tiered so that high-I/O VMs remain on SSD drives.
Profile-driven storage uses VASA, the vSphere APIs for Storage Awareness, together with vCenter to continually monitor the storage and ensure that the SLAs for the storage profile are being met.

You can configure Storage Profiles from the Home screen in vCenter Server.


vSphere Storage Appliance
The vSphere Storage Appliance (VSA) is designed as a way of providing the cool features of vSphere, such as vMotion, HA and DRS without the requirement for an expensive SAN array.  It is made up of two or three ‘greenfield’ ESXi servers which act as dedicated storage devices.  It is not possible to have any virtual machines running on these servers, but the costs of such servers can be much cheaper than a dedicated SAN.

The VSA creates a VSA cluster.  It uses shared datastores for all hosts in the cluster and stores a replica of each shared datastore on another host in the cluster.  It presents this local storage as mirrored NFS datastores, whereby the mirror is the copy of the NFS datastore on one of the other ESXi hosts in the cluster.

Graphic courtesy of VMware.com

The vSphere Storage Appliance is managed by VSA Manager through the vCenter Server.  VSA Manager allows replacement of a failed VSA cluster member and recovery of an existing VSA cluster.  To prevent split-brain scenarios a majority node cluster is required.
With a three-node cluster a majority of two nodes is required, and with a two-node cluster a VSA Cluster Service acts as a tertiary node.  The VSA Cluster Service runs on vCenter Server.

The VSA cluster enables vMotion and HA for the virtual machines running on it.

An important design consideration is that the storage requirements for the hosts in a VSA cluster will at least double to account for the volume replicas that are created.  For example, a host with roughly 2TB of usable RAID 10 capacity ends up contributing only around 1TB of VSA datastore space, with the remainder holding replicas of the other hosts' datastores.

Limitations
No VMs on the ESXi hosts participating in the VSA cluster
No virtual vCenter Server anywhere on the cluster
One datastore on each server, and it must be local storage
Only a short list of servers is officially supported, however it will probably work with most
ESXi hosts must have the same hardware configuration
Minimum 6GB RAM per host
RAID controller that supports RAID 10 per host

VSA network traffic is split into front end and back end traffic

Front End: enables communication between
Each VSA cluster member and VSA Manager
ESXi and the NFS volumes
Each VSA cluster member and the VSA Cluster Service

Back End: carries
Replication traffic between an NFS volume and its replica residing on another host
Cluster communication between all VSA cluster members
vMotion and Storage vMotion traffic between the hosts

Further information is available in the vSphere Storage Appliance Technical Whitepaper & Evaluation Guide

VMFS5
VMFS5 uses the GUID Partition Table (GPT) format rather than the MBR partitioning table, which allows for much larger partition sizes.  The maximum volume size is now 64TB.  In fact a single extent can be up to 64TB, and RDMs can now be larger than 2TB, up to 64TB.
A VMFS3 datastore upgraded in place keeps its existing MBR format and block size.  Once the host is upgraded, go to the Configuration tab, select the datastore and choose Upgrade to VMFS-5; it will then support the larger sizes, however the underlying LUN will remain in the MBR format until it is reformatted.

With GPT, all newly created datastores use a 1MB block size; if you upgrade a datastore, its block size remains whatever it was before.  Not that this is an issue, as explained you will still be able to support volumes of up to 64TB.
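
If you want to confirm what a given datastore is actually running, you can query the filesystem metadata from the ESXi shell.  A quick sketch (the datastore name is a placeholder):

  # Report the VMFS version, capacity and file block size for a datastore
  vmkfstools -Ph /vmfs/volumes/datastore1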

The underlying sub-block is now 8KB rather than 64KB, which means less space is wasted by stranded data from small files sitting in larger blocks.  Once files grow beyond the sub-blocks they are allocated in 1MB blocks.  If a file is smaller than 1KB it is stored directly in the file descriptor rather than in a sub-block.

VMFS5 uses Atomic Test and Set (ATS) for all file locking.  ATS is an advanced form of file locking with lower overhead when accessing the storage metadata than the locking mechanism used with VMFS3.  ATS is part of the vSphere Storage APIs for Array Integration (VAAI) and was already available in vSphere 4.1.

vSphere API for Array Integration (VAAI)
As mentioned, VAAI includes Atomic Test and Set (ATS) for file locking, as well as full copy, block zero and T10 compliance.  VAAI also reduces CPU overhead on the host.  Hardware acceleration for NAS has also been added, although VAAI doesn't use ATS for file locking with NFS datastores.
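
To check whether a particular LUN actually supports the block-based primitives (ATS, clone, zero and delete), the VAAI status can be queried from the ESXi shell.  A sketch, with the device identifier as a placeholder:

  # List hardware acceleration (VAAI) support for all devices
  esxcli storage core device vaai status get
  # Or query a single device by its identifier
  esxcli storage core device vaai status get -d naa.60a98000572d54724a34655733506751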

The following primitives are available for VAAI NAS:
Reserve space: enables storage arrays to allocate space for a VMDK in thick format
Full file clone: enables hardware-assisted offline cloning of offline virtual disk files
Fast file clone: Allows linked-clone creation to be offloaded to the array.  Currently only supported with VMware View.

This means that NAS storage devices will now support Thin, Eager Zeroed Thick and Lazy Zeroed Thick format disks, allowing disk preallocation.

SSD enhancements
The VMkernel can automatically detect and tag an SSD LUN.  This is particularly useful with Storage Profiles.
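
You can verify what the VMkernel has detected from the ESXi shell; the device listing includes an "Is SSD" field.  A quick sketch (the device identifier is a placeholder):

  # Look for the "Is SSD: true" line in the output
  esxcli storage core device list -d naa.60a98000572d54724a34655733506751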

Storage IO Control (SIOC) enhancements
SIOC is now supported on NFS datastores.  Behaviour of SIOC for NFS datastores (volumes) is similar to that for VMFS datastores (Volumes).  Previously SIOC was only supported on Fibre Channel and iSCSI-connected storage.
If you want to set a limit based on MBps rather than IOPS, you can convert MBps to IOPS based on the typical I/O size for that virtual machine.  For example, to restrict a virtual machine issuing 64KB I/Os to 10MBps, set the limit to 160 IOPS.  (IOPS = (MBps throughput / KB per I/O) * 1024)
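
The maths is simple enough to wrap in a small helper if you are scripting limits anyway.  A minimal PowerShell sketch of the conversion only (the function name is made up and it does not change anything on the VM):

  # Convert an MBps target into an IOPS limit for a given typical I/O size (in KB)
  function ConvertTo-IopsLimit {
      param([double]$MBps, [double]$IoSizeKB)
      [math]::Round(($MBps / $IoSizeKB) * 1024)
  }
  # Example from above: 10 MBps with 64KB I/Os gives 160 IOPS
  ConvertTo-IopsLimit -MBps 10 -IoSizeKB 64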

Storage vMotion
svMotion now supports snapshots.  It no longer uses Changed Block Tracking; instead it uses an I/O mirroring mechanism that performs a single-pass copy of the source disk to the destination disk, meaning shortened migration times.

Storage DRS
Storage DRS, or SDRS, is a cool new feature that, as the name suggests, allows for the distribution of resources across the storage LUNs.  There are two automation levels to choose from when setting it up: No Automation (manual mode) and Fully Automated.  The automated load balancing is based on storage space utilization and disk latency.
At this time it is recommended that you configure Storage DRS for all LUNs/VMs but set it to manual mode.  This is recommended by VMware so that you can double-check the recommendations made by SDRS and then either accept them or not.  The recommendations are based on disk usage and latency, and I expect SDRS will soon prove itself a most valuable asset in cluster design, removing the need to work out how much space is required per LUN or where to place disk-intensive applications in the cluster.  It is possible to disable SDRS on a schedule, for instance when performing backups: with the increased load on the datastores, you don't want SDRS to start moving virtual disks around every time latency increases due to routine backups.  SDRS needs to run for 16 hours before its recommendations take effect.
SDRS requires Enterprise Plus licensing.

An SDRS cluster is set up through Inventory > Datastores and Datastore Clusters.

Further information is available on the VMware website.

Software Fibre Channel over Ethernet (FCoE)
vSphere 4 introduced support for Fibre Channel over Ethernet (FCoE) with hardware adapters; now software FCoE is supported in vSphere 5, so it is possible to use NICs that support partial FCoE offload.  NICs with partial FCoE offload are hardware adapters that contain network and FC functionality on the same card, also referred to as converged network adapters (CNAs).

To configure:
1. Connect the VMkernel to the physical FCoE NICs installed on the host
2. Activate the software FCoE adapters on the ESXi host so that the host can access the Fibre Channel storage

Only one VLAN is supported for software FCoE in vSphere, and you can have a maximum of four software FCoE adapters on one host.

Further information can be found in the What's New in vSphere 5 Technical Whitepaper.

Coming Soon….Thin Provisioning
OK, not a new feature per se, but improvements have been made with hardware acceleration for thin provisioning: it helps in reclaiming space and also in monitoring usage of thin-provisioned arrays.  It works on VMFS3 and VMFS5 (providing you are using ESXi 5).
With the older form of thin provisioning, problems can occur with the accumulation of dead space.  VMFS5 is able to reclaim dead space by informing the array about datastore space freed when files are deleted or moved away by svMotion.  It also monitors space usage on thin-provisioned LUNs and helps administrators avoid out-of-space conditions with built-in alarms.

Continued in part 3

vSphere 5 – What’s New (and relevant for the VCP 5) (Part 1)

vSphere 5 has been available for a couple of months now, so now is an ideal time to look at upgrading your infrastructure and also to start the upgrade path from VCP 4 to VCP 5.  This is by no means a complete study guide, rather an introduction to the various new features and components of vSphere 5 over and above what was already available in vSphere 4.  To really understand all the new features of vSphere 5 you need to work with it, so if you don't have one already, go build a home lab.  Any PC will do with VMware Workstation, as you can run virtual machine versions of ESXi (this is supported by VMware), although you may need a RAM upgrade.  An alternative is to build a whitebox server.  Ray Heffer has written a good whitebox home lab guide, I suggest checking it out if you are not sure where to start.  Also check out the whitebox HCL at vm-help.com for ideas.  The site only lists support for vSphere 4 and earlier but most components should work with ESXi 5.

vSphere 5 Enhancements

System Requirements
There have been a few changes in the requirements for installing and upgrading to vSphere 5.  I am working on an upgrade guide which will contain further information on the steps required to perform an upgrade, but for now I will just go over the requirements.  If you are eager to get installing then you can check out the vSphere Upgrade Guide.
[table id=2 /]
Update Manager
Update Manager has been optimised for cluster remediation, rather than per host.  It includes the ability to schedule a reboot after VMware Tools has been installed or after a VM hardware upgrade.  Support is also included for Update Manager Download Service 5.0 (UMDS 5.0), the command-line-driven update utility.
It also adds a nifty migrate ESX to ESXi function, allowing existing ESX servers to be migrated to ESXi without losing the existing datastores and virtual machines.  IVOBEERENS.NL has a guide explaining the steps involved.
It's not all good news though, as VMware have now removed the VM patching functionality.  Update Manager needs to be installed on a 64-bit OS, like vCenter Server.

ESXi
vSphere 4.1 was the last release to include ESX; it is now purely ESXi.  ESXi is now a stateless OS, capable of running only in memory, removing the requirement for a local storage device.  This is only possible when PXE booting using Auto Deploy (Auto Deploy is explained in part 3).
[table id=1 /]
Licensing
Advanced licensing is no more; it is now only Standard, Enterprise and Enterprise Plus.  All license models are based on vRAM usage (the amount of virtual RAM allocated to powered-on virtual machines).

Each license type entitles you to a specific amount of vRAM per license you purchase.
[table id=3 /]
The web client (server) needs to be installed in order to see the reporting of the vRAM pooled pricing in the licensing tab on vCenter.

Management

VMware have introduced some new management tools in vSphere 5.  Most notably the vCenter Server Appliance and the Web Client.

vCenter Server Appliance
The new vCenter Server Appliance (vCSA) is based on SUSE Linux Enterprise Server (SLES 11).  It supports pretty much all the features available through the standard vCenter Server, which makes you wonder how long it will be until VMware stops making a Windows-based vCenter Server.  It also means there is no requirement for a Windows OS license or a 'standard' vCenter Server install, which makes deployment much quicker.

There is an optional embedded database, DB2 Express, which supports 5 hosts/50 virtual machines; an external Oracle database is also supported, which raises that to 300 hosts/3,000 virtual machines.  It doesn't support Microsoft SQL Server, however.  It also doesn't support Linked Mode, vCenter Heartbeat, plug-ins or single sign-on using Windows session credentials, and the vSphere Storage Appliance (VSA) isn't currently compatible with the vCSA.
The vCSA is configured through a web interface and can authenticate with Active Directory.  You can also do initial configuration, such as network and time zone settings, through a text-based console.

The vCenter Server Appliance default login is:
username: root
password: vmware

The vCSA is made available as an OVF template.  It is made up of an appliance data disk and an appliance system disk.
Setting it up is very simple: go to vCenter Inventory Service on the console, click test settings and save settings on the vCenter database settings page, then start the vCenter services on the vCSA virtual machine.

For further information see Configuring the VMware vCenter Server Appliance in the vSphere 5 Documentation Center.

Web Client
The all-new web client is built using Adobe Flex.  The web client will be the future interface for vSphere administration, and the current vSphere Client, which is built with C#, will be discontinued.  It requires Internet Explorer 7 or 8, or Firefox 3.5 or 3.6, and Adobe Flash 10 on both the client and server side.
At this moment in time no plug-ins are supported, however as this will be the client of the future I am certain that support will be added in a future update.

The web client is required on the vCenter Server in order to be able to view the vRAM pool utilisation report in the regular vSphere Client.

In order to use the web client you have to register it in the vCenter Server by going to the web client page as shown below.

Once installed and configured you can do pretty much everything you can in the standard vSphere Client.

Syslog Collector
During setup you can configure a syslog server; in fact ESXi will keep alerting you that one hasn't been configured until you set it.  This can be any location, and you can use the same location for multiple ESXi hosts.  Just make sure it is somewhere off the virtual infrastructure you are logging!
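
If you would rather set this from PowerCLI after the install, the syslog target can be pointed at your collector on a per-host basis.  A minimal sketch, with the host and syslog server names as placeholders:

  # Point an ESXi host at a remote syslog collector
  Get-VMHost "esxi01.lab.local" | Set-VMHostSysLogServer -SysLogServer "syslog.lab.local" -SysLogServerPort 514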

Networking

Firewall
To ensure protection from would-be hackers ESXi 5 now includes a firewall.  Unlike the firewall previously used with ESX, this firewall is not based on iptables.
The new firewall is service-oriented and stateless.  Users restrict access to specific services and ESXi maps the appropriate ports.  This is configurable through the firewall section of the software settings area of the Configuration tab in the vSphere Client.


It is also possible to configure firewall settings and apply those to multiple hosts using host profiles.

The firewall can be configured through the command line using five main options: get, set, refresh, load and unload.  It is now possible to restrict access to individual source IPs or ranges, rather than opening ports up to any traffic source as was the case with previous versions.  A worked example follows the list of commands below.

New Esxcli Network Firewall commands
get – Returns the status of the firewall
set-defaultaction –  Updates default actions
set-enabled –  Enables or disables the ESXi firewall
load – Loads the firewall module and rules
refresh – Refresh firewall configuration by reading the rule set files if the firewall module is loaded
unload – Destroy filters and unloads the firewall modules  (Use with caution!)
ruleset list – List rule set information
ruleset set-allowedall – Sets the allowed all flag
ruleset set-enabled – Enables or disables the specified rule set
ruleset allowedip list – Lists the allowed IP addresses of the specified rule set
ruleset allowedip add – Allows access to the rule set from the specified IP address or range of IP addresses
ruleset allowedip remove – Removes access to the rule set from the specified IP address or range of IP addresses
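
Putting a few of these together, a typical session might look like the sketch below; the rule set name and IP range are placeholders, so check the command help on your own build for the exact syntax.

  # Check whether the firewall is enabled and what its default action is
  esxcli network firewall get
  # List all rule sets and whether they are enabled
  esxcli network firewall ruleset list
  # Stop the sshServer rule set from accepting traffic from any source...
  esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
  # ...then only allow a specific management subnet
  esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.1.0/24
  esxcli network firewall ruleset allowedip list --ruleset-id sshServer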

Port Requirements
A few changes have been made to the port requirements.  The required ports are shown in the table below.
[table id=4 /]
dvSwitch
The dvSwitch now supports Link Layer Discovery Protocol (LLDP), NetFlow and port mirroring.

LLDP
LLDP enables VMware admins to see the configuration of physical switches.  LLDP is set through the advanced settings of the dvSwitch by choosing listen, advertise or both.

Netflow
NetFlow helps to monitor the network and gain visibility of VM traffic.  It can be used for profiling and billing purposes, and it supports intrusion detection, network forensics and compliance.  It sends networking data to a third-party analyser.  A flow is a unidirectional sequence of packets that share the same properties.
NetFlow captures two types of flows:
Internal flows – traffic between virtual machines on the same host
External flows – traffic between different hosts, i.e. physical machines to virtual machines

To configure go to dvSwitch settings and click on the Netflow tab and enter the Netflow collector settings.

Once configured go to dvPortgroup settings>Monitoring>Netflow Status and set to enabled.

Port Mirroring
Port mirroring, also referred to as Switched Port Analyzer (SPAN) on Cisco switches, sends a copy of the network packets seen on one switch port to another switch port, i.e. sending mirrored data to a network monitoring device attached to a different port.  Port mirroring overcomes the limitations associated with promiscuous mode.  This can be useful for admins who need granular network information to troubleshoot network issues.  It works in the same way as it does for the physical switches in the environment.

Network IO Control (NIOC) Enhancements
Network IO Control splits traffic into resource pools which can be prioritised.  This was introduced in vSphere 4.1, however vSphere 5 has introduced features such as user-defined resource pools, IP tagging and a bandwidth ‘cop’ for HBR (Host Based Replication) traffic.  (See Site Recovery Manager for more info on HBR)
User-defined network resource pools are similar to CPU and memory resource pools in that you can prioritise traffic I/O based on various requirements.  These pools are set in the resource allocation tab of the dvSwitch.

Continued in part 2