
Connecting to an iSCSI SAN with Jumbo Frames enabled

The best way to add iSCSI storage is to dedicate NICs to iSCSI traffic, on dedicated VMkernel switches, with separate IP subnet address ranges and separate physical switches or VLANs.

Enable Jumbo Frames on a vSwitch
To enable Jumbo Frames on a vSwitch, change the MTU configuration for that vSwitch.  It is best to start with a new switch when setting this up as you will need to delete the existing port groups in order to allow jumbo frames to pass through the port group.
In order to run the necessary commands, connect to the host using the vSphere CLI, which can be downloaded from the VMware website.

To run a vSphere CLI command on Windows
Open a command prompt.
Navigate to the directory in which the vSphere CLI is installed.
cd C:\Program Files\VMware\VMware vSphere CLI\bin
Run the command, passing in the connection options and any other options.
<command>.pl <conn_options> <params>
The extension .pl is required for most commands, but not for esxcli.

Example
vicfg-nas.pl --server my_vcserver --username username --password mypwd --vihost my_esxhost --list

Procedure
Create a new vSwitch and assign the appropriate uplink.
Open the vSphere CLI and run the following vicfg-vswitch command:
vicfg-vswitch --server my_vcserver --username username --password mypwd --vihost my_esxhost -m MTU vSwitch

This command sets the MTU for all physical NICs on that vSwitch. The MTU size should be set to the largest MTU size among all NICs connected to the vSwitch.
Run the vicfg-vswitch -l command to display a list of vSwitches on the host, and check that the configuration of the vSwitch is correct.
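For example, assuming a vSwitch named vSwitch1 and an MTU of 9000 (the server, credentials and vSwitch name are placeholders for your own environment), the command and verification would look something like this:
vicfg-vswitch --server my_vcserver --username username --password mypwd --vihost my_esxhost -m 9000 vSwitch1
vicfg-vswitch --server my_vcserver --username username --password mypwd --vihost my_esxhost -l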

Create a Jumbo Frames-Enabled VMkernel Interface
Use the vSphere CLI to create a VMkernel network interface that is enabled with Jumbo Frames.
On the vSphere CLI, run the vicfg-vmknic command to create a VMkernel connection with Jumbo Frame support.

Procedure
vicfg-vmknic -a -I <ip address> -n <netmask> -m <MTU> <port group name>

Check that the VMkernel interface is connected to a vSwitch with Jumbo Frames enabled.
Run the vicfg-vmknic -l command to display a list of VMkernel interfaces and check that the configuration of the Jumbo Frame-enabled interface is correct.
Configure all physical switches and any physical or virtual machines to which this VMkernel interface connects to support Jumbo Frames.
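As an illustration, assuming the Jumbo Frames port group is named iSCSI, the storage network is 10.10.10.0/24 and the VMkernel port will use 10.10.10.11 (all example values), the commands might look like this:
vicfg-vmknic --server my_vcserver --username username --password mypwd --vihost my_esxhost -a -I 10.10.10.11 -n 255.255.255.0 -m 9000 iSCSI
vicfg-vmknic --server my_vcserver --username username --password mypwd --vihost my_esxhost -l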

Create Additional iSCSI Ports for Multiple NICs
Log in to the vSphere Client and select the host from the inventory panel.
Click the Configuration tab and click Networking.
Select the vSwitch that you use for iSCSI and click Properties.
Connect additional network adapters to the vSwitch.
In the vSwitch Properties dialog box, click the Network Adapters tab and click Add.
Select one or more NICs from the list and click Next.
With dependent hardware iSCSI adapters, make sure to select only those NICs that have a corresponding iSCSI component.
Review the information on the Adapter Summary page, and click Finish.
The list of network adapters reappears, showing the network adapters that the vSwitch now claims.

Create iSCSI ports for all NICs that you connected.
The number of iSCSI ports must correspond to the number of NICs on the vSwitch.

Procedure
In the vSwitch Properties dialog box, click the Ports tab and click Add.
Select VMkernel and click Next.
Under Port Group Properties, enter a network label, for example iSCSI, and click Next.
Specify the IP settings and click Next.
When you enter subnet mask, make sure that the NIC is set to the subnet of the storage system it connects to.
Review the information and click Finish.

CAUTION If the NIC you use with your iSCSI adapter, either software or dependent hardware, is not in the same subnet as your iSCSI target, your host is not able to establish sessions from this network adapter to the target.

Map each iSCSI port to just one active NIC.
By default, for each iSCSI port on the vSwitch, all network adapters appear as active. You must override this setup, so that each port maps to only one corresponding active NIC. For example, iSCSI port vmk1 maps to vmnic1, port vmk2 maps to vmnic2, and so on.

Procedure
On the Ports tab, select an iSCSI port and click Edit.
Click the NIC Teaming tab and select Override vSwitch failover order.
Designate only one adapter as active and move all remaining adapters to the Unused Adapters category.
Repeat the last step for each iSCSI port on the vSwitch.

Configure iSCSI binding to iSCSI adapters
Identify the name of the iSCSI port assigned to the physical NIC. The vSphere Client displays the port’s name below the network label.

In this example, the port names are vmk1 and vmk2.

Use the vSphere CLI command to bind the iSCSI port to the iSCSI adapter.
esxcli swiscsi nic add -n port_name -d vmhba

IMPORTANT For software iSCSI, repeat this command for each iSCSI port connecting all ports with the software iSCSI adapter. With dependent hardware iSCSI, make sure to bind each port to an appropriate corresponding adapter.
Verify that the port was added to the iSCSI adapter.
esxcli swiscsi nic list -d vmhba
Use the vSphere Client to rescan the iSCSI adapter.

This example shows how to connect the iSCSI ports vmk1 and vmk2 to the software iSCSI adapter vmhba33.
1 Connect vmk1 to vmhba33: esxcli swiscsi nic add -n vmk1 -d vmhba33.
2 Connect vmk2 to vmhba33: esxcli swiscsi nic add -n vmk2 -d vmhba33.
3 Verify vmhba33 configuration: esxcli swiscsi nic list -d vmhba33.
Both vmk1 and vmk2 should be listed.

If you display the Paths view for the vmhba33 adapter through the vSphere Client, you see that the adapter uses two paths to access the same target. The runtime names of the paths are vmhba33:C1:T1:L0 and vmhba33:C2:T1:L0. C1 and C2 in this example indicate the two network adapters that are used for multipathing.
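If you prefer to verify this from the command line, the vicfg-mpath command in the vSphere CLI (assuming your CLI version includes it) will list the paths and their runtime names:
vicfg-mpath --server my_vcserver --username username --password mypwd --vihost my_esxhost -l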

The next step is to configure the switches with the relevant settings. For this I have used two Dell PowerConnect 5448 switches and a Dell EqualLogic PS4000XV SAN; however, the information is relevant for most Dell switch and SAN combinations, and most other brands too. The commands may differ slightly but the principles are the same.

Configuring the iSCSI SAN switches

Turn on flow control on the switches:
console> enable
console# configure
console(config)# interface range ethernet all
console(config-if)# flowcontrol on

Enable Spanning Tree (RSTP) and portfast globally:
console(config)# spanning-tree mode rstp
console(config)# interface range ethernet all
console(config-if)# spanning-tree portfast

Confirm that Unicast Storm Control is disabled:
console# show ports storm-control
The state should be returned as Disabled.

Ensure iSCSI awareness is enabled:
console# configure
console(config)# iscsi enable

Disable STP on ports that connect SAN end nodes:
console(config)# interface range ethernet g1,g3
console(config-if)# spanning-tree disable
console(config-if)# exit

Enable LAG between switches
Disconnect the switches from each other before applying the following configuration on both; then connect ports g5, g6, g7 and g8.
console(config)# interface range ethernet g5,g6,g7,g8
console(config-if)# channel-group 1 mode on
console(config-if)# exit
console(config)# interface port-channel 1
console(config-if)# flowcontrol on
console(config-if)# exit

Enable jumbo frames on iSCSI ports (This command will enable it on all ports)
console(config)# port jumbo-frame
This setting will take effect only after copying running configuration to startup configuration and resetting the device.

Configure VLANs for vMotion
console(config)# vlan database
console(config-vlan)# vlan 2
console(config-vlan)# exit
console(config)# interface vlan 2
console(config-if)# name vMotion
console(config-if)# exit
console(config)# interface range ethernet g2,g4
console(config-if)# switchport mode general
console(config-if)# switchport general pvid 2
console(config-if)# switchport general allowed vlan add 2 tagged
console(config-if)# switchport general acceptable-frame-type tagged-only
console(config-if)# exit
console(config)# interface vlan 2
console(config-if)# ip address 10.10.10.1 255.255.255.0
console(config-if)# exit
console(config)# exit
console# copy running-config startup-config
Overwrite file [startup-config] ?[Yes/press any key for no].
console# reload

Log into the switch and set the switch name and time synchronisation options.
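As a rough sketch, on the PowerConnect 5448 this looks something like the following; the hostname and SNTP server address are examples, and the exact syntax may vary between firmware versions:
console# configure
console(config)# hostname iSCSI-SW1
console(config)# clock source sntp
console(config)# sntp server 10.10.10.5
console(config)# exit
console# copy running-config startup-config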

VMware View 4.6 Overview

VMware View 4.6

VMware View 4.6 is out, and with it come new features.  A full list of improvements is available here.

In the words of VMware, VMware View is the leading desktop virtualisation solution. It provides a virtualised desktop infrastructure which can leverage existing virtual infrastructures and provide a cost-effective, centrally managed desktop deployment.

VMware View offers the ability for desktop administrators to virtualize the operating system, applications, and user data and deliver modern desktops to end-users.

View Manager

VMware View Manager is an enterprise-class virtual desktop manager, and a critical component of VMware View.

IT administrators use VMware View Manager as a central point of control for providing end-users with secure, flexible access to their virtual desktops and applications. It leverages tight integration with VMware vSphere to help customers deliver desktops as a secure, managed service. Extremely scalable and robust, a single instance of VMware View Manager can broker and monitor tens of thousands of virtual desktops at once, using the intuitive web-based administrative interface for creating and updating desktop images, managing user data, enforcing global policies, and more.

Ok, so that’s the official description, but how does it all fit together?
VMware View is made up of the following core components.

View Manager Components

VMware View Connection Server—Manages secure access to virtual desktops and works with VMware vCenter Server to provide advanced management capabilities.

VMware View Agent—Provides session management and single sign-on capabilities.
VMware View Client—Enables end-users on PCs and thin clients to connect to their virtual desktops through the VMware View Connection Server.
Use View Client with Local Mode to access virtual desktops even when disconnected without compromising on IT policies.
VMware vCenter Server with View Composer—Enables administrators to make configuration settings, manage virtual desktops, and set entitlements of desktops and assignment of applications.
View Transfer Server—Transfers desktops to client PCs and laptops when using offline mode.
View Security Server—An optional server, placed in a DMZ, which allows RDP and PCoIP connections over the WAN.

This diagram from the VMware Visio templates depicts a typical View deployment, taking advantage of View Linked Clones with Offline Mode, ThinApp and PCoIP.

 

Servers required

  • Domain Controller
  • vCenter Server – View Composer installed (cannot use IIS or be a domain controller)
  • View Connection server, preferably two (cannot have any other View roles, use IIS or be a domain controller)
  • View transfer server for Linked-Clones with Offline Mode (Cannot have any other roles.  Can be a physical server)
  • Database server for events and View Composer database
  • Optional View Security Server for WAN RDP and PCoIP connectivity

View Composer

View Composer is installed on the vCenter Server. It provides storage-saving linked clones, rapid desktop deployment, quick updates, patch management and tiered storage options.
View Composer can utilise QuickPrep or Sysprep, system automation tools for creating unique operating system instances in Microsoft Active Directory.
Changes to the master image can be sent out to all linked clones by running a recompose operation. Running a refresh operation on a linked clone resets it back to the original state of the master image.
This is useful if users are experiencing issues with their linked clone, as it is a way of setting it back to default.
Each linked-clone user can have their own persistent data disk, which will contain all of their unique user data, documents and settings.

Linked-Clones with Offline Mode

A linked clone is made from a snapshot of the parent.  All files available on the parent at the moment of the snapshot continue to remain available to the linked clone. On-going changes to the virtual disk of the parent do not affect the linked clone, and changes to the disk of the linked clone do not affect the parent.  This provides a secure master template machine that can be used to create additional clones.

A linked clone must have access to the parent. Without access to the parent, a linked clone is disabled.

Offline mode allows users to check out the desktop and use it on a PC or laptop, for instance when travelling on a train, and then check it back in and synchronise the changes when returning to the office.

VMware ThinApp

ThinApp simplifies application delivery by encapsulating applications in portable packages that can be deployed to many end point devices while isolating applications from each other and the underlying operating system.

ThinApp virtualizes applications by encapsulating application files and registry into a single ThinApp package that can be deployed, managed and updated independently from the underlying operating system (OS). The virtualized applications do not make any changes to the underlying OS and continue to behave the same across different configurations for compatibility, consistent end-user experiences, and ease of management.

PCoIP

PCoIP supports WAN connections with less than 100 kbps of peak bandwidth and up to 250 ms of latency; however, I recommend a minimum 1 Mbps upload speed across the WAN with less than 150 ms of latency.
Average PCoIP session bandwidth for an active office worker may be in the 80-150 kbps range, dropping to nearly zero when the session is not in use.
It is recommended that the infrastructure uses an offload card, as PCoIP rendering is fairly resource intensive on the hosting server.
A PCoIP security gateway removes the need for a VPN connection.  This became available in the latest VMware View 4.6 release.
Modern thin client devices, like the zero clients from Wyse, are designed specifically for connecting to a virtual desktop environment; these devices support PCoIP out of the box with no major configuration required to connect them to the virtual desktop infrastructure.

vShield Endpoint

vShield Endpoint provides an API that allows third-party anti-virus vendors to scan machines at the hypervisor level, rather than at the individual virtual machine level, removing unnecessary load from the individual clients.

In the future this will be the standard way that anti-virus scanning is carried out for virtual desktop infrastructure, and for server infrastructure too. The current offering is from Trend Micro only, and is limited to scanning 15 machines per virtual appliance, but future developments from other providers may support more virtual machines.

vShield Endpoint is included in the cost of VMware View Premier.

ThinPrint

ThinPrint allows a view client to utilise the print devices installed on the connecting client machine so that a user can seamlessly print to their default local printer without having to install any drivers.

Licensing

View Licensing

VMware View is available using two licensing models, Enterprise and Premier.  The differences between the two are illustrated in the table below.

Microsoft Licensing

Windows 7 requires a KMS server for automatic activation when provisioning desktops. This can be a Windows Server 2003, 2008 or 2008 R2 server; however, the following caveats apply.

  • Must have at least 5 servers checked in for server activation to occur, or 25 Windows 7 or Vista machines checked in for client activation to occur.
  • Windows Server 2008 is not supported as a KMS host to activate Windows 7 and Office 2010.
  • A patch is available to allow activation of Windows 7 client machines. (A Windows Server 2008 R2 KMS key is required.)
  • A patch is not available to allow activation of Office 2010 clients.
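Once a suitable KMS host is available, a Windows 7 master image can be pointed at it and activated manually with the built-in slmgr script (the KMS host name below is an example):
cscript slmgr.vbs /skms kms01.example.local:1688
cscript slmgr.vbs /ato
cscript slmgr.vbs /dlv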

Hardware

Hardware requirements will vary depending on individual circumstances; however, as a ballpark guideline, use the figures below.

A View infrastructure to support 30 or 100 users will require the following core components.

Server Hardware

30 users
Two ESXi hosts (minimum), ideally three: two for workstations and one for servers (an existing virtual infrastructure will do for the servers). Approx. 32 GB RAM, dual core, for 30 VMs.

100 users
Four ESXi hosts: three for workstations and one for servers. Approx. 48 GB RAM, dual core, R710, for 35 VMs.

Storage

To leverage the advanced VMware features HA and DRS, shared central storage is required.
This can be achieved using a storage area network (SAN).