Archive for: August, 2010

VMware NIC Trunking Design

Having read various books, articles, white papers and best practice guides, I have found it difficult to find consistently good advice on vNetwork and physical switch teaming design, so I thought I would write my own based on what I have tested and configured myself.

To begin with, I must say I am no networking expert and may not cover some of the advanced features of switches, but I will provide links for further reference where appropriate.

 

The basics

Each physical ESX(i) host has at least one physical NIC (pNIC) which is called an uplink.

Each uplink is known to the ESX(i) host as a vmnic.

Each vmnic is connected to a virtual switch (vSwitch).

Each virtual machine on the ESX(i) host has at least one virtual NIC (vNIC) which is connected to the vSwitch.

The virtual machine is only aware of the vNIC; only the vSwitch is aware of the uplink-to-vNIC relationship.

This setup offers a one to one relationship between the virtual machine (VM) connected to the vNIC and the pNIC connected to the physical switch port, as illustrated below.

When adding another virtual machine, a second vNIC is added. This in turn is connected to the vSwitch, and the two virtual machines share the same pNIC and the physical port that pNIC is connected to on the physical switch (pSwitch).

When adding more physical NICs we then have additional options with network teaming.

 

NIC Teaming

NIC teaming offers us the option to use connection-based load balancing, which balances by the number of connections rather than by the amount of traffic flowing over the network.

This load balancing also provides resilience: the uplinks are monitored, and if a link goes down, whether it is the physical NIC or the physical port on the switch, traffic is resent over the remaining uplinks so that none is lost.  It is also possible to use multiple physical switches, provided they are all in the same broadcast domain.  What NIC teaming will not do is allow traffic to be sent over multiple uplinks at once, unless you configure the physical switches correctly.
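If you have PowerCLI to hand, it is worth checking how a vSwitch is currently teamed before changing anything. The following is only a sketch, assuming PowerCLI is already connected to the host with Connect-VIServer; the host name and vSwitch0 are placeholders for your own values:

Get-VMHost <esx_host> | Get-VirtualSwitch -Name vSwitch0 | Get-NicTeamingPolicy

This returns the load balancing policy, the failover detection method and the active/standby adapter order that the sections below discuss.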

There are four options with NIC teaming, although the fourth is not really a teaming option:

  1. Port-based NIC teaming
  2. MAC address-based NIC teaming
  3. IP hash-based NIC teaming
  4. Explicit failover

Port-based NIC teaming

Route based on the originating virtual port ID, or port-based NIC teaming as it is commonly known, does what it says and routes the network traffic based on the virtual port on the vSwitch that it came from.  This type of teaming doesn't allow a virtual machine's traffic to be spread across multiple uplinks; it keeps a one-to-one relationship between the virtual machine and the uplink port when sending and receiving to all network devices.  This can lead to a problem where the number of physical ports exceeds the number of virtual ports, as you would then end up with uplinks that don't do anything.  As such, the only time I would recommend using this type of teaming is when the number of virtual NICs exceeds the number of physical uplinks.
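For reference, the policy can also be set from PowerCLI rather than the vSphere Client. This is only a sketch, using the same placeholder host and vSwitch as above; LoadBalanceSrcId is the PowerCLI value for route based on originating virtual port ID:

Get-VMHost <esx_host> | Get-VirtualSwitch -Name vSwitch0 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId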

MAC address-based NIC teaming

Route based on source MAC hash, or MAC address-based NIC teaming, chooses an uplink based on the originating vNIC's MAC address.  This works in a similar way to port-based NIC teaming in that each vNIC will send its network traffic over only one uplink.  Again, the only time I would recommend using this type of teaming is when the number of virtual NICs exceeds the number of physical uplinks.

IP hash-based NIC teaming

Route based on IP hash, or IP hash-based NIC teaming, works differently from the other types of teaming.  It takes the source and destination IP addresses and creates a hash.  A virtual machine can therefore work over multiple uplinks, spreading its traffic across them when sending data to multiple network destinations.

Although IP hash-based teaming can utilise multiple uplinks, it will only use one uplink per session.  This means that if you are sending a lot of data between one virtual machine and a single other server, that traffic will only travel over one uplink.  Using IP hash-based teaming we can then use the teaming or trunking options on the physical switches (depending on the switch type).  IP hash requires a static EtherChannel (again depending on switch type), which for all other teaming policies should be disabled.
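The equivalent PowerCLI sketch for IP hash is below, with the same placeholders as before. Remember that the physical switch ports must already be configured as a matching static EtherChannel/trunk before this is applied:

Get-VMHost <esx_host> | Get-VirtualSwitch -Name vSwitch0 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP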

Explicit failover

This allows you to override the default failover order of the uplinks.  The only time I can see this being useful is if the uplinks are connected to multiple physical switches and you wanted to use them in a particular order, or if you think a pNIC in the ESX(i) host is not working correctly.  If you use this setting it is best to configure those vmnics or adapters as standby adapters, as active adapters are used from the highest in the order downwards.
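A PowerCLI sketch of explicit failover with one active and one standby adapter follows; the vmnic names are placeholders, and -MakeNicActive/-MakeNicStandby expect the physical NIC objects returned by Get-VMHostNetworkAdapter (check Get-Help Set-NicTeamingPolicy for your PowerCLI build):

$vmhost = Get-VMHost <esx_host>
$active = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0
$standby = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic1
Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -LoadBalancingPolicy ExplicitFailover -MakeNicActive $active -MakeNicStandby $standby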

 

 

The other options

Network failover detection

There are two options for failover detection: link status only and beacon probing.  Link status only monitors the status of the link, to ensure that a connection is available at both ends of the network cable; if it becomes disconnected the uplink is marked as unusable and traffic is sent over the remaining NICs.  Beacon probing sends a beacon out on all uplinks in the team, which also checks that the port on the pSwitch is available and is not being blocked by configuration or switch issues.  Further information is available on page 44 of the ESXi Configuration Guide.  Do not use beacon probing with route based on IP hash.

 

Notify switches

This should be left set to yes (default) to minimise route table reconfiguration time on the pSwitches.  Do not use this when configuring Microsoft NLB in unicast mode.

Failback

Failback will re-enable a failed uplink once it is working correctly again and move the traffic that had been sent over the standby uplink back onto it.  Best practice is to leave this set to yes unless you are using IP-based storage, because if the link were to go up and down quickly it could have a negative impact on iSCSI traffic performance.
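These remaining options can be set through PowerCLI in the same way as the load balancing policy. A sketch only, with the same placeholder names, shown here for an IP-storage vSwitch with link status detection, switch notification on and failback off:

Get-VMHost <esx_host> | Get-VirtualSwitch -Name vSwitch1 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -NetworkFailoverDetectionPolicy LinkStatus -NotifySwitches $true -FailbackEnabled $false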

Incoming traffic is controlled by the pSwitch routing the traffic to the ESX(i) host, so the ESX(i) host has no control over which physical NIC the traffic arrives on. As multiple NICs will be accepting traffic, the pSwitch will use whichever one it wants.

Load balancing on incoming traffic can be achieved by using and configuring a suitable pSwitch.

pSwitch configuration

The topics covered so far describe egress NIC teaming; by configuring the physical switches we gain the added benefit of ingress NIC teaming.

Various vendors support teaming on the physical switches; however, quite a few call trunking "teaming" and vice versa.

From the switches I have configured I would recommend the following.

All Switches

A lot of people recommend disabling Spanning Tree Protocol (STP), as vSwitches don't require it because they know the MAC address of every vNIC connected to them.  I have found that the best practice is to leave STP enabled and set the ESX(i)-facing ports to Portfast.  Without Portfast enabled there can be a delay during convergence while the pSwitch relearns the MAC addresses, which can take 30-50 seconds.  Without STP enabled there is a chance of loops going undetected on the pSwitch.

802.3ad & LACP

Link Aggregation Control Protocol (LACP) dynamically builds a link aggregation group (LAG): it makes other switches aware of the multiple links and combines them into one single logical unit.  It also monitors those links, and if a failure is detected it will remove that link from the logical unit.

VMware doesn’t support LACP.  However VMware does support IEEE 802.3ad which can be achieved by configuring a static LACP trunk group or a static trunk.  The disadvantage of this is that if one of those links goes down, 802.3ad static will continue to send traffic down that link.

 

Dell switches

Set Portfast using

spanning-tree portfast

To configure follow my Dell switch aggregation guide

Further information on Dell switches is available through the product manuals.

Cisco switches

Set Portfast using

spanning-tree portfast (for an access port)

spanning-tree portfast trunk (for a trunk port)

Set EtherChannel

Further information is available through the Sample configuration of EtherChannel / Link aggregation with ESX and Cisco/HP switches

HP switches

Set Portfast using

spanning-tree portfast (for an access port)

spanning-tree portfast trunk (for a trunk port)

Set static LACP trunk using

trunk <port-list> <trk1...trk60> <trunk | lacp>

Further information is available through the Sample configuration of EtherChannel / Link aggregation with ESX and Cisco/HP switches

 

 

Upgrade an ESXi 4.0 Host to 4.1 with the vihostupdate Utility

1. Check for a scratch partition under the Software Advanced Settings in the Configuration tab of the vSphere Client. If one doesn't exist, configure one and reboot the host before proceeding with the upgrade. (See here for more info)

2. Download and install the VMware vSphere command line interface. (vSphere CLI)

3. Download the upgrade-from-ESXi4.0-to-4.1.0-0.0.build#-release.zip by clicking the VMware ESXi 4.1 Installable option. Save it on the machine with the vSphere CLI installed on it.

4. Power off running machines and place host in maintenance mode.

5. Install the bulletin by running the following from the vSphere CLI:

cd "C:\Program Files\VMware\VMware vSphere CLI\bin"

vihostupdate.pl --server <host_name_or_IP_address> -i -b <location_of_the_ESXi_upgrade_ZIP_bundle> -B ESXi410-GA-esxupdate

Enter the username and password when prompted for them by the ESXi host.

6. Install the upgrade bulletin by running the following from the vSphere CLI:

vihostupdate.pl --server <host_name_or_IP_address> -i -b <location_of_the_ESXi_upgrade_ZIP_bundle> -B ESXi410-GA

Enter the username and password when prompted for them by the ESXi host.

7. Verify that the bulletins are installed on the ESXi host by running the following:

vihostupdate.pl --server <host_name_or_IP_address> --query

8. Reboot the host

SBS2008 name resolution issues

If you're running Microsoft Small Business Server 2008 (aka SBS 2008), you may experience occasional internet problems resolving certain websites and domain names…

The problem is caused by 'Root Hints', the method the server is using to resolve DNS requests.

There are several workarounds to the issue. These include restarting the DNS Server service every few days (impractical), clearing the DNS cache (again not ideal), switching to DNS forwarders (which resolve DNS requests using your ISP's DNS servers) and setting MaxCacheTtl to 2 days or greater…

To set MaxCacheTtl to 2 days or more, use the following:

  1. Start Registry Editor (Regedit.exe).
  2. Drill down to the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters
  3. Right-click in the right-hand window, select New > DWORD (32-bit) Value and give it the name MaxCacheTtl.
  4. Double-click the new value, select 'Hexadecimal' and enter the value 0x2A300.
  5. Click OK.
  6. Quit Registry Editor.
  7. Restart the DNS Server service.

This should now resolve the intermittent DNS issues (a scripted alternative is sketched below).
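If you prefer to script the change rather than use Regedit, a PowerShell equivalent along these lines should work. This is a sketch only; run it from an elevated prompt on the SBS server, and note that restarting the DNS Server service will briefly interrupt name resolution:

New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\DNS\Parameters" -Name MaxCacheTtl -PropertyType DWord -Value 0x2A300 -Force

Restart-Service DNS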

Powerchute Network Shutdown – ESXi/vMA install


  1. Download vMA (vSphere Management Assistant).
  2. Highlight the VM host > File > Deploy OVF Template > browse to the vMA folder and select the OVF > Next > Accept Licence > Next > keep the default disk configuration > Next > Finish.
  3. This will create a new VM on the host.
  4. Using the VIC, attach a CD drive to the vMA virtual machine.
  5. Start the virtual machine.
  6. Follow the wizard (the default option is shown in brackets):
     Step 1) Configure IP address, subnet and gateway (DHCP or static)
     Step 2) Configure DNS servers (DHCP or static)
     Step 3) Configure the hostname for the vMA VM, e.g. vima.domainname.local
     Step 4) Confirm the settings
  7. The VM will now apply the settings and restart the VM network.
  8. Enter a password for the vi-admin account.
  9. Open a terminal emulation application such as PuTTY and connect to the vMA VM using its IP address on port 22 (SSH).
  10. Log in as vi-admin, using the password you created in the previous step.
  11. When you are connected you will be presented with a terminal. Enter the following:

vifp addserver <name of server or IP address> (name of server preferred)

  12. Enter the password for the host when prompted (VMware host root user password).
  13. Enter the following:

vifp listservers

  14. This should return the IP address and name of the VMware host that you just added.
  15. Enter the following to enable FastPass to the host:

vifptarget -s <VM host server name>

  16. To confirm the above step has worked, type:

vicfg-nics -l

This should return a list of NICs.

  17. Install the UPS, configure the network management card and configure your settings with the UPS management console (browser).
  18. Insert your media into the VM host and attach it to the vMA virtual machine (i.e. the CD).
  19. Connect to the vMA management console via a terminal emulator.
  20. Log in to the management console.
  21. Create a mount point: sudo mkdir /mnt/cdrom
  22. Change the permissions on the mount point: sudo chmod 666 /mnt/cdrom
  23. Type: sudo mount -t iso9660 /dev/cdrom /mnt/cdrom
  24. Type: cd /mnt/cdrom/ESXi
  25. Type: sudo cp /etc/vma-release /etc/vima-release
  26. Type: sudo ./install.sh
  27. Accept the licence agreement.
  28. Press Enter to keep the default PowerChute instance.
  29. Press Enter to keep the default installation directory.
  30. Confirm the installation.
  31. This will install the Java Runtime.
  32. Type: cd /opt/APC/PowerChute/group1
  33. Type: sudo ./PCNSConfig.sh
  34. Enter your root password.
  35. Select your UPS configuration option.
  36. Enter the management card IP address.
  37. Select Yes when asked if you want to register these settings.
  38. Select Yes to start the PowerChute Network Shutdown service.
  39. You will then be shown a configuration summary.


Exchange 2010 Anti-Spam modules install

  1. Run the following command from the C:\Program Files\Microsoft\Exchange Server\V14\Scripts folder.

.\Install-AntispamAgents.ps1

2. After the script has run, restart the Microsoft Exchange Transport service by running the following command.

Restart-Service MSExchangeTransport

For all anti-spam features to work correctly, you must have at least one IP address of an internal SMTP server set on the InternalSMTPServers parameter on the Set-TransportConfig cmdlet. If the Hub Transport server on which you’re running the anti-spam features is the only SMTP server in your organization, enter the IP address of that computer.

Set-TransportConfig -InternalSMTPServers IP_address
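As a quick check after the restart, the Exchange Management Shell can confirm that the anti-spam agents are registered and that the internal SMTP server list has been set. A sketch only, using standard Exchange 2010 cmdlets:

Get-TransportAgent

Get-TransportConfig | Format-List InternalSMTPServers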

Exchange 2007 Powershell commands

Receive Connector

—————–

Define the FQDN:

Set-ReceiveConnector "<Connector-Name>" -Fqdn:name.company.ca

Set up Anonymous Authentication:

(take a note of the current groups)

Get-ReceiveConnector "<Connector-Name>" | Select PermissionGroups

(use the current value plus the new value)

Set-ReceiveConnector "<Connector-Name>" -PermissionGroups:<AnonymousUsers, ExchangeUsers, ExchangeServers, ExchangeLegacyServers, Partners>

ADMINISTRATION

++++++++++++++

New Mailbox User:

New-Mailbox -Name "<name>" -Alias <alias> -Database "<mailboxdatabasename>" -UserPrincipalName <alias>@<domain.local> -OrganizationalUnit <domain.local>/OU/OU -Password "<password>"

Move all users from one server to another:

Get-Mailbox -Server SRV1 | Move-Mailbox -TargetDatabase SRV2

INFO

++++

Exchange Organization Name:

Get-OrganizationConfig | select name

STATISTICS

++++++++++

Mailbox Sizes in MB:

Get-MailboxStatistics | Sort-Object TotalItemSize -Descending | ft DisplayName,@{expression={$_.TotalItemSize.Value.ToMB()}},ItemCount

Find account with SMTP address:

Get-Mailbox | where {$_.EmailAddresses -contains "emailaddress@domain.com"} | select name

Get-Recipient -identity emailaddress@domain.com

Current connections to Exchange Server:

get-logonstatistics -server <servername> | select username,clientversion

Count of mailboxes per database:

Get-MailboxDatabase | Get-MailboxStatistics | Group-Object -property:database | Sort-Object -property:count | Format-Table count, name -AutoSize

Count of mailboxes per server:

Get-MailboxDatabase | Get-MailboxStatistics | Group-Object -property:serverName | Sort-Object -property:count | Format-Table count, name -AutoSize

Count of mailboxes in entire Exchange Org:

Get-MailboxDatabase | Get-MailboxStatistics | Group-Object

count of mailboxes grouped by Email Address Policy enabled/ disabled:

Get-Mailbox | Group-Object -property:emailaddresspolicyenabled | Sort-Object -property:count | Format-Table count, name -Autosize

CERTIFICATES

++++++++++++

Generate Certificate Request:

New-ExchangeCertificate -GenerateRequest -Path <path for csr file>\<certname>.csr -KeySize 1024 -SubjectName "c=GB, s=<County>, l=<town>, o=<companyname>, cn=<commoncertname/externalfqdn>" -DomainName <autodiscover.domain.com, servernetbiosname, serverfqdn> -PrivateKeyExportable $True

Import Certificate:

Import-ExchangeCertificate -Path <drive>:\<path>\<certfilename>.cer

Find Thumbprint for Imported Certificate:

Dir cert:\LocalMachine\My | fl

Bind Certificate to Exchange Services:

Enable-ExchangeCertificate -Thumbprint <thumbprint> -Services "SMTP,IIS"

Get Certificate Status:

Get-ExchangeCertificate

Export Certificate:

$password = Read-Host "Enter Password" -AsSecureString

Export-ExchangeCertificate -Thumbprint <certthumbprint> -Password $password -Path <pathtoexportcert.pfx>

Import PFX Certificate with Public Key:

Import-ExchangeCertificate -Path c:\certificates\import.pfx -Password:(Get-Credential).password

[Anything can be entered in username, enter public key in password]

SCR (Standby Continuous Replication)

++++++++++++++++++++++++++++++++++++

Enable SCR on Storage Group:

enable-storagegroupcopy -identity <storagegroupGUID> -standbymachine exch1b -ReplayLagTime 0.1:0:0

Disable SCR on Storage Group:

disable-storagegroupcopy -identity <storagegroupGUID> -standbymachine exch1b

Get SCR Status of Storage Group:

get-storagegroupcopystatus -identity <storagegroupGUID> -standbymachine exch1b

CCR/LCR

+++++++

Suspend replication:

suspend-storagegroupcopy -identity <clustername>\<storagegroupname>

Resume replication:

Resume-storagegroupcopy -identity <clustername>\<storagegroupname>

Reseed passive node:

– suspend-storagegroupcopy -identity <clustername>\<storagegroupname>

– Remove all database, transaction log and checkpoint files from the passive node

– Update-StorageGroupCopy <clustername>\<storagegroupname>

– Get-StorageGroupCopyStatus (to check replication after the copy is resumed)

CLUSTER

+++++++

Which node is the active/ passive node:

Get-ClusteredMailboxServerStatus -Identity <ClusteredMailboxServerName>

ClusteredStorageType for all Mailbox Servers (Shared = SCC, NonShared = CCR, None = non-clustered Mailbox):

Get-MailboxServer

Manually Switch Active Node:

Move-ClusteredMailboxServer -Identity:<ClusteredMailboxServerName> -TargetMachine:<NodeName> -MoveComment:"<comment>"

NLB/ CAS/ HUB

+++++++++++++

Check which users logged onto which CAS server:

Get-LogonStatistics -Server <CASServerName>

Enable Outlook Anywhere:

Enable-OutlookAnywhere -Server <CASServerName> -SSLOffloading:$false -ExternalHostname <externalFQDN> -ClientAuthenticationMethod basic -IISAuthenticationMethods basic

ADMINISTRATION

++++++++++++++

Specify -Password option as secure string:

$password = Read-Host "Enter password" -AsSecureString

Create New Mailbox:

New-Mailbox -Name "<name>" -Database "<First Storage Group\Mailbox Database>" -OrganizationalUnit domain.local/OU/OU -Alias <alias> -UserPrincipalName <user>@<domain.local> -FirstName Chris -LastName Ashton -DisplayName "Chris Ashton" -Password $password

STATISTICS/ INFO

++++++++++++++++

Check if mailbox still in dumpster (mailbox retention period expired):

Get-MailboxStatistics | where { $_.DisconnectDate -ne $null } | select DisplayName,DisconnectDate

MONITORING/ TROUBLESHOOTING

+++++++++++++++++++++++++++

Test Outlook Anywhere:

Test-WebServicesConnectivity | fl

Test OWA:

Test-OwaConnectivity | fl

Test AutoDiscover:

Test-OutlookWebServices | fl

The counter Processor\% Processor Time should not be consistently over 75%. Although spikes will regularly occur, if the processor is being so heavily utilized, you should consider upgrading. On an Edge Transport server, this can happen if Forefront for Exchange is deployed and the antivirus scanning engines are taking up too many of the processor's cycles. On a server that is not under duress, the Logical Disk\Queue Length counter should be 4 or lower, Memory\Pages/sec should not be regularly higher than 10, and Network Interface\Output Queue Length should not be higher than 5.

Mailbox servers rely heavily on the disk subsystem. On a server that is coping well with its load, the Logical Disk\Queue Length counter should be 4 or lower. A rating of 15 would suggest that the volume is being used so heavily that the entire server is crawling to a halt. In this situation, you'd want to move some mailboxes off this server or put in faster disks, such as those in a striped volume. On a server that is not under duress, the counter Processor\% Processor Time should not be consistently over 75%, Memory\Pages/sec should not be regularly higher than 10, and Network Interface\Output Queue Length should not be higher than 5.
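If you want to sample these counters from PowerShell rather than Performance Monitor, Get-Counter (PowerShell 2.0 or later) can be pointed at them. A sketch only; the LogicalDisk counter used here is Avg. Disk Queue Length, which is the closest match to the queue length figure discussed above:

Get-Counter -Counter "\Processor(_Total)\% Processor Time","\Memory\Pages/sec","\LogicalDisk(_Total)\Avg. Disk Queue Length","\Network Interface(*)\Output Queue Length" -SampleInterval 5 -MaxSamples 12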

Exchange 2003 to 2010 Transition guide

This is a guide for transitioning Exchange 2003 to 2010 in the same domain.

N.B. (An Exchange migration is from one Active Directory forest to a different Active Directory forest.)

  1. Bring the Exchange organization to Exchange Native Mode.
  2. Upgrade all Exchange 2003 Servers to Exchange Server 2003 Service Pack 2.
  3. Bring the AD forest and domains to Windows Server 2003 Functional (or higher) levels.
  4. Upgrade at least one Global Catalog domain controller in each AD Site that will house Exchange Server to Windows Server 2003 SP2 or greater.
  5. Run ServerManagerCmd -i NET-Framework from 2008 R2 server.
  6. (Only run if setup is not being run by a Schema, Enterprise and Domain Admin.) Prepare a Windows Server 2008 (RTM or R2) x64 edition server for the first Exchange 2010 server.
  7. Install the AD LDIFDE tools on the new Exchange 2010 server (to upgrade the schema). ServerManagerCmd -i RSAT-ADDS
  8. Install Web Server role on the CAS server and any necessary prerequisites. (If additional Exchange servers are on the network)
  9. Set the Net.Tcp Port Sharing service to automatic
  10. Install the Office 2007 converter filter Pack http://go.microsoft.com/fwlink/?LinkId=123380
  11. Run setup on the Exchange 2010 server, upgrade the schema, and prepare the forest and domains. (Setup runs all in one step or separate at the command line.)
  12. Install CAS server role servers and configure per 2010 design. (If required)
  13. Install Mailbox servers and configure Databases (DAG if needed)
  14. Install Hub Transport role and configure per 2010 design.
  15. Create public folder replicas on Exchange 2010 servers using the shell. Run cd <Exchange Installation Path>\Scripts, then run .\AddReplicaToPFRecursive.ps1 -TopPublicFolder <FolderName> -ServerToAdd <ServerName> in the Exchange Shell, or use the Exchange 2010 Public Folder tool.
  16. Transfer inbound and outbound mail traffic to the HT servers.
  17. Rehome the Offline Address Book (OAB) generation server to Exchange Server 2010.
  18. Transfer OWA, ActiveSync, and Outlook Anywhere traffic to new CAS servers.
  19. Move mailboxes to Exchange Server 2010 using Move Mailbox Wizard or Powershell.

    Individual users: New-MoveRequest -Identity "someuser@corp.local" -DomainController DC02 -TargetDatabase "Mailbox Database 01"

    Entire mailbox databases: .\MoveMailbox.ps1 -MailboxDatabase "SRV-01\First Storage Group\Mailbox Store (SRV-01)" -TargetDatabase "Mailbox Database" (note: .ps1 scripts must be run from the Exchsvr\Scripts location in the Management Shell)

  20. Run Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest to remove completed move requests, otherwise mailboxes can't be moved again.
  21. Update Email address policy to Exchange 2010

    Get-EmailAddressPolicy | where {$_.RecipientFilterType -eq "Legacy"} | Set-EmailAddressPolicy -IncludedRecipients AllRecipients

  22. Update Address Lists

Set-AddressList "All Users" -IncludedRecipients MailboxUsers

Set-AddressList "All Groups" -IncludedRecipients MailGroups

Set-AddressList "All Contacts" -IncludedRecipients MailContacts

Set-AddressList "Public Folders" -RecipientFilter { RecipientType -eq 'PublicFolder' }

Set-GlobalAddressList "Default Global Address List" -RecipientFilter {(Alias -ne $null -and (ObjectClass -eq 'user' -or ObjectClass -eq 'contact' -or ObjectClass -eq 'msExchSystemMailbox' -or ObjectClass -eq 'msExchDynamicDistributionList' -or ObjectClass -eq 'group' -or ObjectClass -eq 'publicFolder'))}

  23. Rehome the Public Folder Hierarchy on the new Exchange Server 2010 Admin Group.
  24. Transfer all Public Folder replicas to Exchange Server 2010 public folder store(s).
  25. Delete the Public and Private Information Stores from the Exchange 2003 server(s).
  26. Delete the Routing Group Connectors to Exchange Server 2003.

    Get-RoutingGroupConnector | Remove-RoutingGroupConnector

  27. Delete the Recipient Update Service agreements using ADSIEdit.

    "CN=Recipient Update Services,CN=Address Lists Container,CN=Commonname,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain,DC=local"

  28. Uninstall all Exchange 2003 servers through Add/Remove Programs.