Citrix XenServer 7: Why Should I Upgrade?

Citrix XenServer has been my favorite Type 1 hypervisor platform for the past year for several reasons: it's closer to VMware than any other hypervisor without VMware's premium price tag, and its control domain runs CentOS and features a robust, intuitive CLI. I've been watching XenServer 7 (codename Dundee) with much excitement since it went into public alpha, and I'm happy to announce it arrived as a full release on May 24th. Having fully upgraded my own Citrix pool at home and built one in the office at work, I wanted to outline what's new in XenServer 7 and some gotchas before my next blog post, which will detail the upgrade process.


New Stuff!

We all like new and shiny; here's a shortlist of new features and major improvements:

  • Improved graphics: it's no surprise that we see performance improvements in this area, as Citrix has been the market leader in VDI for some time now.
  • Configuration Maximum increases: as with any new release of a hypervisor, configuration maximums have increased; details below:
    • Hosts support up to 5TB RAM
    • Hosts support up to 288 CPUs
    • Hosts support up to 4096 Storage Repositories
    • Guest VMs can now support up to 1.5TB of RAM
    • Guest VMs support up to 32 vCPUs
  • Docker Support: while this rolled out in 6.5 SP1, Docker support now includes Docker containers running in Windows Server 2016
  • Automated Windows VM XenTool Management
    • This was a major selling feature for upgrading; however, it should be noted that this applies only to new VMs created in XenServer 7 and does not include upgraded VMs.
    • Also of note: the process to upgrade XenTools is doggedly slow, and the guest VM runs terribly until the install finishes and there have been a few reboot cycles. Once I/O is finally optimized, performance is better than it was previously; however, getting there can be painful, and you may run into issues with static IPs disappearing or the network adapter showing up as a completely new adapter (which caused some headaches with Windows DHCP Server).
  • Support for SMB for VM disks (I haven’t personally used this feature as of yet)
  • SSH Console: we've all been familiar with the RDP prompt when using the console on a Windows guest. The console now sports an Open SSH button for Linux VMs that launches a PuTTY SSH session.
  • Dom0 Improvements: this is what we've all been waiting for. Dom0 is now substantially larger at 18GB, so no more worries about running out of space from logs or patching (at least not nearly as quickly).
    • The second notable improvement is the use of cgroups, which help keep a heavily loaded host manageable.
  • Server Health Check: this is the nagging prompt when first connecting to a new XenServer asking if you want to collect health reports and send them to Citrix. Unless you're using a paid license this is a mostly useless feature; however, I can see the benefit if you are paying for support.
  • The OS: XenServer 7 now runs on CentOS 7. I was simply shocked when 6.5 came out to find that it was still running on the old trusty CentOS 5. While I see this as a great step forward, be advised that if you have any scripts that rely heavily on sysvinit functions, this release now uses systemd, and some tweaking to your scripts and automation tools may be required.
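As a quick illustration of the kind of tweak involved, here's a sketch of the sysvinit-to-systemd translation (xapi is XenServer's management service; treat the exact unit names in your own scripts as something to verify):

```shell
# Old sysvinit-style calls in dom0 scripts (CentOS 5 era):
service xapi restart
chkconfig xapi on

# systemd equivalents on a XenServer 7 (CentOS 7) dom0:
systemctl restart xapi
systemctl enable xapi
systemctl status xapi
```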


What Still Hasn’t Changed

  • Mounting ISO SRs from SMB v3.0 requires the "Network security: LAN Manager authentication level" local policy to be set to "Send LM & NTLM - use NTLMv2 session security if negotiated" on the Windows server hosting the share
  • Active Directory integration still requires the same change as well

The Verdict

Other than the disappointment around the Windows XenTools mentioned above, this was a very nice release that packs a lot of great new features and iterates successfully on what has made Citrix XenServer a great hypervisor. I honestly think that if it were not for a lack of marketing muscle, slow release cycles, and a lack of partner integrations, Citrix would be a bigger player in this space. Holding a VMware VCA, a Citrix CCA, and the Microsoft virtualization certifications, I can say hands down that for 90% of my use cases Citrix XenServer is my go-to hypervisor (unless I'm running 100% Linux, in which case KVM it is, or a company has deep pockets and can afford the premium VMware licensing). Stay tuned: I'll have an upgrade walkthrough coming in the next few weeks.


Setting Up Hyper-V Replication


Hyper-V Replication is a great built-in feature for syncing mission-critical VMs to another host for warm standby. It allows for quick, easy failover and takes minimal effort to set up. While it is intended for failover and failback, I have also used this replication methodology to migrate seven hosts' worth of VMs from a datacenter on one side of the country to the other. Generally speaking, this process works best in a domain environment where both the source and replica servers are joined to the same domain. You will also need to configure vSwitches on the replica server to match those on the source server, or ensure you reconfigure the vSwitches associated with the NICs on individual VMs.

Enabling Replication

The first step in setting up replication is enabling the destination server (the replica server) to receive replicas and ensuring that the proper firewall ports are open on both the Windows Firewall and any firewall appliance that sits between the source Hyper-V server and the replica server. From the Hyper-V MMC, choose Hyper-V Settings on the right. Once the dialog window appears, click Replication Configuration, then check the boxes to enable this computer as a replica server using Kerberos and HTTP.
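The same replica-server settings can also be applied from PowerShell; a minimal sketch (the storage path is a placeholder of my own, adjust to taste):

```powershell
# Enable this host as a replica server over Kerberos/HTTP (port 80)
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"
```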


Next, we will need to ensure that our inbound rules allow for inbound replication. From Control Panel, open Windows Firewall with Advanced Security and enable the following rules (if not already enabled):
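If you prefer to script it, Windows ships a predefined inbound rule for the HTTP replication listener that can be enabled with PowerShell:

```powershell
# Enable the built-in inbound rule for Hyper-V Replica over HTTP (port 80)
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
```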


Enabling Replication on VMs

enable replication

Now that we have set up the replica server, we can proceed to the source Hyper-V host and set up VM replication. To enable VM replication, right click the VM and choose Enable Replication. At this point you will get an Enable Replication wizard; proceed through the wizard, entering the hostname of the replica server, choosing HTTP on port 80 with Kerberos, choosing to sync all VHDs or only specific ones, setting the replication frequency, and choosing the initial replication method. Once this is done, your initial replica will begin to sync. After the initial replication completes, the VM will sync deltas at the replication frequency you specified in the wizard. From this point you can keep tabs on replication in the Replication tab at the bottom of the Hyper-V MMC window for the VM you have highlighted.
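The wizard steps above map to a pair of cmdlets; a sketch, assuming a placeholder VM named "vm01" and a replica server named "hv-replica":

```powershell
# Configure replication for the VM over Kerberos/HTTP, syncing every 5 minutes
Enable-VMReplication -VMName "vm01" -ReplicaServerName "hv-replica" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300

# Kick off the initial replication immediately
Start-VMInitialReplication -VMName "vm01"
```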


Planned Failover

To initiate a planned failover event (whether for testing, DR needs, or migration) you will want to make sure a few things are in place. Set up the source Hyper-V server as a replica server following the steps above; this is needed if you will want to fail back to the source server. Next, shut down the VM you are failing over, then right click the VM, choose Replication, and then Planned Failover.

planned failover

Next you will see a planned failover screen; click the Fail Over button to begin failing over.

planned failover2

If you are performing a one-way migration, you can safely ignore warnings about failback. Once the last replication cycle completes after shutdown, you can disable replication on the replica side and start the VM.
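The whole planned-failover sequence can also be driven from PowerShell; a sketch, again using the placeholder VM name "vm01":

```powershell
# On the primary server: shut down and prepare the planned failover
Stop-VM -Name "vm01"
Start-VMFailover -VMName "vm01" -Prepare

# On the replica server: complete the failover, reverse the
# replication direction, and start the VM
Start-VMFailover -VMName "vm01"
Set-VMReplication -VMName "vm01" -Reverse
Start-VM -Name "vm01"
```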


Reverse Replication

To fail back from the replica server to the source Hyper-V host, shut down the VM; once it is shut down, right click the VM on the replica server, choose Replication, then Reverse Replication, and proceed through the prompts. This will perform a final data sync before failing back, so any changes made since the failover event will be synced back to your production server.



Hyper-V Replication is a fantastic tool that is super simple to set up and makes standing up a warm standby painless. A few notes of caution: first, replication is not a suitable replacement for backups. Replication is good for DR in the sense that your RTO will be low with minimal data loss; however, this doesn't change the need to be able to retrieve old files or restore data in the event of a catastrophic server failure or attack, as such changes are quickly replicated from the source server to the replica server. Second, if you are using Microsoft Failover Clustering with Hyper-V, you will need to set up a Hyper-V Replica Broker as a failover cluster role (only one is needed per cluster) to act on behalf of the cluster when replicating VMs in and out of the failover cluster. There are some caveats to this that I will document in a future blog post; however, despite the extra steps, it is still a fairly painless process, and Hyper-V clusters can replicate to standalone Hyper-V hosts and vice versa.

Setting Up a Hyper-V Cluster

If you've ever worked with Hyper-V, you'll recognize it's a fairly simple and straightforward virtualization platform; however, things can get a bit sticky when it comes to clustering. I have worked with well-deployed Hyper-V clusters and also dealt with those that were set up incorrectly. As the popularity of Hyper-V grows, I wanted to create a quick overview of what it takes to deploy a Hyper-V failover cluster.


Join to the Domain

Once you have completed your Windows installation, join each server to the domain using the PowerShell example below, where domain\user is your domain user account (e.g. richsitblog\rstaats) and domainname is your domain name:

Add-Computer -Credential domain\user -DomainName domainname


Installing Roles

Once your servers have completed rebooting, you will want to install roles across all the servers in the cluster you are standing up. To do this, substitute vmh## in the example below with your host names.

Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Install-WindowsFeature Hyper-V, Multipath-IO, Failover-Clustering -IncludeManagementTools -IncludeAllSubFeature}
Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Restart-Computer}


Configuring NIC Teaming (Server)

You will need to ensure you have OOB/LOM or direct console access to the machine, as these changes will cause an interruption in network service.


Assuming you have named your front-end network NICs NIC1 and NIC2, use the following PowerShell statement to build your team. Note that if you are not using LACP you can instead use a different teaming mode such as SwitchIndependent or Static. Additional LoadBalancingAlgorithm options include Dynamic, TransportPorts, IPAddresses, and MacAddresses.

Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2 -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort}

Once your team is successfully built, you will need to configure the IP addressing information for the team.
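That can be done against the team interface on each host; a sketch with example addressing (the IP, prefix, gateway, and DNS values are all placeholders):

```powershell
# Assign a static IP to the team interface (run on each host with its own IP)
New-NetIPAddress -InterfaceAlias "Team1" -IPAddress 10.0.100.11 `
    -PrefixLength 24 -DefaultGateway 10.0.100.1
Set-DnsClientServerAddress -InterfaceAlias "Team1" -ServerAddresses 10.0.100.5
```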


Configuring NIC Teaming (Switch)

In this example I am assuming an LACP channel group is already set up on the switch side. I am also assuming you are utilizing a Cisco IOS or NX-OS device. I am using the following example VLANs (100 – Prod, 200 – Dev, 300 – Test). Please check with your network admin and ensure you have all the correct info before making any changes. For the sake of simplicity, this example assumes the port channels are 101-104 for these VM hosts:

conf t
interface range port-channel 101 - 104
switchport mode trunk
switchport trunk native vlan 100
switchport trunk allowed vlan 100,200,300
end
copy run start

If you are using a 2N network architecture, you will need to perform these actions on both sides of the switch pair.


Configuring vSwitch

Next we will configure a vSwitch using our newly created NIC team and allow it to share the adapter with the management OS:


New-VMSwitch -Name "vSwitch" -NetAdapterName Team1 -AllowManagementOS $true


Configuring Storage Network (Server)

Best practice when using iSCSI for shared storage is to set up your interfaces as individual, discretely IP'd interfaces to allow for maximum queues and throughput. Configure your adapters accordingly for your storage network and ensure that you have connectivity; if you are utilizing jumbo frames, make sure they are enabled on the NICs within the OS, on the switch, and on the storage appliance.
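For the in-OS piece, jumbo frames can be enabled and verified from PowerShell; a sketch assuming storage NICs named Storage1 and Storage2 (placeholder names, and note the exact registry keyword and maximum value can vary by NIC driver):

```powershell
# Enable jumbo frames on the storage NICs (9014 is a common driver value)
Set-NetAdapterAdvancedProperty -Name Storage1,Storage2 `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify with a large, non-fragmenting ping to the storage appliance
# (placeholder IP; 8972 bytes + headers = a full 9000-byte frame)
ping -f -l 8972 10.0.40.10
```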


Configuring Storage Network (Switch)

Assuming again that you are using Cisco networking gear and have these interfaces set up as individual ports, we can set them to an access VLAN. For the sake of simplicity, my example will use ports Eth 101-108 (accounting for 2 ports per server) and assumes they are on a single storage VLAN (in this example, VLAN 400 – Storage)

conf t
interface range eth 101 - 108
switchport access vlan 400
end
copy run start


Configuring MPIO Settings

Before we begin building the cluster and adding LUNs, let's go ahead and get the MPIO configuration out of the way. Again we will kill four (or more) birds with one stone, using Invoke-Command to execute across multiple systems:

Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Enable-MSDSMAutomaticClaim -BusType iSCSI}
Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD}


Choices for the MSDSM global default load balance policy include:

  • LQD – Least Queue Depth
  • RR – Round Robin
  • FOO – Fail Over Only
  • LB – Least Blocks

Connecting an iSCSI LUN:

Open Server Manager, click Tools on the right hand side, and choose iSCSI Initiator.

If this is the first time you have opened the iSCSI Initiator, you will see the following prompt; click Yes.



Entering initiator and target secrets only applies if your storage connection requires CHAP or mutual CHAP authentication.

If you are using mutual CHAP authentication, you will first need to configure the initiator CHAP secret. This can be accomplished by clicking the Configuration tab of the iSCSI Initiator properties and clicking the CHAP button, where you can enter the initiator secret.


Once this is completed, click Apply, then choose the Discovery tab and click the Discover Portal button.


When prompted click the advanced button


Tick the Enable multi-path option and click OK.

Once you have done this, enter the target portal IP and check the box for "Enable CHAP log on", then enter the information as needed. If you are performing mutual CHAP auth, check the box to "Perform mutual authentication".


Select the default Microsoft iSCSI Initiator as the local adapter.

For the initiator IP, choose the IP of your first storage interface from the dropdown.

For the target portal IP, select the IP of the storage device.

Connecting to Targets

If you have not already done so, click the Discovery tab, click the Discover Portal button, and enter the IP address of the iSCSI target:


Click OK, then move to the Targets tab, where you will see all available iSCSI targets listed. Select the target and click Connect (note that if you are using mutual CHAP auth you may be prompted again for the target secret to connect).


Once you have established a connection and see the LUN as connected, highlight the connected LUN and click Properties. From here you will see the following screen:


Click the Add session button and go through the same process as above; however, this time select your second storage NIC's IP as the initiator IP.
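For scripted deployments, the portal and target steps above have PowerShell equivalents; a sketch with placeholder IPs and IQN (CHAP parameters omitted for brevity):

```powershell
# Register the target portal and list the targets it exposes
New-IscsiTargetPortal -TargetPortalAddress 10.0.40.10
Get-IscsiTarget

# Connect the target persistently with MPIO enabled, once per storage NIC IP
Connect-IscsiTarget -NodeAddress "iqn.2016-01.com.example:lun01" `
    -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress 10.0.40.11
Connect-IscsiTarget -NodeAddress "iqn.2016-01.com.example:lun01" `
    -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress 10.0.40.12
```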


Standing Up the Failover Cluster

Open the Failover Cluster Manager on one of the hosts. Inside the MMC, right click Failover Cluster Manager and choose Create Cluster.


You should see a wizard pop up on your screen. Click Next, and on the Select Servers screen enter the hostnames of your servers as shown in the example below:


On the validation screen, choose to run cluster validation. This can take anywhere from 10-60 minutes in my experience, depending on the amount of resources being validated. Next, enter a cluster name when prompted. On the confirmation screen, go ahead and proceed with the box checked to add eligible storage. Once the cluster is built, if it is not given an IP through DHCP you will be prompted to create one; this can be changed after the fact as well. At this point we can proceed to add disks to the cluster and configure quorum.
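The validation and creation steps can also be scripted; a sketch using the same host names, with a placeholder cluster name and management IP:

```powershell
# Validate the prospective cluster nodes
Test-Cluster -Node vmh01,vmh02,vmh03,vmh04

# Create the cluster with a static management IP
New-Cluster -Name "hvcluster01" -Node vmh01,vmh02,vmh03,vmh04 `
    -StaticAddress 10.0.100.50
```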

Adding Disks

From a single host in the cluster, open Disk Management and bring your iSCSI cluster disks online by right clicking them and choosing Online.


Once you have done this you will need to initialize the disk


Right click the raw disk and write an NTFS partition to it. Please note that all disks should use GPT as the partition table type when prompted.


At this point we can add the disks to the cluster. Open Failover Cluster Manager, expand the cluster name, expand the Storage node, and click Disks. From here, click Add Disk:


Once you click Add Disk, all cluster-eligible storage will appear; leave the disks checked and click OK.


Once this is done, right click each disk (except the disk you have allocated for quorum) and choose Add to Cluster Shared Volumes.


At this point, on each of the cluster nodes, you should be able to see the cluster volumes under C:\ClusterStorage\Volume#.

If a disk fails to become a cluster shared volume, ensure you have written a partition to it.
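The disk steps above can be sketched in PowerShell as well (run the disk prep from a single node; the disk number, label, and cluster disk name are placeholders):

```powershell
# Bring the iSCSI disk online, initialize it as GPT, and format it NTFS
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1"

# Add all eligible disks to the cluster, then promote one to a CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```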


Setting Cluster Quorum

Cluster quorum can be set to use node majority, a disk witness, or a file share witness. I personally prefer node majority with a disk witness. To set up quorum options, right click the cluster, choose More Actions, then Configure Cluster Quorum.


Choose the following “Select the Quorum Witness”


click next then choose “Configure Disk Witness”


At this point we can select our disk for the cluster disk witness and continue through the end of the wizard:
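The equivalent one-liner, assuming your witness disk is named "Cluster Disk 2" in Failover Cluster Manager (a placeholder name):

```powershell
# Configure node majority with a disk witness (2012 R2 and later;
# older versions use the -NodeAndDiskMajority parameter instead)
Set-ClusterQuorum -DiskWitness "Cluster Disk 2"
```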


Setting Live Migration Settings

On the left hand side of the Failover Cluster Manager console, choose Networks. Then on the right hand side, select Live Migration Settings:


In the window that appears, you can uncheck networks you don't want used for live migrations and then prioritize the networks you wish to use.


Final Cluster Validation

At this point we are clear to run a final cluster validation. Right click the cluster name and choose Validate Cluster, then run through the full cluster validation wizard. This will take a substantial amount of time to run; pay close attention to any errors or warnings that occur.


Adding Virtual Machines to Cluster

Right click "Roles" underneath the cluster name and select Configure Role. Then choose Virtual Machine and click Next. At this point you will get a list of all VMs that are eligible to be added to the cluster; check the boxes accordingly and proceed through the wizard to add the VMs.


Please make sure your VMs are storage-migrated to cluster storage prior to adding them to the failover cluster.
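Both of those steps can be scripted too; a sketch with a placeholder VM name and CSV path:

```powershell
# Storage-migrate the VM's files onto a cluster shared volume first
Move-VMStorage -VMName "vm01" -DestinationStoragePath "C:\ClusterStorage\Volume1\vm01"

# Then make the VM a highly available cluster role
Add-ClusterVirtualMachineRole -VMName "vm01"
```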