Uploading Hyper-V VHDs to Azure

I recently had a project in which some inherited Azure VMs were missing the Azure agent and nobody knew the passwords for them. After a quick support call to MS it became apparent I would have to delete the VMs while preserving the disks, download the disks, load them into Hyper-V, and manually reset the passwords. In doing this I learned a few gotchas, such as not being able to convert the VHD into a bootable Azure disk unless it is uploaded through Azure PowerShell. Here's my quick guide to uploading disks.

Installing Azure Powershell

First we will need to install the Azure PowerShell module. This can be accomplished by running PowerShell as administrator and entering the following:

Install-Module Azure

When prompted, choose A for Yes to All.

Note: If there is any conflict you may need to add the -AllowClobber switch to the end of the command above.

 

Login To Azure and Get Publish Settings File

Next you will need to log into Azure by entering the following in PowerShell:

Add-AzureAccount

At this point you will be prompted to log into Azure. 

 

The next step will be to get an Azure publish settings file. You can do this by entering the first cmdlet below (which opens a browser to download the file) and then importing the file with the second cmdlet:

Get-AzurePublishSettingsFile

Import-AzurePublishSettingsFile -PublishSettingsFile "<path to file>"

 

Select Your Subscription and View Storage Accounts

In this step we will choose which subscription to use (if you have more than one) and list the storage accounts so that we know where to upload the disks.

Warning: If you attempt to upload the VHD through the web GUI instead of using this method, it will be created as a block blob rather than a page blob, which prevents you from converting it into a bootable disk for use in the gallery. The only way to do this correctly at the time of this writing is through PowerShell.

Get-AzureSubscription

Select-AzureSubscription -SubscriptionId <enter yours here>

Get-AzureStorageAccount 

 

Uploading The Azure VHD and Converting It

At this point we are set up for the part we've all been waiting for. Make sure your VHD is not thin provisioned (Azure expects a fixed-size .vhd) and that the VM has the Azure agent installed and has been sysprepped (if using it as a template).

Add-AzureVhd -LocalFilePath "<file path to your VHD>" -Destination "<URL of storage location with your filename after the last />"

This will create an MD5 hash and upload the disk. 

 

To convert the disk we will want to run the following:

Add-AzureDisk -DiskName '<name your disk something relevant>' -MediaLocation '<URL where your disk lives in Azure storage>' -Label '<label>' -OS <Windows or Linux>
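For reference, here is what the two commands might look like with everything filled in. The storage account, container, and file names below are made-up placeholders; substitute your own values from Get-AzureStorageAccount:

# Upload the fixed-size VHD as a page blob, then register it as a bootable OS disk
Add-AzureVhd -LocalFilePath "D:\VHDs\webserver01.vhd" -Destination "https://mystorageacct.blob.core.windows.net/vhds/webserver01.vhd"
Add-AzureDisk -DiskName "webserver01-osdisk" -MediaLocation "https://mystorageacct.blob.core.windows.net/vhds/webserver01.vhd" -Label "webserver01-osdisk" -OS Windows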

Setting Up Hyper-V Replication

Overview

Hyper-V Replication is a great built-in feature for syncing mission critical VMs to another host for warm standby. This feature allows for quick, easy failover and takes minimal effort to set up. While it is intended for failover and failback, I have also used this replication methodology to migrate 7 hosts' worth of VMs from a datacenter on one side of the country to the other. Generally speaking this process works best in a domain environment where both the source and replica server are joined to the same domain. You will also need to configure vSwitches on the replica server to match those on the source server, or ensure you reconfigure the vSwitches associated with the NICs on individual VMs.

Enabling Replication

The first step in setting up replication is enabling the destination server (replica server) to receive replicas and ensuring that the proper firewall ports are opened on both the Windows firewall and any firewall appliance that sits between the source Hyper-V server and the replica server. From the Hyper-V MMC choose Hyper-V Settings on the right. Once the dialog window appears click Replication Configuration, then check the boxes to enable this computer as a replica server using Kerberos and HTTP.
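If you prefer to script this, the same settings can be applied with PowerShell on the replica server. The storage path below is just a placeholder; point it wherever you want replica VHDs to land:

# Enable this host as a replica server using Kerberos over HTTP (port 80)
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -KerberosAuthenticationPort 80 -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Hyper-V\Replicas"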


Next we will need to ensure that our inbound rules allow for inbound replication. From Control Panel open Windows Firewall with Advanced Security and enable the inbound rule "Hyper-V Replica HTTP Listener (TCP-In)" (and "Hyper-V Replica HTTPS Listener (TCP-In)" if you are using certificate-based authentication), if not already enabled.
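This can also be done from PowerShell using the built-in rule; the display name should match what you see in the firewall console:

# Allow inbound Hyper-V replication over HTTP
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"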


Enabling Replication on VMs


Now that we have set up the replica server we can proceed to the source Hyper-V host and set up VM replication. To enable VM replication right click the VM and choose Enable Replication. At this point you will get an Enable Replication wizard; proceed through the wizard entering the hostname of the replica server, choosing HTTP on port 80 with Kerberos, choosing to sync all VHDs or only specific ones, setting the replication frequency, and choosing the initial replication method. Once this is done your initial replica will begin to sync. After the initial replication completes, the VM will sync deltas at the replication frequency you specified in the wizard. From this point you can keep tabs on replication in the Replication tab at the bottom of the Hyper-V MMC window for the VM you have highlighted.
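For reference, the wizard settings above map to a couple of cmdlets run on the source host. The VM and replica server names below are placeholders:

# Enable replication for a VM to the replica server over Kerberos/HTTP, syncing deltas every 5 minutes
Enable-VMReplication -VMName "VM01" -ReplicaServerName "hv-replica01" -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
# Kick off the initial replication over the network
Start-VMInitialReplication -VMName "VM01"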

 

Planned Failover

To initiate a planned failover event (whether for testing, DR needs, or migration) you will want to make sure a few things are in place. Set up the source Hyper-V server as a replica server following the steps above; this is needed if you will want to fail back to the source server. Next shut down the VM you are failing over, then right click the VM, choose Replication, and then Planned Failover.


Next you will see a planned failover screen; click the Fail Over button to begin failing over.


If you are performing a one-way migration you can safely ignore warnings about failback. Once the last replication pass completes after shutdown, you can disable replication from the replica side and start the VM.

 

Reverse Replication

To fail back from the replica server to the source Hyper-V host, shut down the VM, then right click the VM on the replica server, choose Replication, then Reverse Replication, and proceed through the prompts. This performs a last data sync before failing back, so any changes made since the failover event will be synced back to your production server.
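If you need to script failovers, a rough PowerShell equivalent of the planned failover and reverse replication flow looks like the sketch below. The VM and host names are placeholders, and you should test this in a lab before relying on it:

# On the source host: shut the VM down and send the final delta to the replica
Stop-VM -Name "VM01"
Start-VMFailover -VMName "VM01" -Prepare
# On the replica host: fail the VM over, reverse the replication direction, and start it
Start-VMFailover -VMName "VM01" -ComputerName "hv-replica01"
Set-VMReplication -VMName "VM01" -ComputerName "hv-replica01" -Reverse
Start-VM -Name "VM01" -ComputerName "hv-replica01"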


Summary

Hyper-V replication is a fantastic tool that is super simple to set up and makes standing up a warm standby painless. A few notes of caution: the first is that replication is not a suitable replacement for backups. Replication is good for DR in the sense that your RTO will be low with minimal data loss, however it doesn't change the need to be able to retrieve old files or restore data in the event of a catastrophic server failure or attack, as those changes are quickly replicated from the source server to the replica server. The second note of caution is that if you are using Microsoft Failover Clustering with Hyper-V you will need to set up a Hyper-V Replica Broker as a failover cluster role (only one is needed per cluster) to act on behalf of the cluster when replicating VMs in and out of the failover cluster. There are some caveats to this that I will document in a future blog post, however despite the extra steps it is still a fairly painless process, and Hyper-V clusters can replicate to standalone Hyper-V hosts and vice versa.

Setting Up a Hyper-V Cluster

If you’ve ever worked with Hyper-V you’ll recognize it’s a fairly simple and straightforward virtualization platform, however things can get a bit sticky when it comes to clustering. I have worked with well deployed Hyper-V clusters and also dealt with those that were set up incorrectly. As the popularity of Hyper-V grows I wanted to create a quick overview of what it takes to deploy a Hyper-V failover cluster.

 

Joining the Domain

Once you have completed your Windows installation, join each server to the domain using the PowerShell example below, where domain\user is your domain user account (ex: richsitblog\rstaats) and domainname is your domain name (ex: richsitblog.com):

Add-Computer -Credential domain\user -DomainName domainname
Restart-Computer

 

Installing Roles

Once your servers have finished rebooting you will want to install roles across all the servers in the cluster you are standing up. To do this, substitute vmh## in the example below with your host names.

Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Install-WindowsFeature Hyper-V, Multipath-IO, Failover-Clustering -IncludeManagementTools -IncludeAllSubFeature}
Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Restart-Computer}

 

Configuring NIC Teaming (Server)

You will need to ensure you have OOB/LOM or direct console access to the machines, as these changes will cause an interruption in network service.

 

Assuming you have named your front end network NICs NIC1 and NIC2, use the following PowerShell statement to build your team. Note that if you are not using LACP you can instead use a different teaming mode such as SwitchIndependent or Static. Additional LoadBalancingAlgorithm options include Dynamic and the address hash modes (TransportPorts, IPAddresses, MacAddresses).

Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2 -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort}

Once your team is successfully built you will need to configure the IP addressing information for the team.
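For example, something along these lines will assign an address to the new team interface on a host; the addresses and DNS servers below are placeholders for whatever your management network uses:

# Assign a static IP and DNS servers to the Team1 interface
New-NetIPAddress -InterfaceAlias "Team1" -IPAddress 10.0.100.21 -PrefixLength 24 -DefaultGateway 10.0.100.1
Set-DnsClientServerAddress -InterfaceAlias "Team1" -ServerAddresses 10.0.100.10,10.0.100.11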

 

Configuring NIC Teaming (Switch)

In this example I am assuming a LACP channel group is already set up on the switch side. I am also assuming you are utilizing a Cisco IOS or NX-OS device. I am using the following example VLANs (100 – Prod, 200 – Dev, 300 – Test). Please check with your network admin and ensure you have all the correct info before making any changes. For the sake of simplicity for this example I am assuming the port channels are 101-104 for these VM hosts:

conf term
int po 101-104
switchport mode trunk
switchport trunk native vlan 100
switchport trunk allowed vlan 100,200,300
end
copy run start

If you are using a 2N network architecture you will need to perform these actions on both sides of the switch pair.

 

Configuring vSwitch

Next we will configure a vSwitch using our newly created NIC team and allow it to share the adapter with the management OS:

 

New-VMSwitch -Name "vSwitch" -NetAdapterName Team1 -AllowManagementOS $true

 

Configuring Storage Network (Server)

Best practice when using iSCSI for shared storage is to set up your interfaces as individual, discretely IP'd interfaces to allow for maximum queues and throughput. Configure your adapters accordingly for your storage network and ensure that you have connectivity, and if you are utilizing jumbo frames make sure they are enabled on the NICs within the OS, on the switch, and on the storage appliance.
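As a rough sketch, the per-NIC configuration might look like the following. The interface names, addresses, and jumbo frame value are placeholders, and the *JumboPacket keyword/value can vary by NIC driver, so check what your adapter actually exposes:

# Assign a storage network IP to each iSCSI NIC (no default gateway on the storage network)
New-NetIPAddress -InterfaceAlias "Storage1" -IPAddress 10.0.40.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Storage2" -IPAddress 10.0.41.21 -PrefixLength 24
# Enable jumbo frames on the storage NICs (keyword/value depend on the driver)
Set-NetAdapterAdvancedProperty -Name "Storage1","Storage2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014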

 

Configuring Storage Network (Switch)

Assuming again you are using Cisco networking gear and have these interfaces set up as individual ports, we can set them to an access VLAN. For the sake of simplicity my example will use ports Eth 101-108 (accounting for 2 ports per server) and assumes they are on a single storage VLAN (in this example VLAN 400 – Storage):

conf term
int Eth101-108
switchport mode access
switchport access vlan 400
end
copy run start

 

Configuring MPIO Settings

Before we begin building the cluster and adding LUNs let's go ahead and get the MPIO configuration out of the way. Again we will kill four (or more) birds with one stone using Invoke-Command to execute across multiple systems:

Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Enable-MSDSMAutomaticClaim -BusType iSCSI}
Invoke-Command -ComputerName vmh01,vmh02,vmh03,vmh04 -ScriptBlock {Set-MSDSMGlobalLoadBalancePolicy -Policy LQD}

 

Choices for the MSDSM global load balance policy (name and value) include:

Least Queue Depth (LQD)
Round Robin (RR)
Fail Over Only (FOO)
Least Blocks (LB)

Connecting an iSCSI LUN

Open Server Manager, click Tools on the right hand side, and choose iSCSI Initiator.

If this is the first time you have opened the iSCSI Initiator you will be prompted to start the Microsoft iSCSI service; click Yes.


CHAP Auth

The entry of initiator and target secrets only applies if your storage connection requires CHAP or mutual CHAP authentication.

If you are using mutual CHAP authentication you will first need to configure the initiator CHAP secret. This can be accomplished by clicking the Configuration tab of the iSCSI Initiator properties and choosing the CHAP button, where you can enter the initiator secret.


Once this is completed click Apply, then choose the Discovery tab and click the Discover Portal button.


When prompted, click the Advanced button.


Tick the Enable multi-path option and click OK.

Once you have done this, enter the target portal IP, check the box for "Enable CHAP log on", and enter the information as needed. If you are performing mutual CHAP authentication, check the box to "Perform mutual authentication".


For the Local adapter, select the default Microsoft iSCSI Initiator.

For the Initiator IP, choose the IP of your first storage interface in the dropdown.

For the Target portal IP, select the IP of the storage device.

Connecting to Targets

If you have not already done so, click the Discovery tab, click the Discover Portal button, and enter the IP address of the iSCSI target:


Click OK, then move to the Targets tab where you will see all available iSCSI targets listed. Select the target and click Connect (note: if you are using mutual CHAP authentication you may be prompted again for the target secret to connect).


Once you have established a connection and see the LUN as connected, highlight the connected LUN and click Properties. From here you will see the following screen:


Click the Add session button and go through the same process as above, however this time select your second storage NIC's IP as the Initiator IP.
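The same discovery and connection steps can also be scripted. The portal address, IQN, and initiator IPs below are placeholders, and the CHAP parameters on these cmdlets only come into play if your array requires them:

# Register the target portal from the first storage NIC and list available targets
New-IscsiTargetPortal -TargetPortalAddress 10.0.40.100 -InitiatorPortalAddress 10.0.40.21
Get-IscsiTarget
# Connect a persistent, multipath-enabled session from each storage NIC
Connect-IscsiTarget -NodeAddress "iqn.2016-01.com.example:target01" -InitiatorPortalAddress 10.0.40.21 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.2016-01.com.example:target01" -InitiatorPortalAddress 10.0.41.21 -IsMultipathEnabled $true -IsPersistent $true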

 

Standing Up the Failover Cluster

Open the Failover Cluster Manager on one of the hosts. Inside the MMC right click Failover Cluster Manager and choose Create Cluster.


You should see a wizard pop up on your screen. Click Next, and on the Select Servers screen enter the hostnames of your servers as shown in the example below:


On the validation screen choose to run cluster validation. Once this has completed (it can take anywhere from 10-60 minutes in my experience depending on the amount of resources being validated), enter a cluster name when prompted. On the confirmation screen go ahead and proceed with the box checked to add eligible storage. Once the cluster is built, if it is not given an IP through DHCP you will be prompted to create one; this can be changed after the fact as well. At this point we can proceed to add disks to the cluster and configure quorum.
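If you prefer to do this from PowerShell, validation and cluster creation look roughly like this; the cluster name and static address are placeholders:

# Run cluster validation against all prospective nodes
Test-Cluster -Node vmh01,vmh02,vmh03,vmh04
# Create the cluster with a static management IP (omit -StaticAddress if using DHCP)
New-Cluster -Name hvcluster01 -Node vmh01,vmh02,vmh03,vmh04 -StaticAddress 10.0.100.50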

Adding Disks

From a single host in the cluster open Disk Management and bring your iSCSI cluster disks online by right clicking them and choosing Online.


Once you have done this you will need to initialize the disk.


Right click the raw disk and create an NTFS volume on it. Please note all disks should use GPT as the partition table type when prompted.


At this point we can add the disks to the cluster. Open Failover Cluster Manager, expand the cluster name, expand the Storage node, and click on Disks. From here click Add Disk:


Once you click Add Disk, all cluster eligible storage will appear. Leave the disks checked and click OK.


Once this is done, right click each disk (except the disk you have allocated for quorum) and choose to add it to cluster shared volumes.


At this point, on each of the cluster nodes you should be able to see the cluster volumes under C:\ClusterStorage\Volume#.

If a disk fails to become a cluster shared volume ensure you have written a partition to it.
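For what it's worth, the disk preparation and cluster disk steps can also be scripted along these lines. The CSV disk name below is a placeholder; check Get-ClusterResource for the actual names your cluster assigns:

# Initialize all raw disks as GPT and format them NTFS (run from one node)
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize | Format-Volume -FileSystem NTFS
# Add all cluster-eligible disks to the cluster, then promote one to a cluster shared volume
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 2"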

 

Setting Cluster Quorum

Cluster quorum can be set to use node majority, a disk witness, or a file share witness. I personally prefer node majority with a disk witness. To set up quorum options right click the cluster, choose More Actions, then Configure Cluster Quorum Settings.


Choose the option "Select the quorum witness".


Click Next, then choose "Configure Disk Witness".


At this point we can select our disk for the cluster disk witness and proceed through the end of the wizard.

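The equivalent PowerShell one-liner is below; the disk resource name is a placeholder, so check Get-ClusterResource for the real name of your witness disk:

# Configure node majority with a disk witness
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"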

Setting Live Migration Settings

On the left hand side of the Failover Cluster Manager console choose Networks. Then on the right hand side select Live Migration Settings:


In the window that appears you can uncheck networks you don’t want to be used for live migrations and then prioritize the networks you wish to use.

 

Final Cluster Validation

At this point we are clear to run a final cluster validation. Right click the cluster name and choose Validate Cluster, then run through the full cluster validation wizard. This will take a substantial amount of time to run. Pay close attention to any errors or warnings that occur.

 

Adding Virtual Machines to the Cluster

Right click Roles underneath the cluster name and select Configure Role. Then choose Virtual Machine and click Next. At this point you will get a list of all VMs that are eligible to be added to the cluster; check the boxes accordingly and proceed through the wizard to add the VMs.


Please make sure your VMs are storage migrated to cluster storage prior to adding them to the failover cluster.
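If you have a lot of VMs to onboard, the storage migration and role creation can be scripted as well. The VM name and CSV path below are placeholders:

# Move the VM's files onto cluster shared storage, then make it a clustered role
Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume1\VM01"
Add-ClusterVirtualMachineRole -VMName "VM01"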