Microsoft Azure Site Recovery: Between an on-premises VMM site and Azure – Part One

I have recently seen an increase in interest from clients wishing to invest in DR and backup to the cloud. Azure Site Recovery Services has many offerings for various scenarios, with additional options appearing all the time. I have discussed Azure Backup Vault in previous posts and no doubt will touch on it again in the future, but today I have chosen to look at DR and Azure Site Recovery.

So what is ASR?

‘Azure Site Recovery helps you to protect important applications by coordinating the replication and recovery of physical or virtual machines. You can replicate to your own datacenter, to a hosting service provider, or even to Azure to avoid the expense and complexity of building and managing your own secondary location.’  Microsoft

Essentially, Microsoft Azure Site Recovery offers the ability to simplify and automate DR between two on-premises sites or between an on-premises site and Azure.

As I write this the options are:


Microsoft are developing ASR rapidly and additional functionality is appearing all the time, with much more to come in Q1 and Q2 of 2015.

For this particular post I will be focusing on the Between an on-premises VMM site and Azure option. The requirements for this scenario are:


  • An Azure Subscription
  • System Center Virtual Machine Manager 2012 R2
  • Windows Server 2012 R2 Hyper-V – used as VM host
  • Fixed disk .VHD or .VHDX (Generation 1 only VMs)
  • Guest OS Windows Server 2008 or later, or Linux: CentOS, openSUSE, Ubuntu

More details on requirements and planning are located here:

Setting Up the Azure Site Recovery Vault:

The first step to configuring DR between an on-premises VMM site and Azure is to create a Site Recovery Vault. To do this, open the Azure Portal and select the Recovery Services tab on the left menu.


Next click + NEW  at the bottom of the screen which opens the window required to create a Site Recovery Vault and a Backup Vault. Select Site Recovery Vault, then give the vault a name and select the region where the data should be stored.
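If you prefer scripting, the vault can also be created from the classic Azure PowerShell module, assuming you are running a version that includes the Site Recovery cmdlets; the vault name and region below are examples only.

```shell
# Sketch using the classic Azure PowerShell module (subscription already
# selected with Select-AzureSubscription). Vault name and region are examples.
New-AzureSiteRecoveryVault -Name "ContosoASRVault" -Location "West Europe"

# List the vaults in the subscription to confirm creation
Get-AzureSiteRecoveryVault
```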

A full list of Regions can be found at


After the job has completed, the new Site Recovery Vault appears as Active.


Configuring the Hyper-V and VMM servers:

Now the ASR Vault has been created, the next step is to configure the local Hyper-V and VMM servers. Click on the new ASR Vault to open its Dashboard.


Download the ASR Provider and the registration key. Install the ASR Provider on the VMM server and register it against the ASR Vault using the registration key.

NOTE: If you are running VMM in an HA configuration, first install the ASR Provider on the active node and register the server, then install the ASR Provider on the passive node.


Return to the Dashboard and then select to Add an Azure Storage Account.


Give the storage account a name (lowercase letters and numbers only). Select the location of the storage account and the chosen level of redundancy.

Option | Redundancy | Comments
Locally Redundant | 3 copies replicated within a single datacentre | Cheapest; protects against hardware failure; does not protect against loss of facility or region
Zone-Redundant | 3 copies replicated between 2 or 3 datacentres in a single region | Protects against hardware failure and loss of facility; does not protect against loss of region
Geo-Redundant | 6 copies: replicated 3 times within the primary region and 3 times in a secondary region | Maximum durability; protects against hardware, facility and regional loss; recommended as the default
Read-Access Geo-Redundant | Same as Geo-Redundant, but additionally grants read access in the secondary region in the event of primary region loss | Maximum durability and availability; most expensive
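The storage account can equally be created from the classic Azure PowerShell module; the account name and location below are examples, and the -Type parameter maps to the redundancy options described above.

```shell
# Account name must be lowercase letters and numbers only.
# -Type values: Standard_LRS (Locally Redundant), Standard_ZRS (Zone),
# Standard_GRS (Geo), Standard_RAGRS (Read-Access Geo)
New-AzureStorageAccount -StorageAccountName "contosoasrstore01" `
    -Location "West Europe" -Type "Standard_GRS"
```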

Once the storage account has been successfully created, it will appear under the Storage tab of the Azure Portal.


The final step is to download the Azure Recovery Services Agent and install it on all Hyper-V hosts. The installer is smart enough to detect a previous version and attempt to upgrade it.
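For a hands-off rollout across several hosts, the agent can usually be installed silently; the /q switch below is an assumption based on typical Microsoft agent installers, so verify it against the command-line options of the version you download.

```shell
# On each Hyper-V host, run the downloaded agent installer unattended.
# NOTE: the /q (quiet) switch is an assumption - check the installer's
# supported switches before scripting a rollout.
.\MARSAgentInstaller.exe /q
```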

This is how the Azure Recovery Services Agent looks when it is upgrading a previous version.


Now that the Azure Recovery Services Agent has been installed on all the Hyper-V servers, this seems a natural point to end this first post. In summary, the on-premises Hyper-V and VMM servers have been configured and registered against the newly created Azure Site Recovery Vault. Everything is now in place to begin configuring the protection of on-premises clouds and resources.

The next part of this post will include:

  • Configuring cloud protection
  • Managing virtual machine protection
  • Changing the hardware sizing of virtual machines
  • Mapping networks
  • Recovery plans
  • Failover options.

Microsoft Azure Site Recovery: Between an on-premises VMM site and Azure – Part One

Microsoft Azure Site Recovery: Between an on-premises VMM site and Azure – Part Two

Microsoft Azure Site Recovery: Between an on-premises VMM site and Azure – Part Three


Building a private cloud within the MOD

I have recently been involved in designing and deploying a Hyper-V and SCVMM environment for Landmarc Support Services. They have since spoken at Microsoft Future Decoded about the roadmap used in building a private cloud within the MOD.


So what's NEW in Hyper-V vNext

After installing the Technical Preview of the next server OS, I was quite interested to see what new features Hyper-V came with. Obviously there is lots of new functionality in the new version, and no doubt plenty more to come, but here is a list of a few of the new features mentioned at TechEd.

The following list has been taken from TechNet and the TechEd slides available on Channel 9 (link at the bottom of the post).

Rolling Cluster Upgrade

You can now add a node running Windows Server Technical Preview to a Hyper-V Cluster with nodes running Windows Server 2012 R2. The cluster continues to function at a Windows Server 2012 R2 feature level until all of the nodes in the cluster have been upgraded and the cluster functional level has been upgraded.

  • No new hardware
  • No downtime
  • The ability to roll-back safely if needed
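Once every node has been upgraded, the cluster functional level is raised from PowerShell; the cmdlet below is the one introduced alongside this feature, but verify it on your preview build.

```shell
# Check the current functional level of the cluster
Get-Cluster | Select-Object Name, ClusterFunctionalLevel

# Raise the functional level once ALL nodes run the new OS.
# This step is one-way: afterwards nodes cannot be rolled back to 2012 R2.
Update-ClusterFunctionalLevel
```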

New VM Upgrade Process

When you move or import a virtual machine to a server running Hyper-V on Windows Server Technical Preview from Windows Server 2012 R2, the virtual machine’s configuration file is not automatically upgraded. This allows the virtual machine to be moved back to a server running Windows Server 2012 R2. You will not have access to new virtual machine features until you manually update the virtual machine configuration version.
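The manual upgrade is performed per VM from PowerShell; the VM name below is an example, and the cmdlet name is as seen in the preview builds, so confirm it on your build.

```shell
# Inspect the configuration version of a VM (name is an example)
Get-VM -Name "SQL01" | Select-Object Name, Version

# Upgrade the configuration version once you are sure the VM will
# never need to move back to a Windows Server 2012 R2 host
Update-VMVersion -Name "SQL01"
```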

Changing how we handle VM servicing

  • VM drivers (integration services) updated when needed
  • Require latest available VM drivers for that guest operating system
  • Drivers delivered directly to the guest operating system via Windows Update

Secure Boot Support for Linux

Linux operating systems running on generation 2 virtual machines can now boot with the secure boot option enabled.

Distributed Storage QoS

  • New architecture to improve reliability, scale and performance
  • You can now create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks on Hyper-V virtual machines. Storage performance is automatically readjusted to meet policies as the storage load fluctuates.
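As a sketch of how this fits together, assuming the new Storage QoS cmdlets on the Scale-Out File Server (the policy name, limits and VM name are all examples):

```shell
# On the Scale-Out File Server: create a policy with IOPS limits
$policy = New-StorageQosPolicy -Name "Gold" -MinimumIops 100 -MaximumIops 5000

# On the Hyper-V host: attach the policy to a VM's virtual disks by ID
Get-VM -Name "SQL01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```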

Evolving Hyper-V Backup

  • Decoupling backing up virtual machines from backing up the underlying storage.
  • No longer dependent on hardware snapshots for core backup functionality, but still able to take advantage of hardware capabilities when they are present.
  • Most Hyper-V backup solutions today implement kernel-level file system filters in order to gain efficiency; this makes it hard for backup partners to update to newer versions of Windows and increases the complexity of Hyper-V deployments.
  • Built-in change tracking for backup: efficient change tracking is now part of the platform.

VM Configuration Changes

  • Binary format for efficient performance at scale
  • Resilient logging for changes
  • New file extensions
    • .VMCX and .VMRS

Replica Support for Hot Add of VHDX

  • When you add a new virtual hard disk to a virtual machine that is being replicated, it is automatically added to the not-replicated set. This set can be updated online.

Runtime Memory Resize

  • Dynamic memory is great, but more can be done. For Windows Server Technical Preview guests, you can now increase and decrease the memory assigned to virtual machines while they are running.
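In practice this means the existing memory cmdlet can now target a running VM; the name and size below are examples.

```shell
# Resize the memory assigned to a running VM (name and size are examples)
Set-VMMemory -VMName "SQL01" -StartupBytes 8GB
```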

Network Adapter Identification

  • You can name individual network adapters in the virtual machine settings – and see the same name inside the guest operating system.
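A minimal sketch of this, assuming the device-naming switch exposed in the preview builds (VM and adapter names are examples):

```shell
# Rename the adapter and enable device naming so the same name is
# surfaced inside the guest operating system
Rename-VMNetworkAdapter -VMName "SQL01" -Name "Network Adapter" -NewName "Backup"
Set-VMNetworkAdapter -VMName "SQL01" -Name "Backup" -DeviceNaming On
```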

Hyper-V Manager Improvements

Multiple improvements to make it easier to remotely manage and troubleshoot Hyper-V Servers:

  • Connecting via WinRM
  • Support for alternate credentials
  • Connecting via IP address
  • Able to manage Windows Server 2012, 2012 R2 and Technical Preview from a single console.

Hot add / remove of network adapters

  • Network adapters can be added and removed from Generation 2 virtual machines while they are running.
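Hot add and remove uses the existing adapter cmdlets against a running Generation 2 VM; the names below are examples.

```shell
# Hot add a network adapter to a running Generation 2 VM,
# then remove it again
Add-VMNetworkAdapter -VMName "SQL01" -SwitchName "External" -Name "App"
Remove-VMNetworkAdapter -VMName "SQL01" -Name "App"
```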

Hypervisor power management improvements  

  • Updated hypervisor power management model to support new modes of power management.

Hyper-V Cluster Management

Providing a single view of an entire Hyper-V cluster through WMI

  •  “Just one big Hyper-V server”
  • Limited functionality at this point in time:
    • Enumerate virtual machines
    • Receive notification of live migration event
  • Root\HyperVCluster\v2
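A hypothetical query against the new namespace might look like the following; the class name is an assumption based on the standard Hyper-V WMI virtualization classes, so treat this as a sketch.

```shell
# Enumerate the cluster's virtual machines through the new namespace,
# treating the whole cluster as "just one big Hyper-V server"
Get-CimInstance -Namespace "root\HyperVCluster\v2" -ClassName Msvm_ComputerSystem
```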


  • Support for OpenGL 4.4 and OpenCL 1.1 API



Failover Cluster Nodes with Mixed Upper & Lower Case Names

Over the years I have come across clusters in all sorts of states, many with nodes that have a mixture of non-standardised names or letter cases. In my mind, a naming standard for nodes should be decided on in advance, one that will allow for additional nodes to be added to the cluster at a later date.

The issue that I come across the most is that of cluster node names in a mixture of upper and lower case. This in itself won't stop the cluster functioning, but it is a personal bugbear of mine.


Nodes can be added to a cluster in a mixture of cases for a number of reasons, such as the case of the NetBIOS name, and it's not something you can ever be sure of when using the GUI.

One way to be sure that the case will remain as you specify is to use the cluster.exe command to add the nodes to the cluster. The following shows one of the ways it can be used to add a new node to a cluster.

cluster.exe /cluster:clustername /add /node:NODENAMEINCASE

This command can be used on 2008, 2008 R2, 2012 and 2012 R2 clusters; however, if you plan to use cluster.exe on a 2012 or 2012 R2 cluster, you will first need to enable the Failover Cluster Command Interface feature. To do this, open the Add Roles and Features Wizard, browse to Features\Remote Server Administration Tools\Failover Clustering Tools and select the Failover Cluster Command Interface feature.
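The same feature can be enabled from PowerShell instead of the wizard; the feature name below is the RSAT clustering component name, which is worth confirming with Get-WindowsFeature on your server first.

```shell
# Confirm the feature name, then enable the Failover Cluster
# Command Interface (which provides cluster.exe)
Get-WindowsFeature RSAT-Clustering*
Install-WindowsFeature RSAT-Clustering-CmdInterface
```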


Once the feature has been enabled, it's possible to go ahead and use cluster.exe to add the node to the cluster in the required case.


Once the command has run successfully and the node has been added and validated against the cluster, things will look nice and standardised.



‘Access to path: is denied’ error after installing SCVMM 2012 R2 console

After installing the SCVMM console you get the error message "Access to the path: "C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine" is denied."


To resolve these issues, follow these steps:

  1. Locate the following folder: C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\bin
  2. Right-click the AddInPipeline folder, and then click Properties.
  3. On the Security tab, click Advanced, and then click Continue.
  4. Select the BUILTIN group, and then click Edit.
  5. Click the Select a principal link, type Authenticated Users, and then click OK.
  6. Click OK to close each dialog box that is associated with the properties.
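If you prefer to script the permission change, an equivalent of steps 2-6 can be sketched with icacls; the grant below (read/execute for Authenticated Users, inherited by child files and folders) is my interpretation of the GUI steps, so review it before using it in anger.

```shell
# Grant Authenticated Users read/execute on the AddInPipeline folder,
# inherited by files (OI) and subfolders (CI)
icacls "C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\bin\AddInPipeline" /grant "NT AUTHORITY\Authenticated Users:(OI)(CI)RX"
```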

The steps for this fix were originally taken from the Microsoft article below, which relates to Update Rollup 1 for SCVMM 2012 R2 and a potential issue that might be encountered.