Administrator Roles in Azure AD

In various Azure projects we needed to assign certain roles to our users in Azure AD. The main thing is to understand their tasks and scope of responsibilities. Azure gives us a handful of roles that grant users access to various features, such as managing subscriptions, assigning other administrator roles, resetting passwords, managing service requests, and managing user accounts. When we assign these roles to users, they get access to these features across all of the cloud services your organization has subscribed to. This is very important to bear in mind.


The following administrator roles are available:

  • Billing administrator: Makes purchases, manages subscriptions, manages support tickets, and monitors service health.
  • Global administrator: Has access to all administrative features. The person who signs up for the Azure account becomes a global administrator. Only global administrators can assign other administrator roles. There can be more than one global administrator at your company.
  • Password administrator: Resets passwords, manages service requests, and monitors service health. Password administrators can reset passwords only for users and other password administrators.
  • Service administrator: Manages service requests and monitors service health.
    Note: To assign the service administrator role to a user, the global administrator must first assign administrative permissions to the user in the service, such as Exchange Online, and then assign the service administrator role to the user in the Azure classic portal.
  • User administrator: Resets passwords, monitors service health, and manages user accounts, user groups, and service requests. Some limitations apply to the permissions of a user management administrator. For example, they cannot delete a global administrator or create other administrators. Also, they cannot reset passwords for billing, global, and service administrators.
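
If you prefer scripting these assignments, the MSOnline PowerShell module of that era can handle them. Below is a minimal sketch, assuming the module is installed and connected, and using a hypothetical user; note that the cmdlet role names differ slightly from the portal labels (the user administrator role, for example, is named "User Account Administrator"):

Connect-MsolService

# List the available directory roles and what they cover
Get-MsolRole | Select-Object Name, Description

# Assign a role to a (hypothetical) user
Add-MsolRoleMember -RoleName "User Account Administrator" `
-RoleMemberEmailAddress "jane@contoso.onmicrosoft.com"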

Administrator permissions

Billing administrator

Can do:

  • View company and user information
  • Manage Office support tickets
  • Perform billing and purchasing operations for Office products

Cannot do:

  • Reset user passwords
  • Create and manage user views
  • Create, edit, and delete users and groups, and manage user licenses
  • Manage domains
  • Manage company information
  • Delegate administrative roles to others
  • Use directory synchronization

Global administrator

Can do:

  • View company and user information
  • Manage Office support tickets
  • Perform billing and purchasing operations for Office products
  • Reset user passwords
  • Create and manage user views
  • Create, edit, and delete users and groups, and manage user licenses
  • Manage domains
  • Manage company information
  • Delegate administrative roles to others
  • Use directory synchronization
  • Enable or disable multi-factor authentication

Cannot do:

  • N/A

Password administrator

Can do:

  • View company and user information
  • Manage Office support tickets
  • Reset user passwords

Cannot do:

  • Perform billing and purchasing operations for Office products
  • Create and manage user views
  • Create, edit, and delete users and groups, and manage user licenses
  • Manage domains
  • Manage company information
  • Delegate administrative roles to others
  • Use directory synchronization

Service administrator

Can do:

  • View company and user information
  • Manage Office support tickets

Cannot do:

  • Reset user passwords
  • Perform billing and purchasing operations for Office products
  • Create and manage user views
  • Create, edit, and delete users and groups, and manage user licenses
  • Manage domains
  • Manage company information
  • Delegate administrative roles to others
  • Use directory synchronization

User administrator

Can do:

  • View company and user information
  • Manage Office support tickets
  • Reset user passwords, with limitations (cannot reset passwords for billing, global, and service administrators)
  • Create and manage user views
  • Create, edit, and delete users and groups, and manage user licenses, with limitations (cannot delete a global administrator or create other administrators)

Cannot do:

  • Perform billing and purchasing operations for Office products
  • Manage domains
  • Manage company information
  • Delegate administrative roles to others
  • Use directory synchronization
  • Enable or disable multi-factor authentication

Blob copy between Storage Accounts in Azure

Previously I copied a VHD file called Nano2016.vhd from my on-premises server into Azure. Now I need this file in my other storage account, so I need to copy it across.

StorageAccount01  --->  StorageAccount02

vhds container    --->  uploads container (will be created during the copy)

The script I am going to run is:

$vhdName = "Nano2016.vhd"
$srcContainer = "vhds"
$destContainer = "uploads"
$srcStorageAccount = "storageaccount01"
$destStorageAccount = "storageaccount02"

$srcStorageKey = (Get-AzureStorageKey -StorageAccountName $srcStorageAccount).Primary
$destStorageKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccount).Primary

$srcContext = New-AzureStorageContext -StorageAccountName $srcStorageAccount `
-StorageAccountKey $srcStorageKey

$destContext = New-AzureStorageContext -StorageAccountName $destStorageAccount `
-StorageAccountKey $destStorageKey

New-AzureStorageContainer -Name $destContainer `
-Context $destContext

$copiedBlob = Start-AzureStorageBlobCopy -SrcBlob $vhdName `
-SrcContainer $srcContainer `
-Context $srcContext `
-DestContainer $destContainer `
-DestBlob $vhdName `
-DestContext $destContext

$copiedBlob | Get-AzureStorageBlobCopyState

As you can see, we need to gather some information first: the file name, the source container where our file is hosted ("vhds"), the destination container (in my case "uploads", which will be created while the script runs), and of course our storage account names. You can use either the Azure portal or PowerShell for these.

Add-AzureAccount
Get-AzureStorageAccount | Format-Table -Property Label

From Azure Portal

Storage > here you can see all the storage accounts; click on your source storage account, then Containers, to find your container and the file.


Also note down your storage account names so that we can use them in the script.

$vhdName = "Nano2016.vhd"
$srcContainer = "vhds"
$destContainer = "uploads"
$srcStorageAccount = "storageaccount01"
$destStorageAccount = "storageaccount02"

After that we can run our script. I will run a few lines at a time to see the steps clearly.

PS C:\Users\user> $vhdName = "Nano2016.vhd"
$srcContainer = "vhds"
$destContainer = "uploads"
$srcStorageAccount = "storageaccount01"
$destStorageAccount = "storageaccount02"

and

PS C:\Users\user> $srcStorageKey = (Get-AzureStorageKey -StorageAccountName $srcStorageAccount).Primary
PS C:\Users\user> $destStorageKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccount).Primary
PS C:\Users\user> $srcContext = New-AzureStorageContext -StorageAccountName $srcStorageAccount `
-StorageAccountKey $srcStorageKey
PS C:\Users\user> $destContext = New-AzureStorageContext -StorageAccountName $destStorageAccount `
-StorageAccountKey $destStorageKey

Next, create a new container in the destination storage account:

PS C:\Users\user> New-AzureStorageContainer -Name $destContainer `
-Context $destContext

Blob End Point: https://storageaccount02.blob.core.windows.net/

Name      PublicAccess   LastModified
----      ------------   ------------
uploads   Off            2/25/2016 12:06:05 PM +00:00

Finally, start the copy task:

PS C:\Users\user> $copiedBlob = Start-AzureStorageBlobCopy -SrcBlob $vhdName `
-SrcContainer $srcContainer `
-Context $srcContext `
-DestContainer $destContainer `
-DestBlob $vhdName `
-DestContext $destContext

PS C:\Users\user>

As you can see, you don't get any information about the copy process, so we need to run another cmdlet:

PS C:\Users\user> $copiedBlob | Get-AzureStorageBlobCopyState

CopyId            : 6557ef45-b677-4bbe-92b5-b9888676acf6
CompletionTime    :
Status            : Pending   <————
Source            : https://storageaccount01.blob.core.windows.net/vhds/Nano2016.vhd?sv=2015-04-05&sr=b&sig=a4cENt%2BLQGVJDoAUs5Pz42q2hHExNwTW557lN%2BxfUlM%3D&se=2016-03-03T12:06:15Z&sp=r
BytesCopied       : 0
TotalBytes        : 107374182912
StatusDescription :
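
If you would rather wait in PowerShell until the copy completes, a minimal polling sketch like the following should do it (Get-AzureStorageBlobCopyState also has a -WaitForComplete switch that simply blocks until the copy is done):

$state = $copiedBlob | Get-AzureStorageBlobCopyState
while ($state.Status -eq "Pending") {
    # Check again every 30 seconds and show progress
    Start-Sleep -Seconds 30
    $state = $copiedBlob | Get-AzureStorageBlobCopyState
    Write-Host ("Copied {0:N0} of {1:N0} bytes" -f $state.BytesCopied, $state.TotalBytes)
}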

Or just go to Azure Portal > Storage > Destination Storage Account > Container

In my case I can see a new container "uploads" has been created and my file is there.


And again, if you run:

PS C:\Users\user> $copiedBlob | Get-AzureStorageBlobCopyState

CopyId            : 6557ef45-b677-4bbe-92b5-b9888676acf6
CompletionTime    : 2/25/2016 12:08:46 PM +00:00
Status            : Success  <—————-
Source            : https://storageaccount01.blob.core.windows.net/vhds/Nano2016.vhd?sv=2015-04-05&sr=b&sig=a4cENt%2BLQGVJDoAUs5Pz42q2hHExNwTW557lN%2BxfUlM%3D&se=2016-03-03T12:06:15Z&sp=r
BytesCopied       : 107374182912
TotalBytes        : 107374182912
StatusDescription :

As you can see, the status has changed from Pending to Success.

Let’s review the parameters in the preceding example:

  • The SrcBlob parameter expects the file name of the source file to copy.
  • The SrcContainer parameter is the container the source file resides in.
  • The Context parameter accepts a context object created by the New-AzureStorageContext cmdlet. The context has the storage account name and key for the source storage account and is used for authentication.
  • The DestContainer parameter is the destination container to copy the blob to. The call will fail if this container does not exist on the destination storage account (which is why the script creates it first).
  • The DestBlob parameter is the file name of the blob on the destination storage account. The destination blob name does not have to be the same as the source.
  • The DestContext parameter also accepts a context object created with the details of the destination storage account, including the authentication key.

P.S. To copy between storage accounts in separate subscriptions, you need to call Select-AzureSubscription between the calls to Get-AzureStorageKey to switch to the alternate subscription.
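
A minimal sketch of that pattern, assuming two hypothetical subscription names:

# The source account lives in subscription A
Select-AzureSubscription -SubscriptionName "Subscription A"
$srcStorageKey = (Get-AzureStorageKey -StorageAccountName $srcStorageAccount).Primary

# The destination account lives in subscription B
Select-AzureSubscription -SubscriptionName "Subscription B"
$destStorageKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccount).Primary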


Microsoft Scale-Out File Server

Scale-out File Server: Traditional file shares prior to Server 2012 had certain limitations, and in some cases these limitations turned into real issues. Allowing only one node in the cluster to access the disk associated with the virtual file server and SMB file share limits I/O throughput in the cluster.


With Server 2012, Failover Clustering gives you another option: Scale-Out File Server for application data. This allows multiple nodes to have simultaneous high-speed direct I/O access to the disks associated with SMB shares on Cluster Shared Volumes. Load balancing across the cluster is achieved with a new cluster resource called the Distributed Network Name (DNN), which uses a round-robin scheduling algorithm to select the next node in the cluster for SMB connections.


Key benefits provided by Scale-Out File Server include:

  • Active-Active file shares: All cluster nodes can accept and serve SMB client requests. By making the file share content accessible through all cluster nodes simultaneously, SMB 3.0 clusters and clients cooperate to provide transparent failover to alternative cluster nodes during planned maintenance and unplanned failures without service interruption.
  • Increased bandwidth: The maximum share bandwidth is the total bandwidth of all file server cluster nodes. Unlike previous versions of Windows Server, the total bandwidth is no longer constrained to the bandwidth of a single cluster node; rather, the capability of the backing storage system defines the constraints. You can increase the total bandwidth by adding nodes.
  • CHKDSK with zero downtime: CHKDSK in Windows Server 2012 is significantly enhanced to dramatically shorten the time a file system is offline for repair. Cluster Shared Volumes (CSVs) take this one step further by eliminating the offline phase. A CSV File System (CSVFS) can use CHKDSK without impacting applications with open handles on the file system.
  • Cluster Shared Volume cache: CSVs in Windows Server 2012 introduce support for a read cache, which can significantly improve performance in certain scenarios, such as in Virtual Desktop Infrastructure (VDI).
  • Simpler management: With Scale-Out File Server, you create the scale-out file servers, and then add the necessary CSVs and file shares. It is no longer necessary to create multiple clustered file servers, each with separate cluster disks, and then develop placement policies to ensure activity on each cluster node.
  • Automatic rebalancing of Scale-Out File Server clients: In Windows Server 2012 R2, automatic rebalancing improves scalability and manageability for scale-out file servers. SMB client connections are tracked per file share (instead of per server), and clients are then redirected to the cluster node with the best access to the volume used by the file share. This improves efficiency by reducing redirection traffic between file server nodes. Clients are redirected following an initial connection and when cluster storage is reconfigured.

But scale-out file servers are not ideal for all scenarios. Microsoft gives us some examples of server applications that can store their data on a scale-out file share:

  • The Internet Information Services (IIS) Web server can store configuration and data for Web sites on a scale-out file share.
  • Hyper-V can store configuration and live virtual disks on a scale-out file share.
  • SQL Server can store live database files on a scale-out file share.
  • Virtual Machine Manager (VMM) can store a library share (which contains virtual machine templates and related files) on a scale-out file share. However, the library server itself can’t be a Scale-Out File Server – it must be on a stand-alone server or a failover cluster that doesn’t use the Scale-Out File Server cluster role. If you use a scale-out file share as a library share, you can use only technologies that are compatible with Scale-Out File Server. For example, you can’t use DFS Replication to replicate a library share hosted on a scale-out file share. It’s also important that the scale-out file server have the latest software updates installed. To use a scale-out file share as a library share, first add a library server (likely a virtual machine) with a local share or no shares at all. Then when you add a library share, choose a file share that’s hosted on a scale-out file server. This share should be VMM-managed and created exclusively for use by the library server. Also make sure to install the latest updates on the scale-out file server.

Looking at this list, these server applications use a small number of large files, compared with traditional file sharing, which involves a considerable number of files of varying sizes. Again, something to bear in mind: some users, such as information workers, have workloads that have a greater impact on performance. For example, operations like opening and closing files, creating new files, and renaming existing files, when performed by multiple users, have an impact on performance. If a file share is enabled with continuous availability, it provides data integrity, but it also affects overall performance. Continuous availability requires that data writes go through to disk to ensure integrity in the event of a failure of a cluster node in a Scale-Out File Server. Therefore, a user who copies several large files to a file server can expect significantly slower performance on a continuously available file share.
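
For reference, standing up the role and a continuously available share takes only a few lines of PowerShell. A minimal sketch, assuming an existing failover cluster with a CSV mounted at C:\ClusterStorage\Volume1 and a hypothetical security group for the Hyper-V hosts:

# Add the Scale-Out File Server role to the cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a folder on the CSV and share it with continuous availability
New-Item -Path "C:\ClusterStorage\Volume1\Shares\VMData" -ItemType Directory
New-SmbShare -Name "VMData" -Path "C:\ClusterStorage\Volume1\Shares\VMData" `
-FullAccess "CONTOSO\Hyper-V-Hosts" -ContinuouslyAvailable:$true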


Uploading a VHD file to Azure Storage

Get your Azure Subscription details first….

PS C:\Users\user> Get-AzureRmSubscription

SubscriptionName : Free Trial
SubscriptionId   : ba6d424f-f1d9-40c1-be8d-123f6aad8a7b
TenantId         : dc608e75-2124-48be-9dbc-7a248dc51fb2
State            : Enabled

PS C:\Users\user> Get-AzureRmSubscription | ft -Property SubscriptionName

SubscriptionName
----------------
Free Trial

PS C:\Users\user> Get-AzurePublishSettingsFile

PS C:\Users\user> Import-AzurePublishSettingsFile

cmdlet Import-AzurePublishSettingsFile at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
PublishSettingsFile: "C:\book\Free Trial-2-20-2016-credentials.publishsettings"

Id          : ba6d424f-f1d9-40c1-be8d-123456789
Name        : Free Trial
Environment : AzureCloud
Account     : B62082C30D0DA9850123456789
State       :
Properties  : {[Default, True]}

We will need our storage account details such as Name (Label)

PS C:\Users\user> Get-AzureStorageAccount | Format-Table -Property Label

Label
-----
2mportalvhdstorage01
llportalvhds9storage02
msenelstorage012345abscdr
ndportalvhdsmrw0123456

Select your Storage Account

PS C:\Users\user> $storageAccountName = 'llportalvhds9storage02'
Set-AzureSubscription -SubscriptionName "Free Trial" -CurrentStorageAccountName $storageAccountName

PS C:\Users\user> Get-AzureStorageContainer

Blob End Point: https://llportalvhds9storage02.blob.core.windows.net/

Name   PublicAccess   LastModified
----   ------------   ------------
vhds   Off            2/18/2016 11:21:41 AM +00:00

I have a copy of a Nano Server VHD. I think this is the smallest one I can use (543 MB). Don’t forget you will be copying your VHD into the cloud, and it will take some time.

PS C:\Users\user> $LocalVHD = "C:\book\Nano2016.vhd"
$AzureVHD = "https://llportalvhds9storage02.blob.core.windows.net/vhds/Nano2016.vhd"
Add-AzureVhd -LocalFilePath $LocalVHD -Destination $AzureVHD

Calculating….


 Uploading….


After the upload has finished…



Go to Storage in Azure Portal,


Find your Storage Account and click on it,


Go to Containers and click on vhds, you should be able to see your vhd file.


Making a copy of the VHD file

It is always good practice to keep a copy of any VHDs that are critical to your business. Otherwise you will have to go through the whole upload process from the beginning again. So:

Go back to Storage and at the bottom of the screen choose Manage Access Keys.


Get the storage account name and primary access key for our PowerShell cmdlet; copy them into Notepad so you can reuse them.


#Storage account needs to be authenticated
$context = New-AzureStorageContext -StorageAccountName "llportalvhds9storage02" `
-StorageAccountKey "abcdefghijklmnopqrstuvyz" -Protocol Http

$containername = "vhds"
$blobname = "Nano2016.vhd"

#From storage we need to get the VHD's blob
$blob = Get-AzureStorageBlob -Context $context -Container $containername -Blob $blobname

$uri = "https://llportalvhds9storage02.blob.core.windows.net/vhds/Nano2016.vhd"

Start-AzureStorageBlobCopy -SrcUri $uri -DestContext $context -DestContainer $containername -DestBlob "Nano2016.vhd-copy.vhd"
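
Since this copy also runs server-side, you can check its progress from PowerShell too; a quick sketch reusing the same context:

$copyState = Get-AzureStorageBlobCopyState -Context $context `
-Container $containername -Blob "Nano2016.vhd-copy.vhd"
$copyState.Status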

To check it, either go to Azure Portal > Storage > Click on the Storage Account > vhds


Or use PowerShell to check if it is in our container (vhds):

$context = New-AzureStorageContext -StorageAccountName "llportalvhds9storage02" `
-StorageAccountKey "abcdefghijklmnopqrstuvyz" -Protocol Http

$containername = "vhds"
$blobcopyname = "Nano2016.vhd-copy.vhd"

#From storage we need to get the VHD's blob
$blobcopy = Get-AzureStorageBlob -Context $context -Container $containername `
-Blob $blobcopyname

$blobcopy


Creating an image from a VHD file

Go to Virtual Machines in Azure Portal and choose Images.


Click on "Create an image".


Now you can create a virtual machine using your image….
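
The same image registration can also be done from PowerShell with the classic Add-AzureVMImage cmdlet. A minimal sketch, with a hypothetical image name and label:

$AzureVHD = "https://llportalvhds9storage02.blob.core.windows.net/vhds/Nano2016.vhd"
Add-AzureVMImage -ImageName "NanoImage" -MediaLocation $AzureVHD `
-OS Windows -Label "Nano Server 2016 Image"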


Creating a Disk from a VHD file

Go to Virtual Machines in Azure Portal and choose Disks.


At the bottom of the screen choose Create….


If it is an OS disk, make sure you tick "This VHD contains an operating system".


And again you can use this disk to create your virtual machines….


Using PowerShell:

$AzureVHD = "https://llportalvhds9storage02.blob.core.windows.net/vhds/Nano2016.vhd"

Add-AzureDisk -DiskName 'NanoOSDisk' -MediaLocation $AzureVHD `
-Label 'Nano Server 2016 OS Disk' -OS Windows
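
Once the disk is registered you can boot a VM from it with the classic service management cmdlets. A rough sketch, with hypothetical VM and cloud service names:

$vm = New-AzureVMConfig -Name "Nano01" -InstanceSize Small -DiskName "NanoOSDisk"
New-AzureVM -ServiceName "nanosvc01" -Location "North Europe" -VMs $vm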

 

Microsoft Azure – How to Configure a VNet-to-VNet connection

In your infrastructure you will probably have a few virtual networks (VNETs). They might be on-premises sites or Azure VNETs. You can connect these multiple VNETs to each other. Virtual network connectivity can be used simultaneously with multi-site VPNs, with a maximum of 10 VPN tunnels for a virtual network VPN gateway connecting to either other virtual networks or on-premises sites.

What I have in my scenario is two sites, one in the US and one in Europe, which we will create (basically two sites in two different regions). Connecting a virtual network to another virtual network (VNet-to-VNet) is very similar to connecting a virtual network to an on-premises site location; the differences are a couple of steps, such as downloading the script created by Azure and running it on your on-premises gateway device. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE.


Let’s create these VNETs now.

Log in to the Azure Classic Portal (not the Azure Portal). In the lower left-hand corner of the screen, click New. In the navigation pane, click Network Services, and then click Virtual Network. Click Custom Create to begin the configuration wizard.


On the Virtual Network Details page, enter the VNET name and choose your location (region).

On the DNS Servers and VPN Connectivity page, enter your DNS server name and IP address. You are not creating a DNS server here; this entry is purely for name resolution within this virtual network. Don’t tick any of the boxes; leave them as they are.


On the Virtual Network Address Spaces page, specify the address range that you want to use for your virtual network. In my case, for VNET-US it will be 10.20.0.0/16. These are the dynamic IP addresses (DIPs) that will be assigned to the VMs and other role instances that you deploy to this virtual network. It’s especially important to select a range that does not overlap with any of the ranges used on your on-premises network; if it does, you will get an error message informing you that you have chosen an overlapping network range. You can modify your subnet here and create other subnets for other services, but for now these are not required.


 

Click the check mark to create it. Then create another VNET following the steps above. I will choose 10.10.0.0/16 and North Europe for my VNET-EU.


 

Next we need to add local networks to these virtual networks. I will configure each VNET as a local network. Microsoft refers to local networks as on-premises networks.


 

In the lower left-hand corner of the screen, click New. In the navigation pane, click Network Services, and then click Virtual Network. Click Add Local Network.


 

On the Specify your local network details page, for Name, enter the name of a virtual network that you want to use in your VNet-to-VNet configuration. For this example, I’ll use VNET-EU, as we’ll be pointing VNET-US to this virtual network for our configuration.

For VPN Device IP Address, use any IP address. Typically, you’d use the actual external IP address for a VPN device. For VNet-to-VNet configurations, you use the Gateway IP address. But, given that we haven’t created the gateways yet, I will use an IP address from my IP range for now (10.10.0.50). I will then go back into these settings and update them with the corresponding gateway IP addresses once Azure generates them. Do the same for VNET-US and choose 10.20.0.50.

Next I have to point each VNET to the other as a Local Network. Go to Networks, click on the first VNET, and click Configure. Scroll down to Connection, tick the box for Connect to the Local Network, and choose the other VNET under Local Network.


 

In the virtual network address spaces section on the same page, click add gateway subnet, then click the save icon at the bottom of the page to save your configuration.


 

Repeat the step for VNET-US to specify VNET-EU as a local network.

The next step is creating dynamic routing gateways for each VNET. On the Networks page, make sure the status column for your virtual network is Created.


 

In the Name column, click the name of your virtual network.

On the Dashboard page, notice that this VNet doesn’t have a gateway configured yet. You’ll see this status change as you go through the steps to configure your gateway. At the bottom of the page, click Create Gateway. You must select Dynamic Routing.


When the system prompts you to confirm that you want the gateway created, click Yes. Repeat the same steps for the other VNET. While your gateway is being created, notice the gateway graphic on the page changes to yellow and says Creating Gateway. It typically takes about 15-20 minutes for a gateway to be created.
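
If you prefer PowerShell over the classic portal for this step, the gateways can also be created and checked with the service management cmdlets; a minimal sketch:

New-AzureVNetGateway -VNetName "VNET-US" -GatewayType DynamicRouting
New-AzureVNetGateway -VNetName "VNET-EU" -GatewayType DynamicRouting

# Note the VIPAddress once State shows Provisioned
Get-AzureVNetGateway -VNetName "VNET-US"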


After the gateways are created, they are assigned IP addresses, and we need to go back and replace the temporary Local Network VPN device IPs we assigned earlier with these gateway IPs.


After everything has completed, we need to make sure each connection and both sides of the gateway are using the same PRESHARED KEY.

I will use PowerShell to complete this part. First, connect to your subscription:


 

And then just check your VNET connections using Get-AzureVNetConnection.


 

Lastly, run:

Set-AzureVNetGatewayKey -VNetName VNET-EU -LocalNetworkSiteName VNET-US -SharedKey 123456789

Set-AzureVNetGatewayKey -VNetName VNET-US -LocalNetworkSiteName VNET-EU -SharedKey 123456789

(Make sure you use much stronger shared keys in a production environment.)


 

And you will see the connection is successful.



Microsoft’s StorSimple


Standard storage management comes with a few challenges: think about capacity, types, tiering, provisioning, and scalability. Making plans and decisions takes a long time, and on top of that, what about backups and data archiving? They bring their own challenges. Many hours are spent just to keep everything the way we want within our budget.

I first heard about StorSimple when I was watching one of the Azure MVA courses. The idea of storing data based on how often it is accessed, and deciding where to store it accordingly, is simple and at the same time makes business sense. Applying such simple logic will save a lot of effort and money in the long run.

So if you are tired of buying and installing storage and rebalancing workloads all the time, you really need to have a look at StorSimple. The same is true for managing data protection: because off-site data protection is completely automated with StorSimple, you have no worries there. And if you can’t perform DR because it’s too disruptive and takes too long, you need to look into non-disruptive, thin recovery with StorSimple.

How does it work?

 

StorSimple is comprised of SSDs (split into two layers), HDDs, and the cloud. When a user saves data, it goes to the first SSD layer on the StorSimple appliance, and from there it is deduplicated into the second SSD layer along with data coming from other users. While the data is being accessed it remains active and stays “hot”. As users keep creating new data, previously created data becomes less accessed and turns “cold”. This data is then moved to the HDD layer, where it is also compressed. As more data turns cold and keeps arriving at the HDD layer, the layer fills up and reaches its threshold, and the coldest data is moved to the cloud. Once the data is in Azure, it is copied three times locally, and another three copies are geo-replicated. As you may guess, there will be a delayed response when a user requests data that lives in the cloud, as opposed to data stored on the other layers; StorSimple pulls the data back from Azure and presents it to the user.

All of this is managed through the Azure portal. The new StorSimple Manager is an Azure management portal that controls all functions of all StorSimple arrays across the enterprise. It provides a single, consolidated management point that uses the Internet as a control plane to configure all parameters of StorSimple 8000 arrays and to display up-to-the-minute status information in a comprehensive dashboard.

Hyper-V improvements on Server 2012 R2

Hyper-V Advanced Options

Some of the Hyper-V improvements in Server 2012 R2 are really going to make your life easier (a short PowerShell sketch for a few of them follows the list):

  • Compression option for Live Migration: Memory of the virtual machine being migrated is compressed and then copied over the network to the destination over a TCP/IP connection, resulting in huge performance improvements for Live Migrations. Typically, live migration with compression takes about a fifth of the time compared to no compression.
  • SMB option for Live Migration: Memory of the virtual machine being migrated is copied over the network to the destination over an SMB connection, using SMB Direct (RDMA). This doesn’t use compression, since the processor is bypassed when using RDMA, and it gives the greatest Live Migration experience: 22 seconds (RDMA) versus 38 seconds (memory compression).
  • Hyper-V Replica: The replication frequency used to be fixed at every five minutes; now we have a choice of 30 seconds, five minutes, or 15 minutes.
  • Extended Replication: Ability to have a Hyper-V Replica virtual machine (VM) replicated to another Hyper-V server on another site for extended disaster recovery (DR) capabilities. We can now extend this replication over to a third site.
  • Hyper-V Recovery Manager provides a Windows Azure-based service to manage all Hyper-V Replication within an environment. This is just orchestration of the process; the actual replication of VMs is still handled by the Hyper-V Replica functionality and remains direct site-to-site, not via Windows Azure. Hyper-V Recovery Manager provides full control of the order of failover of VMs, in addition to the running of scripts and even manual actions as part of the failover process.
  • Support for deduplication of VHDXs, which actually improves the performance of the VMs (almost twice the speed). With deduplication, the process knows what the common blocks are, so it enables better caching of the most common blocks for running VMs.
  • Dynamic VHDX resizing when connected to the SCSI bus, allowing VHDX files to be resized while a VM is running; within the VM you can then easily expand volumes to use the newly available space. It’s also possible to shrink a virtual disk, provided unpartitioned space is available on the disk.
  • Support for shared VHDX files by VMs, enabling new guest clustering scenarios by allowing multiple VMs to access the same VHDX and see the VHDX as shared storage. The shared VHDX is exposed to a VM as a virtual shared SAS disk, and the VHDX file can be dynamic or fixed. You can specify a disk as shared via Windows PowerShell or via the Advanced Features when adding a disk to a VM. The ability to provide a shared VHDX file is very useful, particularly in hosting environments where you would not directly expose Fibre Channel or iSCSI LUNs to clients (the Windows Server 2012 method to provide shared storage to VMs).
  • VM Connection improvements, allowing copy and paste via VM Connection plus audio/printer/smart card redirection, as remote access is now via VMBus. This is a very useful feature if a VM loses network connectivity or RDP is blocked by a firewall so administrators can’t RDP to a VM. With the new VMBus-based connection, administrators always have full access to VMs at a console level.
  • Automatic activation of VMs that are running on an activated Windows Server 2012 R2 Datacenter server (no specific channel version required). The VM doesn’t have a key at all, so no key management is required.
  • New Generation 2 VM:
    – Generation 2 VMs use UEFI, have secure boot capability, and can boot from SCSI devices and synthetic network adapters.
    – Generation 2 VMs are Windows 8/Windows 2012 or later and 64-bit only. This is because the OS needs native UEFI and must ship with Hyper-V integration drivers in-box.
    – Generation 2 provides a faster boot and install experience. Day-to-day operations are about the same.
    – It is fully supported to mix Generation 1 and Generation 2 VMs on the same host.
  • Virtual Receive Side Scaling (RSS) allows a combination of RSS and VMQ (which were mutually exclusive in Windows Server 2012) and allows a VMQ to no longer be linked to a single core, giving greater performance by spreading loads across cores. It uses an RSS hash to spread traffic processing across multiple cores.
  • Resource metering monitors incoming and outgoing storage IOPs in addition to existing CPU, memory, disk allocation, and network traffic.
  • Storage QoS allows a maximum IOPS cap for each VHDX of a VM (even while it’s running), plus minimum QoS alerting when a virtual machine disk isn’t getting its required IOPS.
  • Full Linux support including dynamic memory support (add and remove), live backup (file consistency through new file freeze in Linux integration services), 64 vCPU SMP, virtual SCSI, and hot-add/resize of storage.
  • Clone a running VM and export a checkpoint (which generates a merged virtual disk for the export).
  • Cluster Shared Volume coordinators are automatically rebalanced across all nodes in the cluster.
  • Use ReFS with CSV.
  • Easier virtual network management (Software Defined Networking) and additional capabilities including in-box gateway functionality to link different virtual networks even across hybrid clouds.
  • Simple remote live monitoring of network traffic through a new graphical experience using Message Analyzer, which can collect remote and local packets.
  • Enhanced Hyper-V Extensible Switch architecture to enable coexistence with forwarding extension implementations, which previously couldn’t work with Hyper-V extensions.
  • USB pass-through, which allows USB device pass-through within certain conditions.
  • Windows Azure Pack (formerly known as Windows Azure Services for Windows Server). Takes the innovation from Azure and brings it to Windows Server and System Center: a consistent portal experience to match Azure, high-density web hosting, and Service Bus.
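
A few of the items above are one-liners to configure with the Hyper-V PowerShell module. A minimal sketch, assuming hypothetical VM and VHDX names:

# Live Migration: enable it and pick the performance option (TCPIP, Compression, or SMB)
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Online resize of a VHDX attached to the SCSI bus (hypothetical path)
Resize-VHD -Path "C:\VMs\Data01.vhdx" -SizeBytes 60GB

# Storage QoS: cap a virtual disk at 500 IOPS (hypothetical VM, first SCSI disk)
Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI `
-ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500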