Hyper-V, Server 2012 / R2, Server 2016, Virtualization

Enabling SR-IOV on VMs

The single root I/O virtualization (SR-IOV) interface is an extension to the PCI Express (PCIe) specification. SR-IOV allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions. SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack. Because a virtual function (VF) is assigned to a child partition, network traffic flows directly between the VF and the child partition. As a result, the I/O overhead in the software emulation layer is diminished, and network performance is nearly the same as in nonvirtualized environments.

Technically, there are two function types implemented by SR-IOV: physical functions (PFs) and virtual functions (VFs). There are a number of PCI devices available in which PFs have been implemented, but Microsoft Hyper-V provides SR-IOV support only for networking. In other words, Hyper-V provides VFs to allow VMs to communicate with the physical network adapters directly. Because the VMs communicate directly with the physical network adapters, organizations can benefit from increased I/O throughput, reduced CPU utilization on Hyper-V hosts for processing network traffic, and reduced network latency. Before you can use SR-IOV for a Hyper-V VM, you will need to meet the following prerequisites:

  • The SR-IOV functionality is currently only available to Windows 8 and Windows Server 2012 guests.
  • Hyper-V must be running on a Windows Server 2012 or later operating system.
  • You must have an SR-IOV-capable physical network adapter that implements the PFs and can understand the VFs’ requests coming from the VMs.
  • You must have an external virtual switch that can understand the SR-IOV traffic.
  • The server’s motherboard chipset must also support SR-IOV.
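
You can verify the host-side prerequisites from PowerShell before making any changes. A minimal check, run on the Hyper-V host:

    # Check whether the host's BIOS/chipset supports SR-IOV;
    # IovSupportReasons lists what is missing if IovSupport is False
    Get-VMHost | Select-Object IovSupport, IovSupportReasons

    # List physical adapters and their SR-IOV capability
    Get-NetAdapterSriov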

Enabling SR-IOV is a two-step approach. First, create an external virtual switch with SR-IOV enabled. If an external switch already exists without SR-IOV, you will need to delete and recreate it, because SR-IOV can only be enabled while the switch is being created. Once SR-IOV is enabled on the external virtual switch, you can enable SR-IOV on the VMs by ticking the “Enable SR-IOV” checkbox found under “Hardware Acceleration” in the Network Adapter settings of the VM’s properties.
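
Both steps can also be done from PowerShell. A minimal sketch; the switch, adapter, and VM names (“SRIOV-Switch”, “Ethernet 2”, “VM01”) are placeholders for your own:

    # Create an external virtual switch with SR-IOV enabled (only possible at creation time)
    New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true

    # Enable SR-IOV on the VM's network adapter (IovWeight 0 = disabled, 1-100 = enabled)
    Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100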

Server 2016, Hyper-V

Which VMs can be shielded?

The shielding process for existing VMs is only available for VMs that meet the following prerequisites:

  • The guest OS is Windows Server 2012, 2012 R2, 2016, or a semi-annual channel release. Existing Linux VMs cannot be converted to shielded VMs.
  • The VM is a generation 2 VM (UEFI firmware).
  • The VM does not use differencing disks for its OS volume.
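
A quick way to sanity-check these prerequisites across a host is from PowerShell. A sketch, assuming the first attached disk of each VM is its OS disk:

    # List generation 2 VMs and flag any whose OS disk is a differencing disk
    Get-VM | Where-Object Generation -eq 2 | ForEach-Object {
        $osDisk = Get-VMHardDiskDrive -VMName $_.Name | Select-Object -First 1
        $vhd = Get-VHD -Path $osDisk.Path
        [pscustomobject]@{
            VM       = $_.Name
            DiskType = $vhd.VhdType   # 'Differencing' blocks shielding
        }
    }
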
Server 2016, Hyper-V, Server 2012 / R2

Automatic Virtual Machine Activation

Automatic Virtual Machine Activation (AVMA) is a feature added in Windows Server 2012 R2 that enables activation of your VMs without a KMS server or MAK key and without requiring internet connectivity. As you create new VMs, they activate against the host Hyper-V server. Each activation lasts 7 days, after which the VM renews it against the host. It is ideal for Datacenter hosts, as you can also report on VM usage from the host.

AVMA requires the Hyper-V host to be running Server 2012 R2 or 2016 Datacenter, and the host itself must be activated. The VMs that run on the host must be running 2012 R2 or above to activate. VMs that can be activated using this method include 2012 R2/2016 Datacenter, Standard, and Essentials.

AVMA offers several benefits:

  • Activate virtual machines in remote locations
  • Activate virtual machines with or without an internet connection
  • Track virtual machine usage and licenses from the virtualization server, without requiring any access rights on the virtualized systems

There is no true “configuration” for the virtual machine. When prompted for a license key, you simply give it the AVMA key that matches the operating system of the virtual machine:

Guest Operating System              AVMA Key
Windows Server 2012 R2 Essentials   K2XGM-NMBT3-2R6Q8-WF2FK-P36R2
Windows Server 2012 R2 Standard     DBGBW-NPF86-BJVTX-K3WKJ-MTB6V
Windows Server 2012 R2 Datacenter   Y4TGP-NPTV9-HTC2H-7MGQ3-DV4TW
Windows Server 2016 Essentials      B4YNW-62DX9-W8V6M-82649-MHBKQ
Windows Server 2016 Standard        C3RCX-M6NRP-6CXC9-TW2F2-4RHYD
Windows Server 2016 Datacenter      TMJ3Y-NTRTM-FJYXT-T22BY-CWG3J
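
You can also apply the key from an elevated prompt inside the guest; for example, for a 2016 Datacenter guest:

    # Install the AVMA key from the table above, then trigger activation against the host
    slmgr /ipk TMJ3Y-NTRTM-FJYXT-T22BY-CWG3J
    slmgr /ato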

In order for the VMs to talk to the host for activation, the Data Exchange option needs to be enabled under Integration Services. To ensure this is enabled, open the Settings of the VM and check that the option is selected.
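
The same check can be done from the host in PowerShell; “VM01” below is a placeholder VM name:

    # Verify, then enable, the Data Exchange (Key-Value Pair Exchange) integration service
    Get-VMIntegrationService -VMName "VM01" -Name "Key-Value Pair Exchange"
    Enable-VMIntegrationService -VMName "VM01" -Name "Key-Value Pair Exchange"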

Hyper-V, Server 2016, Virtualization

Server 2016 – Receive Side Scaling – RSS

RSS enables network adapters to distribute the kernel-mode network processing load across multiple processor cores in multi-core computers. The distribution of this processing makes it possible to support higher network traffic loads than would be possible if only a single core were used. In Windows Server 2012 and later, RSS was enhanced to support computers with more than 64 processors. RSS achieves this by spreading the network processing load across many processors and actively load balancing TCP terminated traffic.

  • When enabled, a network adapter I/O queue uses more than a single processor core; if not enabled, it uses a single core
  • VMMQ has multiple queues and cores
  • RSS is older technology, but it doesn’t have rigorous hardware requirements
  • Can be enabled on a physical NIC (RSS) or a virtual NIC (vRSS)

You can use Virtual Receive Side Scaling (vRSS) to configure a virtual network adapter to load balance incoming network traffic across multiple logical processor cores in a VM or multiple physical cores for a host virtual Network Interface Card (vNIC).

This configuration allows the load from a virtual network adapter to be distributed across multiple virtual processors in a virtual machine (VM), allowing the VM to process more network traffic more rapidly than it can with a single logical processor.

  • Load balances incoming network traffic across multiple virtual processors
  • With RSS it is physical network adapters and physical processor cores; with vRSS it is Hyper-V network adapters and virtual processor cores
  • vRSS requires that the physical network adapters support VMQ; you can’t use vRSS without VMQ-capable adapters
  • Run Get-NetAdapterVMQ as administrator to check whether an adapter supports VMQ, as shown below
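
A short sketch of that check plus enabling vRSS on a guest adapter; the VM name “VM01” is a placeholder:

    # Check which physical adapters support VMQ (required for vRSS)
    Get-NetAdapterVmq | Format-Table Name, Enabled, NumberOfReceiveQueues

    # Enable vRSS on the VM's virtual network adapter
    Set-VMNetworkAdapter -VMName "VM01" -VrssEnabled $true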

Hyper-V, Server 2016, Virtualization

Server 2016 – Virtual Machine Multi-Queue – VMMQ

Virtual Machine Device Queues (VMDq) is a technology that allows the network adapter to create multiple separate queues, distributing the processing load across multiple cores.

Without VMDq, all incoming traffic on a physical adapter linked to a virtual switch is processed by a single processing core, no matter how many virtual machines are attached to the switch.
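
VMQ can be inspected and enabled per physical adapter from PowerShell; “Ethernet 2” is a placeholder adapter name:

    # Inspect, then enable, VMQ on a physical adapter
    Get-NetAdapterVmq -Name "Ethernet 2"
    Enable-NetAdapterVmq -Name "Ethernet 2"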

  • VMMQ allows multiple I/O queues on network adapters to map to multiple virtual processor cores on VMs

  • Each I/O queue has an affinity with a specific virtual processor core

  • Once enabled on the physical NIC, enable it in the hardware acceleration section of the virtual NIC

  • VM must be assigned multiple virtual cores to take advantage of VMMQ

Run Get-NetAdapterVMQ as administrator to verify whether an adapter supports VMQ.
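
Enabling VMMQ on a guest adapter is a per-VM setting. A sketch, assuming a VM named “VM01” and four queue pairs:

    # Enable VMMQ on the VM's virtual network adapter and request 4 queue pairs
    Set-VMNetworkAdapter -VMName "VM01" -VmmqEnabled $true -VmmqQueuePairs 4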

 

Hyper-V, Server 2016, Virtualization

Server 2016 – SMB Multichannel

SMB Multichannel, a feature included with Windows Server 2012 R2 and Windows Server 2012 and part of the Server Message Block (SMB) 3.0 protocol, increases the network performance and availability of file servers.

SMB Multichannel enables file servers to use multiple network connections simultaneously. It facilitates aggregation of network bandwidth and network fault tolerance when multiple paths are available between the SMB 3.0 client and the SMB 3.0 server. This capability allows server applications to take full advantage of all available network bandwidth and makes them resilient to network failures.

SMB Multichannel provides the following capabilities:

  • Increased throughput. The file server can simultaneously transmit additional data by using multiple connections for high-speed network adapters or multiple network adapters.
  • Network fault tolerance. When clients simultaneously use multiple network connections, the clients can continue without interruption despite the loss of a network connection.
  • Automatic configuration. SMB Multichannel automatically discovers multiple available network paths and dynamically adds connections as necessary.

Requirements:

SMB Multichannel has the following requirements:

  • At least two computers that run Windows Server 2012 R2, Windows Server 2012, or Windows 8 are required. No additional features have to be installed; SMB Multichannel is enabled by default.
  • At least one of the following configurations:
    • Multiple network adapters
    • One or more network adapters that support Receive Side Scaling (RSS)
    • One or more network adapters that are configured by using NIC Teaming
    • One or more network adapters that support remote direct memory access (RDMA)
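
You can see at a glance which client-side interfaces meet these configurations:

    # List client NICs with the capabilities SMB Multichannel looks for
    Get-SmbClientNetworkInterface | Format-Table FriendlyName, RssCapable, RdmaCapable, Speed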

In short:

  • Uses all available NICs in a computer to share the file sharing load
  • Does not require the NICs to be on the same subnet
  • SMB 1 and SMB 2 clients use a single NIC connection to retrieve files
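
To confirm SMB Multichannel is actually in use between a client and a server, a couple of quick checks:

    # On the server: Multichannel is enabled by default
    Get-SmbServerConfiguration | Select-Object EnableMultiChannel

    # On the client: show the active multichannel connections to servers
    Get-SmbMultichannelConnection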

Hyper-V, Server 2016, Virtualization

Server 2016 – Switch Embedded Teaming

SET is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch.

SET allows you to group between one and eight physical Ethernet network adapters into one or more software-based virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the event of a network adapter failure.

Windows Server 2012 R2 doesn’t allow RDMA on a NIC bound to a NIC team or a Hyper-V virtual switch, so it required two sets of NICs: one for RDMA and one for regular IP traffic.

SET allows a single set of NICs to be used for the Hyper-V vSwitch together with RDMA; the same NICs carry regular IP traffic as well as RDMA.
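
A SET team is created together with the vSwitch. A minimal sketch; the switch name “SETswitch” and the adapter names are placeholders:

    # Create a Hyper-V switch with Switch Embedded Teaming across two physical NICs
    New-VMSwitch -Name "SETswitch" -NetAdapterName "Ethernet 1","Ethernet 2" -EnableEmbeddedTeaming $true

    # Add a host vNIC on the SET switch and enable RDMA on it
    Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB1"
    Enable-NetAdapterRdma -Name "vEthernet (SMB1)"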
