Cloud Computing, Containers, Hyper-V, Microsoft Azure, Nano Server, Networking / Infrastructure, Server 2019, Virtualization

Server 2019 is now available in preview


Windows Server 2019 is built on the strong foundation of Windows Server 2016 and focuses on four consistent themes: Hybrid, Security, Application Platform, and Hyper-converged infrastructure. Many people reckon Microsoft is slowly pushing every customer toward the cloud and that we will soon have little option but to move there. They will do this by making it more costly to stay on-premises, and starting with this edition they have put their prices up.

Hybrid Cloud: The most common scenario for many companies is a hybrid approach, one that combines on-premises and cloud environments working together. Extending Active Directory, synchronizing file servers, and backing up to the cloud are just a few examples of what companies are already doing today to extend their datacenters to the public cloud. In addition, a hybrid approach also allows apps running on-premises to take advantage of innovation in the cloud, such as Artificial Intelligence and IoT. Microsoft also introduced Project Honolulu in 2017, which will be a one-stop management tool for IT pros.

Security: Microsoft’s approach to security is three-fold – Protect, Detect and Respond.
On the Protect front, they introduced Shielded VMs in Windows Server 2016, which were enthusiastically received by customers. Shielded VMs protect virtual machines (VMs) from compromised or malicious administrators in the fabric, so that only VM admins can access them, and only on a known, healthy, and attested guarded fabric. In Windows Server 2019, Shielded VMs will also support Linux VMs. They are also extending VMConnect to improve troubleshooting of Shielded VMs for both Windows Server and Linux, and adding Encrypted Networks, which let admins encrypt network segments with a flip of a switch to protect the network layer between servers.

On the Detect and Respond front, in Windows Server 2019 they are embedding Windows Defender Advanced Threat Protection (ATP) into the operating system; it provides preventative protection and detects attacks and zero-day exploits, among other capabilities. This gives companies access to deep kernel and memory sensors, improves performance and anti-tampering, and enables response actions on server machines.

Application Platform: Microsoft focuses on the developer experience. Two key aspects to call out for the developer community are improvements to Windows Server containers and the Windows Subsystem for Linux (WSL).

In Windows Server 2019, Microsoft's goal is to reduce the Server Core base container image to a third of its current size of 5 GB. This will reduce the download time of the image by 72%, further optimizing development time and performance.

They are also continuing to improve the choices available when it comes to orchestrating Windows Server container deployments. Kubernetes support is currently in beta, and in Windows Server 2019, they are introducing significant improvements to compute, storage, and networking components of a Kubernetes cluster.

Another improvement is that they previously extended the Windows Subsystem for Linux (WSL) into Insider builds for Windows Server, so that customers can run Linux containers side by side with Windows containers on Windows Server. In Windows Server 2019, they are continuing to improve WSL, helping Linux users bring their scripts to Windows while using industry standards like OpenSSH, curl, and tar.

Hyper-converged infrastructure (HCI): HCI is one of the latest trends in the server industry today. Microsoft partnered with industry-leading hardware vendors to provide an affordable yet extremely robust HCI solution with validated designs. In Windows Server 2019 they are building on this platform by adding scale, performance, and reliability. They are also adding the ability to manage HCI deployments in Project Honolulu, to simplify the management and day-to-day activities of HCI environments.

Hyper-V, Server 2012 / R2, Server 2016, Virtualization

Hyper-V Integration Services

Hyper-V Integration Services allow a virtual machine to communicate with the Hyper-V host. Many of these services are conveniences, such as guest file copy, while others are important to the virtual machine's ability to function correctly, such as time synchronization. This set of services is sometimes referred to as integration components.


The Integration Services pane lists all integration services available on the Hyper-V host, and whether they’re turned on in the virtual machine. To get the version information for a guest operating system, log on to the guest operating system, open a command prompt, and run this command:

REG QUERY "HKLM\Software\Microsoft\Virtual Machine\Auto" /v IntegrationServicesVersion

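The same registry value can also be read with PowerShell; a minimal sketch, run inside the guest:

# Run inside the guest operating system
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Virtual Machine\Auto" -Name IntegrationServicesVersion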

Integration services

| Name | Windows Service Name | Linux Daemon Name | Description | Impact on VM when disabled |
| --- | --- | --- | --- | --- |
| Hyper-V Heartbeat Service | vmicheartbeat | hv_utils | Reports that the virtual machine is running correctly. | Varies |
| Hyper-V Guest Shutdown Service | vmicshutdown | hv_utils | Allows the host to trigger virtual machine shutdown. | High |
| Hyper-V Time Synchronization Service | vmictimesync | hv_utils | Synchronizes the virtual machine's clock with the host computer's clock. | High |
| Hyper-V Data Exchange Service (KVP) | vmickvpexchange | hv_kvp_daemon | Provides a way to exchange basic metadata between the virtual machine and the host. | Medium |
| Hyper-V Volume Shadow Copy Requestor | vmicvss | hv_vss_daemon | Allows Volume Shadow Copy Service to back up the virtual machine without shutting it down. | Varies |
| Hyper-V Guest Service Interface | vmicguestinterface | hv_fcopy_daemon | Provides an interface for the Hyper-V host to copy files to or from the virtual machine. | Low |
| Hyper-V PowerShell Direct Service | vmicvmsession | not available | Provides a way to manage a virtual machine with PowerShell without a network connection. | Low |

Use Windows PowerShell to turn an integration service on or off

To do this in PowerShell, use Enable-VMIntegrationService and Disable-VMIntegrationService.

Get-VMIntegrationService -VMName "TestVM"

VMName Name                    Enabled PrimaryStatusDescription SecondaryStatusDescription
------ ----                    ------- ------------------------ --------------------------
TestVM Guest Service Interface False   OK
TestVM Heartbeat               True    OK                       OK
TestVM Key-Value Pair Exchange True    OK
TestVM Shutdown                True    OK
TestVM Time Synchronization    True    OK
TestVM VSS                     True    OK
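For example, to turn a specific service off and back on (a minimal sketch; "TestVM" and the service name "Time Synchronization" match the output above):

# Turn the Time Synchronization integration service off for the VM "TestVM"
Disable-VMIntegrationService -VMName "TestVM" -Name "Time Synchronization"

# Turn it back on
Enable-VMIntegrationService -VMName "TestVM" -Name "Time Synchronization"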

Services Overview

Hyper-V Guest Shutdown Service

Windows Service Name: vmicshutdown
Linux Daemon Name: hv_utils
Description: Allows the Hyper-V host to request that the virtual machine shut down. The host can always force the virtual machine to turn off, but that is like flipping the power switch as opposed to selecting shutdown.
Added In: Windows Server 2012, Windows 8
Impact: High. When disabled, the host can't trigger a friendly shutdown inside the virtual machine. All shutdowns will be a hard power-off, which could cause data loss or data corruption.

Hyper-V Time Synchronization Service

Windows Service Name: vmictimesync
Linux Daemon Name: hv_utils
Description: Synchronizes the virtual machine’s system clock with the system clock of the physical computer.
Added In: Windows Server 2012, Windows 8
Impact: High. When disabled, the virtual machine's clock will drift erratically.

Hyper-V Data Exchange Service (KVP)

Windows Service Name: vmickvpexchange
Linux Daemon Name: hv_kvp_daemon
Description: Provides a mechanism to exchange basic metadata between the virtual machine and the host.
Added In: Windows Server 2012, Windows 8
Impact: When disabled, virtual machines running Windows 8 or Windows Server 2012 or earlier will not receive updates to Hyper-V integration services. Disabling data exchange may also impact some kinds of monitoring and host-side diagnostics.

The data exchange service (sometimes called KVP) shares small amounts of machine information between virtual machine and the Hyper-V host using key-value pairs (KVP) through the Windows registry. The same mechanism can also be used to share customized data between the virtual machine and the host.
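As an illustration of the registry-based exchange, the host-supplied values can be read from inside a Windows guest; a minimal sketch:

# Inside the guest: list the key-value pairs the Hyper-V host has pushed down
# (includes values such as HostName and VirtualMachineName)
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters"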

Hyper-V Volume Shadow Copy Requestor

Windows Service Name: vmicvss
Linux Daemon Name: hv_vss_daemon
Description: Allows Volume Shadow Copy Service to back up applications and data on the virtual machine.
Added In: Windows Server 2012, Windows 8
Impact: When disabled, the virtual machine cannot be backed up while running (using VSS).

The Volume Shadow Copy Requestor integration service is required for Volume Shadow Copy Service (VSS). The Volume Shadow Copy Service (VSS) captures and copies images for backup on running systems, particularly servers, without unduly degrading the performance and stability of the services they provide. This integration service makes that possible by coordinating the virtual machine’s workloads with the host’s backup process.

Hyper-V Guest Service Interface

Windows Service Name: vmicguestinterface
Linux Daemon Name: hv_fcopy_daemon
Description: Provides an interface for the Hyper-V host to bidirectionally copy files to or from the virtual machine.
Added In: Windows Server 2012 R2, Windows 8.1
Impact: When disabled, the host cannot copy files to and from the guest using Copy-VMFile.
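A minimal host-to-guest copy sketch using Copy-VMFile (the VM name and paths are example values, and the Guest Service Interface must be enabled on the VM):

# Copy a file from the Hyper-V host into the guest's file system
Copy-VMFile "TestVM" -SourcePath "C:\Temp\config.xml" -DestinationPath "C:\Temp\config.xml" -FileSource Host -CreateFullPath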

Hyper-V PowerShell Direct Service

Windows Service Name: vmicvmsession
Linux Daemon Name: n/a
Description: Provides a mechanism to manage a virtual machine with PowerShell via a VM session, without a virtual network.
Added In: Windows Server TP3, Windows 10
Impact: Disabling this service prevents the host from being able to connect to the virtual machine with PowerShell Direct.

Notes:
The service was originally named Hyper-V VM Session Service.
PowerShell Direct is under active development and is only available on Windows 10 / Windows Server Technical Preview 3 or later hosts and guests.

PowerShell Direct allows PowerShell management inside a virtual machine from the Hyper-V host regardless of any network configuration or remote management settings on either the Hyper-V host or the virtual machine. This makes it easier for Hyper-V Administrators to automate and script management and configuration tasks.
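For example, from the Hyper-V host (a minimal sketch; "TestVM" is an example name and you will be prompted for guest credentials):

# Open an interactive session inside the guest, no network connection required
Enter-PSSession -VMName "TestVM" -Credential (Get-Credential)

# Or run a single command inside the guest
Invoke-Command -VMName "TestVM" -Credential (Get-Credential) -ScriptBlock { Get-Service vmic* }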

Hyper-V, Server 2012 / R2, Server 2016, Virtualization

Advantages of Generation 2 VMs

Generation 2 VMs use synthetic drivers and software-based devices instead of the emulated hardware used by Generation 1 VMs, and provide advantages that include the following (a sample creation command follows the list):

  • UEFI boot Instead of using the traditional BIOS, Generation 2 VMs use the Unified Extensible Firmware Interface (UEFI) and support Secure Boot, which requires the system to boot from digitally signed components, and they can boot from drives larger than 2 TB with GUID partition tables. UEFI is fully emulated in VMs, regardless of the firmware in the physical host server.
  • SCSI disks Generation 2 VMs omit the IDE disk controller used by Generation 1 VMs to boot the system and use a high-performance virtual SCSI controller for all disks, enabling the VMs to boot from VHDX files, support up to 64 devices per controller, and perform hot disk adds and removes.
  • PXE boot The native virtual network adapter in Generation 2 VMs supports booting from a network server using the Preboot Execution Environment (PXE). Generation 1 VMs require you to use the legacy network adapter to support PXE booting.
  • SCSI boot Generation 2 VMs can boot from a SCSI device, which Generation 1 VMs cannot. Generation 2 VMs have no IDE or floppy controller support, and therefore cannot boot from these devices.
  • Boot volume size Generation 2 VMs can boot from a volume up to 64 TB in size, while Generation 1 boot volumes are limited to 2 TB.
  • VHDX boot volume resizing In a Generation 2 VM, you can expand or reduce a VHDX boot volume while the VM is running.
  • Software-based peripherals The keyboard, mouse, and video drivers in a Generation 2 VM are software-based, not emulated, so they are less resource-intensive and provide a more secure environment.
  • Hot network adapters In Generation 2 VMs, you can add and remove virtual network adapters while the VM is running.
  • Enhanced Session Mode Generation 2 VMs support Enhanced Session Mode, which provides Hyper-V Manager and VMConnect connections to the VM with additional capabilities, such as audio, clipboard support, printer access, and USB devices.
  • Shielded virtual machines Generation 2 VMs can be shielded, so that the disk and the system state are encrypted and accessible only by authorized administrators.
  • Storage Spaces Direct Generation 2 VMs running Windows Server 2016 Datacenter Edition support Storage Spaces Direct, which can provide a high-performance, fault-tolerant storage solution using local drives.
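The generation is chosen when the VM is created and cannot be changed afterwards; a minimal sketch for creating a Generation 2 VM (names, paths, and sizes are example values):

# Create a Generation 2 VM with a new VHDX boot disk
New-VM -Name "Gen2Test" -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\Gen2Test.vhdx" -NewVHDSizeBytes 60GB -SwitchName "ExternalSwitch"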

 

Hyper-V, Server 2012 / R2, Server 2016, Virtualization

Enabling SR-IOV on VMs

The single root I/O virtualization (SR-IOV) interface is an extension to the PCI Express (PCIe) specification. SR-IOV allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions. SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack. Because a virtual function (VF) is assigned to a child partition, network traffic flows directly between the VF and the child partition. As a result, the I/O overhead in the software emulation layer is diminished, and network performance is nearly the same as in nonvirtualized environments.

Technically, there are two functions implemented by SR-IOV: physical functions (PFs) and virtual functions (VFs). There are a number of PCI devices available in which the PFs have been implemented, but Microsoft Hyper-V provides SR-IOV support only for networking. In other words, Microsoft Hyper-V provides VFs to allow VMs to communicate with the physical network adapters directly. Since the VMs can communicate directly with the physical network adapters, organizations may benefit from increased I/O throughput, reduced CPU utilization on Hyper-V hosts for processing network traffic, and reduced network latency thanks to the direct communication. Before you can use SR-IOV for a Hyper-V VM, you need to meet the following prerequisites:

  • The SR-IOV functionality is currently only available to Windows 8 and Windows Server 2012 guests.
  • Hyper-V must be running on a Windows Server 2012 or later operating system.
  • You must have an SR-IOV-capable physical network adapter that implements the PFs and can understand the VFs’ requests coming from the VMs.
  • You must have an external virtual switch that can understand the SR-IOV traffic.
  • The server’s motherboard chipset must also support SR-IOV.

Enabling SR-IOV is a two-step process. First, you need to create an external virtual switch with SR-IOV enabled; if a switch already exists without SR-IOV, you will need to delete and recreate it, because SR-IOV can only be enabled while the switch is being created. Once SR-IOV is enabled on the external virtual switch, you can enable SR-IOV on the VMs by checking the "Enable SR-IOV" checkbox found under "Hardware Acceleration" in the Network Adapter settings of the VM's properties.
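The same two steps can be scripted with PowerShell; a minimal sketch, assuming a physical adapter named "Ethernet 2", a switch name of "SRIOV-Switch", and a VM named "TestVM":

# Step 1: create the external switch with SR-IOV enabled (this can only be set at switch creation time)
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true

# Step 2: enable SR-IOV on the VM's network adapter by assigning a non-zero IOV weight
Set-VMNetworkAdapter -VMName "TestVM" -IovWeight 50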


 

 

Networking / Infrastructure, Server 2012 / R2, Server 2016, Virtualization

RDMA and SMB Direct

Remote Direct Memory Access (RDMA) is a technology that allows data to be written directly to memory without involving the processor, cache, or operating system. RDMA enables more direct data movement in and out of a server by implementing a transport protocol in the network interface card (NIC). The technology supports a feature called zero-copy networking that makes it possible to read data directly from the main memory of one computer and write that data directly to the main memory of another computer.

  • Enabled by default in Windows Server 2016

  • RDMA capable network adapter

  • RDMA and SMB Multichannel must be enabled and running

  • Best used with 10 gigabit plus networks


SMB Direct is SMB over RDMA.

Network adapters that have RDMA can function at full speed with very low latency, while using very little CPU. For workloads such as Hyper-V or Microsoft SQL Server, this enables a remote file server to resemble local storage. SMB Direct includes:

  • Increased throughput: Leverages the full throughput of high speed networks where the network adapters coordinate the transfer of large amounts of data at line speed.
  • Low latency: Provides extremely fast responses to network requests, and, as a result, makes remote file storage feel as if it is directly attached block storage.
  • Low CPU utilization: Uses fewer CPU cycles when transferring data over the network, which leaves more power available to server applications.

Requires:
– Two servers running Windows Server 2012 or later
– One or more network adapters with RDMA capability
– Disabling SMB Multichannel or RDMA disables SMB Direct
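To check these prerequisites on a given host, a minimal sketch (run in an elevated PowerShell session):

# List network adapters and whether RDMA is enabled on them
Get-NetAdapterRdma

# Confirm that SMB sees RDMA-capable interfaces
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# Show the SMB Multichannel connections currently in use (run on the SMB client)
Get-SmbMultichannelConnection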

Hyper-V, Server 2016, Virtualization

Server 2016 – Receive Side Scaling – RSS

RSS enables network adapters to distribute the kernel-mode network processing load across multiple processor cores in multi-core computers. The distribution of this processing makes it possible to support higher network traffic loads than would be possible if only a single core were used. In Windows Server 2012, RSS was enhanced to support computers with more than 64 processors. RSS achieves this by spreading the network processing load across many processors and actively load balancing TCP terminated traffic.

– When enabled, a network adapter I/O queue uses more than a single processor core
– If not enabled, a single core is used
– VMMQ has multiple queues and cores
– RSS is an older technology, but it doesn't have rigorous hardware requirements
– Can be enabled on a physical NIC (RSS)
– Can be enabled on a virtual NIC (vRSS)

You can use Virtual Receive Side Scaling (vRSS) to configure a virtual network adapter to load balance incoming network traffic across multiple logical processor cores in a VM or multiple physical cores for a host virtual Network Interface Card (vNIC).

This configuration allows the load from a virtual network adapter to be distributed across multiple virtual processors in a virtual machine (VM), allowing the VM to process more network traffic more rapidly than it can with a single logical processor.


– Load balances incoming network traffic across multiple virtual processors
– With RSS, it is physical network adapters and physical processor cores
– With vRSS, it is Hyper-V network adapters and virtual processor cores
– vRSS requires that physical network adapters support VMQ
– You can't use vRSS without VMQ-capable adapters
– Run Get-NetAdapterVMQ as administrator to check whether an adapter supports VMQ (example commands below)
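A minimal sketch of the related checks and switches (the adapter and VM names are example values, and -VrssEnabled assumes a Windows Server 2016 host):

# Check RSS and VMQ capability and state on the host's physical adapters
Get-NetAdapterRss
Get-NetAdapterVmq

# Enable RSS on a physical NIC
Enable-NetAdapterRss -Name "Ethernet 2"

# Enable vRSS on a VM's network adapter
Set-VMNetworkAdapter -VMName "TestVM" -VrssEnabled $true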

Hyper-V, Server 2016, Virtualization

Server 2016 – Virtual Machine Multi Queues – VMMQ

Virtual Machine Device Queues (VMDq) is a technology that allows the network adapter to create multiple separate queues, distributing the processing load across multiple cores.

Without VMDq, all incoming traffic on a physical adapter linked to a virtual switch serving many virtual machines is processed by a single processing core.

  • VMMQ allows multiple I/O queues on network adapters to map to multiple virtual processor cores on VMs

  • Each I/O queue has an affinity with a specific virtual processor core

  • Once enabled on the physical NIC, enable it in the hardware acceleration section of the virtual NIC

  • VM must be assigned multiple virtual cores to take advantage of VMMQ

Run Get-NetAdapterVMQ as Administrator to verify whether the adapter supports VMQ.
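A minimal sketch (the VM name is an example value, and -VmmqEnabled assumes a Windows Server 2016 host with a VMMQ-capable adapter):

# Verify VMQ support on the host's physical adapters
Get-NetAdapterVmq

# Enable VMMQ on the VM's network adapter
Set-VMNetworkAdapter -VMName "TestVM" -VmmqEnabled $true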