Cloud Computing, Containers, Hyper-V, Microsoft Azure, Nano Server, Networking / Infrastructure, Server 2019, Virtualization

Server 2019 is now available in preview

2019

Windows Server 2019 is built on the strong foundation of Windows Server 2016 and focuses on four consistent themes – Hybrid, Security, Application Platform, and Hyper-converged Infrastructure. Most people reckon Microsoft is slowly pushing every customer into the cloud, and soon we will see no option but to move there. They will do this by making it costly to stay on premises, and starting with this edition they have put their prices up.

Hybrid Cloud: This is the most common scenario for many companies: a hybrid approach, one that combines on-premises and cloud environments working together. Extending Active Directory, synchronizing file servers, and backing up to the cloud are just a few examples of what companies are already doing today to extend their datacenters to the public cloud. In addition, a hybrid approach also allows apps running on-premises to take advantage of innovation in the cloud, such as Artificial Intelligence and IoT. Microsoft also introduced Project Honolulu in 2017, and this will be a one-stop management tool for IT pros.

Security: Microsoft’s approach to security is three-fold – Protect, Detect and Respond.
On the Protect front, they introduced Shielded VMs in Windows Server 2016, which was enthusiastically received by their customers. Shielded VMs protect virtual machines (VMs) from compromised or malicious administrators in the fabric, so only VM admins can access them and only on a known, healthy, and attested guarded fabric. In Windows Server 2019, Shielded VMs will now support Linux VMs. They are also extending VMConnect to improve troubleshooting of Shielded VMs for Windows Server and Linux. They are adding Encrypted Networks, which will let admins encrypt network segments with the flip of a switch to protect the network layer between servers.

On the Detect and Respond front, in Windows Server 2019 they are embedding Windows Defender Advanced Threat Protection (ATP) into the operating system, providing preventative protection and detecting attacks and zero-day exploits, among other capabilities. This gives companies access to deep kernel and memory sensors, improving performance and anti-tampering, and enabling response actions on server machines.

Application Platform: Microsoft focuses on the developer experience. Two key aspects to call out for the developer community are improvements to Windows Server containers and the Windows Subsystem for Linux (WSL).

 In Windows Server 2019, Microsoft’s goal is to reduce the Server Core base container image to a third of its current size of 5 GB. This will reduce download time of the image by 72%, further optimizing the development time and performance.
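As a rough illustration of what this means in practice (the repository names below are only examples – the exact image names and tags depend on which preview build you pull), you can compare the insider base image with the current one from a PowerShell prompt with Docker installed:

# Image names/tags are illustrative and depend on the preview build available to you
docker pull microsoft/windowsservercore-insider   # preview Server Core base image
docker pull microsoft/windowsservercore           # current Server Core base image (roughly 5 GB)
docker images                                     # compare the reported sizes of the two images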

They are also continuing to improve the choices available when it comes to orchestrating Windows Server container deployments. Kubernetes support is currently in beta, and in Windows Server 2019, they are introducing significant improvements to compute, storage, and networking components of a Kubernetes cluster.

Another improvement is that they previously extended the Windows Subsystem for Linux (WSL) into insider builds for Windows Server, so that customers can run Linux containers side-by-side with Windows containers on a Windows Server. In Windows Server 2019, they are continuing to improve WSL, helping Linux users bring their scripts to Windows while using industry standards like OpenSSH, curl, and tar.
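As a quick, hedged sketch (feature and tool availability depends on the exact insider build), enabling WSL and using the built-in curl and tar from PowerShell might look like this; the URL and file names are only examples:

# Enable the Windows Subsystem for Linux feature (a restart is required)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

# Use the built-in curl.exe and tar.exe to fetch and unpack an archive (example URL)
curl.exe -L -o sample.tar.gz https://example.com/sample.tar.gz
tar.exe -xzf sample.tar.gz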

Hyper-converged infrastructure (HCI): HCI is one of the latest trends in the server industry today. They partnered with industry leading hardware vendors to provide an affordable and yet extremely robust HCI solution with validated design. In Windows Server 2019 they are building on this platform by adding scale, performance, and reliability. They are also adding the ability to manage HCI deployments in Project Honolulu, to simplify the management and day-to-day activities on HCI environments.

Networking / Infrastructure, Server 2016

Publishing Remote Desktop Gateway through Web Application Proxy

If you want to restrict access to your Remote Desktop Gateway and add pre-authentication for remote access, you can publish it through Web Application Proxy. This is a really good way to make sure you have rich pre-authentication for RDG, including MFA. Publishing without pre-authentication is also an option and provides a single point of entry into your systems.

How to publish an application in RDG using Web Application Proxy pass-through authentication

  1. Installation will be different depending on whether your RD Web Access (/rdweb) and RD Gateway (/rpc) roles are on the same server or on different servers.

     

  2. If the RD Web Access and RD Gateway roles are hosted on the same RDG server, you can simply publish the root FQDN in Web Application Proxy such as, https://connect.abc.com/.

    You can also publish the two virtual directories individually e.g. https://connect.abc.com/rdweb/ and https://connect.abc.com/rpc/.

     

  3. If the RD Web Access and the RD Gateway are hosted on separate RDG servers, you have to publish the two virtual directories individually. You can use the same or different external FQDN’s e.g. https://rdweb.abc.com/rdweb/ and https://gateway.abc.com/rpc/.

     

  4. If the External and Internal FQDNs are different, you should disable request header translation on the RDWeb publishing rule. This can be done by running the following PowerShell command on the Web Application Proxy server:

    Get-WebApplicationProxyApplication applicationname | Set-WebApplicationProxyApplication -DisableTranslateUrlInRequestHeaders:$true
    
    Note
    If you need to support rich clients such as RemoteApp and Desktop Connections or iOS Remote Desktop connections, these do not support pre-authentication, so you have to publish RDG using pass-through authentication.
To publish a web application:

Add-WebApplicationProxyApplication -Name "CompApp" `
    -ExternalPreauthentication ADFS `
    -ExternalUrl https://CompApp.Contoso.com/ `
    -ExternalCertificateThumbprint "70DF0AB8434060DC869D37BBAEF770ED5DD0C32B" `
    -BackendServerUrl http://CompApp:8080/ `
    -ADFSRelyingPartyName "CompAppRP"

To omit external pre-authentication:

Add-WebApplicationProxyApplication -Name "CompApp" `
    -BackendServerUrl http://CompApp/ `
    -ExternalUrl https://CompApp.Contoso.com/ `
    -ExternalPreauthentication "PassThrough" `
    -ExternalCertificateThumbprint "A1A657E1A4F276FCC45613C0F6B3BC91AFC4633C"
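To double-check what is published and how pre-authentication is configured, you can list the applications on the Web Application Proxy server (a quick, optional verification step):

Get-WebApplicationProxyApplication | Format-List Name, ExternalUrl, BackendServerUrl, ExternalPreauthentication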
Networking / Infrastructure, Server 2012 / R2, Server 2016, Virtualization

RDMA and SMB Direct

Remote Direct Memory Access (RDMA) is a technology that allows data to be written directly to memory without involving the processor, cache, or operating system. RDMA enables more direct data movement in and out of a server by implementing a transport protocol in the network interface card (NIC). The technology supports a feature called zero-copy networking that makes it possible to read data directly from the main memory of one computer and write that data directly to the main memory of another computer.

  • Enabled by default in Windows Server 2016

  • RDMA capable network adapter

  • RDMA and SMB Multichannel must be enabled and running

  • Best used with 10-gigabit or faster networks

rdma

SMB Direct is SMB over RDMA.

Network adapters that have RDMA can function at full speed with very low latency, while using very little CPU. For workloads such as Hyper-V or Microsoft SQL Server, this enables a remote file server to resemble local storage. SMB Direct includes:

  • Increased throughput: Leverages the full throughput of high speed networks where the network adapters coordinate the transfer of large amounts of data at line speed.
  • Low latency: Provides extremely fast responses to network requests, and, as a result, makes remote file storage feel as if it is directly attached block storage.
  • Low CPU utilization: Uses fewer CPU cycles when transferring data over the network, which leaves more power available to server applications.

Requires
– Two servers running Windows Server 2012 or later
– One or more network adapters with RDMA capability
– Note that disabling SMB Multichannel or RDMA disables SMB Direct
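A quick, hedged way to verify these requirements from PowerShell (the adapter name below is only an example for your environment):

Get-NetAdapterRdma                                        # shows which adapters are RDMA-capable and whether RDMA is enabled
Get-SmbClientNetworkInterface                             # the RDMA Capable column should show True for the relevant interfaces
Get-SmbServerConfiguration | Select EnableMultiChannel    # SMB Multichannel must be enabled
Enable-NetAdapterRdma -Name "Ethernet 2"                  # example adapter name; re-enables RDMA if it was turned off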

HP Chassis, Networking / Infrastructure

Backing up an HP Chassis

It is always best practice to take a backup of a chassis before making any changes, especially a firmware update.

First of all, access your desired chassis and then go to:

Enclosure Settings > Enclosure Bay IP Addressing > Configuration Scripts

hp

On the right-hand side you will see:

SHOW CONFIG: Click to view a configuration script containing the current settings for this enclosure.

hp1

SHOW ALL: Click to view a script containing a list of the enclosure’s current inventory.

hp2

You can save these configuration files somewhere safe using Notepad; these two files are what you will use for recovery if something goes wrong.

When you need to recover, go to

Enclosure Settings > Enclosure Bay IP Addressing > Configuration Scripts again. On the left-hand side:

hp3

 

Browse to the backup configuration files you saved earlier and upload them. It might take a while; after it completes, make sure everything is as expected.
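If you would rather capture these backups from the command line than copy and paste from the browser, a rough sketch like the one below can work, assuming the chassis Onboard Administrator is reachable over SSH and the Posh-SSH module is installed (the module, host name, and file paths are assumptions for illustration):

# Assumes the Posh-SSH module (Install-Module Posh-SSH) and SSH access to the Onboard Administrator
$cred    = Get-Credential                                     # OA admin credentials
$session = New-SSHSession -ComputerName "oa.example.local" -Credential $cred

# SHOW CONFIG and SHOW ALL return the same output the web UI displays
(Invoke-SSHCommand -SSHSession $session -Command "SHOW CONFIG").Output | Out-File "C:\Backups\chassis-config.txt"
(Invoke-SSHCommand -SSHSession $session -Command "SHOW ALL").Output | Out-File "C:\Backups\chassis-inventory.txt"

Remove-SSHSession -SSHSession $session | Out-Null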

Failover Clustering, Hyper-V, Networking / Infrastructure

Microsoft Scale-Out File Server

Scale-out File Server: Traditional file shares prior to Server 2012 had certain limitations, and in some cases these limitations turned into issues. Because only one node in the cluster can access the disk associated with the virtual file server and its SMB file share, I/O throughput in the cluster is limited.

1fo.GIF

With Server 2012, Failover Clustering gives you another option: Scale-Out File Server for application data. This allows multiple nodes to have simultaneous high-speed direct I/O access to the disks associated with SMB shares on Cluster Shared Volumes. Load balancing across the cluster is achieved with a new cluster resource called the Distributed Network Name (DNN), which uses a round-robin scheduling algorithm to select the next node in the cluster for SMB connections.
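As a minimal sketch of how this looks in PowerShell (it assumes a failover cluster with Cluster Shared Volumes already exists; the role name, path, and access group are illustrative):

# Create the Scale-Out File Server role on an existing cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a continuously available share on a CSV for application data
New-Item -Path "C:\ClusterStorage\Volume1\VMStore" -ItemType Directory
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" -FullAccess "CONTOSO\Hyper-V-Hosts" -ContinuouslyAvailable $true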

20o

Key benefits provided by Scale-Out File Server include:

  • Active-Active file shares: All cluster nodes can accept and serve SMB client requests. By making the file share content accessible through all cluster nodes simultaneously, SMB 3.0 clusters and clients cooperate to provide transparent failover to alternative cluster nodes during planned maintenance and unplanned failures, without service interruption.
  • Increased bandwidth: The maximum share bandwidth is the total bandwidth of all file server cluster nodes. Unlike previous versions of Windows Server, the total bandwidth is no longer constrained to the bandwidth of a single cluster node; but rather, the capability of the backing storage system defines the constraints. You can increase the total bandwidth by adding nodes.
  • CHKDSK with zero downtime: CHKDSK in Windows Server 2012 is significantly enhanced to dramatically shorten the time a file system is offline for repair. Clustered shared volumes (CSVs) take this one step further by eliminating the offline phase. A CSV File System (CSVFS) can use CHKDSK without impacting applications with open handles on the file system.
  • Clustered Shared Volume cache:  CSVs in Windows Server 2012 introduces support for a Read cache, which can significantly improve performance in certain scenarios, such as in Virtual Desktop Infrastructure (VDI).
  • Simpler management: With Scale-Out File Server, you create the scale-out file servers, and then add the necessary CSVs and file shares. It is no longer necessary to create multiple clustered file servers, each with separate cluster disks, and then develop placement policies to ensure activity on each cluster node.
  • Automatic rebalancing of Scale-Out File Server clients: In Windows Server 2012 R2, automatic rebalancing improves scalability and manageability for scale-out file servers. SMB client connections are tracked per file share (instead of per server), and clients are then redirected to the cluster node with the best access to the volume used by the file share. This improves efficiency by reducing redirection traffic between file server nodes. Clients are redirected following an initial connection and when cluster storage is reconfigured.

But scale-out file servers are not ideal for all scenarios. Microsoft gives us some examples of server applications that can store their data on a scale-out file share:

  • The Internet Information Services (IIS) Web server can store configuration and data for Web sites on a scale-out file share.
  • Hyper-V can store configuration and live virtual disks on a scale-out file share.
  • SQL Server can store live database files on a scale-out file share.
  • Virtual Machine Manager (VMM) can store a library share (which contains virtual machine templates and related files) on a scale-out file share. However, the library server itself can’t be a Scale-Out File Server – it must be on a stand-alone server or a failover cluster that doesn’t use the Scale-Out File Server cluster role. If you use a scale-out file share as a library share, you can use only technologies that are compatible with Scale-Out File Server. For example, you can’t use DFS Replication to replicate a library share hosted on a scale-out file share. It’s also important that the scale-out file server have the latest software updates installed. To use a scale-out file share as a library share, first add a library server (likely a virtual machine) with a local share or no shares at all. Then when you add a library share, choose a file share that’s hosted on a scale-out file server. This share should be VMM-managed and created exclusively for use by the library server. Also make sure to install the latest updates on the scale-out file server.

Looking at this list, these server applications use a small number of large files, whereas traditional file sharing involves a considerable number of files of varying sizes. Something else to bear in mind: some users, such as information workers, have workloads that have a greater impact on performance. For example, operations like opening and closing files, creating new files, and renaming existing files, when performed by multiple users, have an impact on performance. If a file share is enabled with continuous availability, it provides data integrity, but it also affects overall performance. Continuous availability requires that data is written through to the disk to ensure integrity in the event of a failure of a cluster node in a Scale-Out File Server. Therefore, a user who copies several large files to a file server can expect significantly slower performance on a continuously available file share.
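If you need to check or change this behaviour on a particular share, a small hedged example (the share name is illustrative):

# See which shares are continuously available
Get-SmbShare | Select-Object Name, ContinuouslyAvailable

# For an information-worker share where copy performance matters more than transparent failover
Set-SmbShare -Name "UserData" -ContinuouslyAvailable $false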

3bg

 

Certification, Cloud Computing, Networking / Infrastructure

Microsoft Azure – How to Configure a VNet-to-VNet connection

In your infrastructure you will probably have a few virtual networks (VNETs). They might be on-premises sites or Azure VNETs. You can connect these multiple VNETs to each other. Virtual network connectivity can be used simultaneously with multi-site VPNs, with a maximum of 10 VPN tunnels for a virtual network VPN gateway connecting to either other virtual networks or on-premises sites.

What I have in my scenario is two sites, one in the US and one in Europe, which we will create (basically two sites in two different regions). Connecting a virtual network to another virtual network (VNET-to-VNET) is very similar to connecting a virtual network to an on-premises site location. The differences are only a couple of steps, such as downloading the script created by Azure and running it on your on-premises gateway device. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE.

Capture121

Let’s create these VNETS now;

Log in to the Azure Classic Portal (not the Azure Portal). In the lower left-hand corner of the screen, click New. In the navigation pane, click Network Services, and then click Virtual Network. Click Custom Create to begin the configuration wizard.

Captur

On the Virtual Network Details page, enter the VNET name and choose your location (region).

On the DNS Servers and VPN Connectivity page, enter your DNS server name and IP address. You are not creating a DNS server here; this entry is purely for name resolution within this virtual network. Don't tick any boxes; leave them as they are.

Captu

On the Virtual Network Address Spaces page, specify the address range that you want to use for your virtual network. In my case, for US it will be 10.20.0.0/16. These are the dynamic IP addresses (DIPs) that will be assigned to the VMs and other role instances that you deploy to this virtual network. It's especially important to select a range that does not overlap with any of the ranges that are used for your on-premises network; if it does, you will get an error message informing you that you have chosen an overlapping network range. You can modify your subnet here and create other subnets for other services, but for now these are not required.

Capt

 

Click the check mark to create it. Then create another VNET following the steps above. I will choose 10.10.0.0/16 and North Europe for my VNET-EU.

Capt1

 

Next we need to add local networks to these virtual networks. I will configure each VNET as a local network. Microsoft refers to local networks as on-premises networks.

Capt12

 

In the lower left-hand corner of the screen, click New. In the navigation pane, click Network Services, and then click Virtual Network. Click Add Local Network

Capt23

 

On the Specify your local network details page, for Name, enter the name of a virtual network that you want to use in your VNet-to-VNet configuration. For this example, I’ll use VNET-EU, as we’ll be pointing VNET-US to this virtual network for our configuration.

For VPN Device IP Address, use any IP address for now. Typically, you'd use the actual external IP address of a VPN device, but for VNet-to-VNet configurations you will use the Gateway IP address. Given that you haven't created the gateway yet, I will use an IP address from my IP range for now (10.10.0.50). I will then go back into these settings and configure them with the corresponding gateway IP address once Azure generates it. Do the same for VNET-US and choose 10.20.0.50.

Next I have to point each VNET to the other as its Local Network. Go to Networks, click on the first VNET, and click Configure. Scroll down to Connection, tick the box for Connect to the Local Network, and choose the other VNET under Local Network.

C

 

In the virtual network address spaces section on the same page, click add gateway subnet, then click the save icon at the bottom of the page to save your configuration.

C1

 

Repeat the step for VNET-US to specify VNET-EU as a local network.

The next step is creating dynamic routing gateways for each VNET. On the Networks page, make sure the status column for your virtual network is Created.

C2

 

In the Name column, click the name of your virtual network.

On the Dashboard page, notice that this VNet doesn’t have a gateway configured yet. You’ll see this status change as you go through the steps to configure your gateway. At the bottom of the page, click Create Gateway. You must select Dynamic Routing.

C4

When the system prompts you to confirm that you want the gateway created, click Yes. Repeat the same steps for the other VNET. While your gateway is being created, notice that the gateway graphic on the page changes to yellow and says Creating Gateway. It typically takes about 15-20 minutes for the gateway to be created.

C5

After the gateways are created, they are assigned IP addresses, and we need to update the temporary Local Network IPs we assigned earlier to these gateway IPs.
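If you prefer PowerShell over the classic portal for this step, a hedged sketch using the classic (Service Management) Azure module might look like the following; the cmdlets are from that module, and the VNET names match the ones used above:

# Create a dynamic routing gateway for each VNET (classic/ASM Azure PowerShell module)
New-AzureVNetGateway -VNetName "VNET-US" -GatewayType DynamicRouting
New-AzureVNetGateway -VNetName "VNET-EU" -GatewayType DynamicRouting

# Once provisioning finishes, read the gateway VIPs to update the local network definitions
(Get-AzureVNetGateway -VNetName "VNET-US").VIPAddress
(Get-AzureVNetGateway -VNetName "VNET-EU").VIPAddress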

C7

After everything has completed, we need to make sure each connection and both sides of the gateway are using the same pre-shared key.

I will use PowerShell to complete this part. First, connect to your subscription:
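The commands I use here come from the classic Azure module; the subscription name is only an example:

# Sign in and select the subscription that contains the VNETs (classic/ASM module)
Add-AzureAccount
Get-AzureSubscription | Select-Object SubscriptionName
Select-AzureSubscription -SubscriptionName "My Subscription"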

p1

 

And then just check your VNET connections using Get-AzureVNetConnection

p2

 

Lastly run;

Set-AzureVNetGatewayKey -VNetName VNET-EU -LocalNetworkSiteName VNET-US -SharedKey 123456789

Set-AzureVNetGatewayKey -VNetName VNET-US -LocalNetworkSiteName VNET-EU -SharedKey 123456789

(Make sure you use much stronger shared keys in a production environment.)
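For example, you could generate a longer random key and set it on both sides; this is purely illustrative:

# Generate a 32-character alphanumeric pre-shared key and apply it to both connections
$key = -join ((48..57) + (65..90) + (97..122) | Get-Random -Count 32 | ForEach-Object { [char]$_ })
Set-AzureVNetGatewayKey -VNetName VNET-EU -LocalNetworkSiteName VNET-US -SharedKey $key
Set-AzureVNetGatewayKey -VNetName VNET-US -LocalNetworkSiteName VNET-EU -SharedKey $key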

p3

 

And you will see that the connection is successful.

Capture7777

Capture888

 

 

 

Cloud Computing, Networking / Infrastructure

Microsoft’s StorSimple

 

 

Standard storage management comes with a few challenges: thinking about capacity, types, tiering, provisioning, and scalability. Making plans and decisions takes a long time, and on top of that, what about backups and data archiving? They bring their own challenges. Many hours are spent just keeping everything the way we want it, within our budget.

I first heard about StorSimple when I was watching one of the Azure MVA courses. The idea of storing data based on how often it is accessed, and deciding where to store it accordingly, is simple and at the same time makes business sense. Applying a simple logic will save a lot of effort and money in the long run.

So if you are tired of buying and installing storage and rebalancing workloads all the time, you really need to have a look at StorSimple. The same is true for managing data protection: because off-site data protection is completely automated with StorSimple, you have no worries there. And if you can't perform DR because it's too disruptive and takes too long, you need to look into non-disruptive, thin recovery with StorSimple.

How does it work?

 

StorSimple is made up of SSDs (split into two layers), HDDs, and the cloud. When a user saves data, it goes to the first SSD layer on the StorSimple appliance, and it is deduplicated into the second SSD layer along with data coming from other users. While data is being accessed it remains active and stays "hot". As users keep creating new data, previously created data becomes less accessed and turns "cold"; this data is moved to the HDD layer, where it is also compressed. As more data becomes cold and moves to the HDD layer, the layer fills up and reaches its threshold, and the oldest data is moved to the cloud. When the data is in Azure, it is copied three times locally and another three copies are geo-replicated. As you may guess, there will be a delayed response when a user requests data that is in the cloud, as opposed to data saved on the other layers; the data is pulled from Azure by StorSimple and presented to the user, and all of this is managed through the Azure portal. The new StorSimple Manager is an Azure management portal which controls all functions of all StorSimple arrays across the enterprise. It provides a single, consolidated management point that uses the Internet as a control plane to configure all parameters of StorSimple 8000 arrays and to display up-to-the-minute status information in a comprehensive dashboard.