Backing up an HP Chassis

It is always best practice to take a backup of a chassis before making any changes, especially a firmware update.

First of all, access your desired chassis, then go to:

Enclosure Settings > Enclosure Bay IP Addressing > Configuration Scripts

On the right-hand side you will see:

SHOW CONFIG: Click to view a configuration script containing the current settings for this enclosure.

SHOW ALL: Click to view a script containing a list of the enclosure’s current inventory.

You can save both outputs with Notepad somewhere safe; these two files are what you will use for recovery if something goes wrong.
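
If you prefer to capture these two outputs from the command line rather than copying them out of the browser, a minimal PowerShell sketch along the following lines should work, assuming the Onboard Administrator is reachable over SSH and the third-party Posh-SSH module is installed (the address, credentials, and output folder below are placeholders):

# Assumes the Posh-SSH module: Install-Module Posh-SSH
Import-Module Posh-SSH

$oa = '10.0.0.100'                    # placeholder Onboard Administrator address
$cred = Get-Credential                # OA administrator credentials
$session = New-SSHSession -ComputerName $oa -Credential $cred -AcceptKey

# Capture the same output as SHOW CONFIG and SHOW ALL in the web interface
(Invoke-SSHCommand -SSHSession $session -Command 'SHOW CONFIG').Output |
    Out-File "C:\OA-Backups\$oa-show-config.txt"
(Invoke-SSHCommand -SSHSession $session -Command 'SHOW ALL').Output |
    Out-File "C:\OA-Backups\$oa-show-all.txt"

Remove-SSHSession -SSHSession $session | Out-Null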

When you need to recover, go to

Enclosure Settings > Enclosure Bay IP Addressing > Configuration Scripts again. On the left-hand side you will find the option to upload a configuration script.

Browse to the backup configuration files you saved earlier and upload them. The restore might take a while; after it completes, make sure everything is as expected.

Microsoft Scale-Out File Server

Scale-Out File Server: Traditional clustered file shares prior to Server 2012 had limitations, and in some cases these limitations turned into real issues. Because only one node in the cluster could access the disk associated with the virtual file server and its SMB file share, I/O throughput in the cluster was limited.

With Server 2012, Failover Clustering gives you another option: Scale-Out File Server for application data. This allows multiple nodes to have simultaneous, high-speed, direct I/O access to the disks associated with SMB shares on Cluster Shared Volumes. Load balancing across the cluster is achieved with a new cluster resource called the Distributed Network Name (DNN), which uses a round-robin scheduling algorithm to select the next node in the cluster for SMB connections.
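
To give a rough idea of what this looks like in practice, here is a minimal sketch that creates a Scale-Out File Server role and a continuously available share on a CSV using the standard FailoverClusters and SMB cmdlets; the role name, folder path, and account below are placeholders:

# Run on one of the cluster nodes (the FailoverClusters module ships with the feature)
Import-Module FailoverClusters

# Create the Scale-Out File Server role; 'SOFS01' becomes the Distributed Network Name
Add-ClusterScaleOutFileServerRole -Name SOFS01

# Create a folder on a Cluster Shared Volume and share it with continuous availability
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\Shares\VMStore
New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1\Shares\VMStore `
    -ScopeName SOFS01 -FullAccess 'CONTOSO\Hyper-V-Hosts' -ContinuouslyAvailable $true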

Key benefits provided by Scale-Out File Server include:

  • Active-Active file shares: All cluster nodes can accept and serve SMB client requests. By making the file share content accessible through all cluster nodes simultaneously, SMB 3.0 clusters and clients cooperate to provide transparent failover to alternative cluster nodes during planned maintenance and unplanned failures without service interruption.
  • Increased bandwidth: The maximum share bandwidth is the total bandwidth of all file server cluster nodes. Unlike previous versions of Windows Server, the total bandwidth is no longer constrained to the bandwidth of a single cluster node; but rather, the capability of the backing storage system defines the constraints. You can increase the total bandwidth by adding nodes.
  • CHKDSK with zero downtime: CHKDSK in Windows Server 2012 is significantly enhanced to dramatically shorten the time a file system is offline for repair. Clustered shared volumes (CSVs) take this one step further by eliminating the offline phase. A CSV File System (CSVFS) can use CHKDSK without impacting applications with open handles on the file system.
  • Clustered Shared Volume cache: CSVs in Windows Server 2012 introduce support for a read cache, which can significantly improve performance in certain scenarios, such as in Virtual Desktop Infrastructure (VDI).
  • Simpler management: With Scale-Out File Server, you create the scale-out file servers, and then add the necessary CSVs and file shares. It is no longer necessary to create multiple clustered file servers, each with separate cluster disks, and then develop placement policies to ensure activity on each cluster node.
  • Automatic rebalancing of Scale-Out File Server clients: In Windows Server 2012 R2, automatic rebalancing improves scalability and manageability for scale-out file servers. SMB client connections are tracked per file share (instead of per server), and clients are then redirected to the cluster node with the best access to the volume used by the file share. This improves efficiency by reducing redirection traffic between file server nodes. Clients are redirected following an initial connection and when cluster storage is reconfigured.

But scale-out file servers are not ideal for all scenarios. Microsoft gives some examples of server applications that can store their data on a scale-out file share:

  • The Internet Information Services (IIS) Web server can store configuration and data for Web sites on a scale-out file share.
  • Hyper-V can store configuration and live virtual disks on a scale-out file share.
  • SQL Server can store live database files on a scale-out file share.
  • Virtual Machine Manager (VMM) can store a library share (which contains virtual machine templates and related files) on a scale-out file share. However, the library server itself can’t be a Scale-Out File Server; it must be on a stand-alone server or a failover cluster that doesn’t use the Scale-Out File Server cluster role. If you use a scale-out file share as a library share, you can use only technologies that are compatible with Scale-Out File Server. For example, you can’t use DFS Replication to replicate a library share hosted on a scale-out file share. It’s also important that the scale-out file server has the latest software updates installed. To use a scale-out file share as a library share, first add a library server (likely a virtual machine) with a local share or no shares at all. Then, when you add a library share, choose a file share that’s hosted on a scale-out file server. This share should be VMM-managed and created exclusively for use by the library server.

Looking at this list, these server applications use a small number of large files, whereas traditional file sharing involves a considerable number of files of varying sizes. Something else to bear in mind: some users, such as information workers, have workloads that have a greater impact on performance. For example, operations like opening and closing files, creating new files, and renaming existing files, when performed by multiple users, affect performance. If a file share is enabled for continuous availability, it provides data integrity, but it also affects overall performance, because continuous availability requires that data be written through to disk to ensure integrity in the event of a failure of a cluster node in a Scale-Out File Server. Therefore, a user who copies several large files to such a file server can expect significantly slower performance on a continuously available file share.
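
If you want to check which shares have continuous availability turned on, or switch it off for a share that only serves information-worker traffic, here is a quick sketch with the built-in SMB cmdlets (the share name is a placeholder):

# List shares and whether continuous availability (transparent failover) is enabled
Get-SmbShare | Select-Object Name, Path, ContinuouslyAvailable

# Disable continuous availability on a share used for general file serving
Set-SmbShare -Name UserData -ContinuouslyAvailable $false -Force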

Microsoft Azure – How to Configure a VNet-to-VNet connection

In your infrastructure you will probably have a few virtual networks (VNets) as well as on-premises sites, and you can connect them to each other. Virtual network connectivity can be used simultaneously with multi-site VPNs, with a maximum of 10 VPN tunnels for a virtual network VPN gateway connecting to either other virtual networks or on-premises sites.

In my scenario I have two sites that we will create: one in the US and one in Europe (basically, two sites in two different regions). Connecting a virtual network to another virtual network (VNet-to-VNet) is very similar to connecting a virtual network to an on-premises site; the difference is a couple of steps, such as downloading the script created by Azure and running it on your on-premises gateway device, which don't apply here. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE.

Let’s create these VNets now.

Log in to the Azure Classic Portal (not the Azure Portal). In the lower left-hand corner of the screen, click New. In the navigation pane, click Network Services, and then click Virtual Network. Click Custom Create to begin the configuration wizard.

On the Virtual Network Details page, enter the VNET name and choose your location (region).

On the DNS Servers and VPN Connectivity page, you can enter your DNS server name and IP address. This does not create a DNS server; it is purely the name-resolution setting for this virtual network. Don’t tick any of the checkboxes; leave them as they are.

On the Virtual Network Address Spaces page, specify the address range that you want to use for your virtual network. In my case, for VNET-US it will be 10.20.0.0/16. These are the dynamic IP addresses (DIPs) that will be assigned to the VMs and other role instances that you deploy to this virtual network. It’s especially important to select a range that does not overlap with any of the ranges used on your on-premises network; if it does, you will get an error message informing you that you have chosen an overlapping network range. You can modify your subnet here and create other subnets for other services, but for now these are not required.
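
If you already have VNets in the subscription, you can list their address spaces first to spot overlaps. This is a sketch using the classic (Service Management) Azure PowerShell module; AddressSpacePrefixes is the property name I have seen on this cmdlet's output, so inspect the full object if your module version differs:

# List each existing VNet and its address prefixes (classic Azure module)
Get-AzureVNetSite | ForEach-Object {
    '{0}: {1}' -f $_.Name, ($_.AddressSpacePrefixes -join ', ')
}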

Click the check mark to create it. Then create another VNet following the steps above; I will choose 10.10.0.0/16 and North Europe for my VNET-EU.

Next we need to add local networks to these virtual networks. I will configure each VNet as a local network. Microsoft refers to a local network as the on-premises network.

In the lower left-hand corner of the screen, click New. In the navigation pane, click Network Services, and then click Virtual Network. Click Add Local Network.

On the Specify your local network details page, for Name, enter the name of a virtual network that you want to use in your VNet-to-VNet configuration. For this example, I’ll use VNET-EU, as we’ll be pointing VNET-US to this virtual network for our configuration.

For VPN Device IP Address, use any IP address for now. Typically you’d use the actual external IP address of a VPN device, and for VNet-to-VNet configurations you will use the gateway IP address. But given that the gateway hasn’t been created yet, I will use an IP address from my range for now (10.10.0.50) and then go back into these settings and update it with the corresponding gateway IP address once Azure generates it. Do the same for VNET-US and choose 10.20.0.50.

Next, I have to point each VNet to the other as its local network. Go to Networks, click the first VNet, and then click Configure. Scroll down to the connection section, tick the box for Connect to the Local Network, and choose the other VNet under Local Network.

In the virtual network address spaces section on the same page, click add gateway subnet, then click the save icon at the bottom of the page to save your configuration.

Repeat the step for VNET-US to specify VNET-EU as a local network.

The next step is to create a dynamic routing gateway for each VNet. On the Networks page, make sure the Status column for your virtual network shows Created.

In the Name column, click the name of your virtual network.

On the Dashboard page, notice that this VNet doesn’t have a gateway configured yet. You’ll see this status change as you go through the steps to configure your gateway. At the bottom of the page, click Create Gateway. You must select Dynamic Routing.

When the system prompts you to confirm that you want the gateway created, click Yes. Repeat the same steps for the other VNet. While your gateway is being created, notice that the gateway graphic on the page changes to yellow and says Creating Gateway. It typically takes about 15-20 minutes for a gateway to be created.
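
If you prefer to script this step, the classic Azure PowerShell module can create and poll the gateways as well. A sketch, assuming the VNets are named as above:

# Create a dynamic routing gateway for each VNet (classic / Service Management module)
New-AzureVNetGateway -VNetName VNET-US -GatewayType DynamicRouting
New-AzureVNetGateway -VNetName VNET-EU -GatewayType DynamicRouting

# Poll until the state changes from Provisioning to Provisioned
Get-AzureVNetGateway -VNetName VNET-US | Select-Object State
Get-AzureVNetGateway -VNetName VNET-EU | Select-Object State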

Once the gateways have been created, they are assigned IP addresses, and we need to go back and replace the temporary Local Network VPN device IPs we assigned earlier with these gateway IP addresses.
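
You can also grab the gateway IP addresses from PowerShell instead of the portal. A sketch with the classic module; VIPAddress is the property name I have seen on this cmdlet's output, so check the full object if yours differs:

# The public IP of each gateway, to plug back into the local network definitions
Get-AzureVNetGateway -VNetName VNET-US | Select-Object VIPAddress
Get-AzureVNetGateway -VNetName VNET-EU | Select-Object VIPAddress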

After everything has completed, we need to make sure both sides of each connection are using the same pre-shared key.

I will use PowerShell to complete this part. First, connect to your subscription.
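
With the classic (Service Management) Azure module, connecting looks roughly like this; the subscription name is a placeholder:

# Sign in and pick the subscription that holds the two VNets
Add-AzureAccount
Get-AzureSubscription | Select-Object SubscriptionName
Select-AzureSubscription -SubscriptionName 'My Subscription'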

Then check your VNet connections using Get-AzureVNetConnection.

Lastly, run:

Set-AzureVNetGatewayKey -VNetName VNET-EU -LocalNetworkSiteName VNET-US -SharedKey 123456789

Set-AzureVNetGatewayKey -VNetName VNET-US -LocalNetworkSiteName VNET-EU -SharedKey 123456789

(For a production environment, make sure you use a much stronger shared key.)
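
One simple way to generate a stronger random key and apply it to both sides, as a sketch:

# Generate a 32-character alphanumeric pre-shared key
$key = -join ((48..57) + (65..90) + (97..122) | Get-Random -Count 32 | ForEach-Object { [char]$_ })

Set-AzureVNetGatewayKey -VNetName VNET-EU -LocalNetworkSiteName VNET-US -SharedKey $key
Set-AzureVNetGatewayKey -VNetName VNET-US -LocalNetworkSiteName VNET-EU -SharedKey $key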

And you will see that the connection is successful.

Microsoft’s StorSimple

Standard storage management comes with a few challenges: capacity, disk types, tiering, provisioning, scalability. Planning and decision-making take a long time, and on top of that, backups and data archiving bring their own challenges. Many hours are spent just keeping everything the way we want it within our budget.

I first heard about StorSimple while watching one of the Azure MVA courses. The idea of deciding where to store data based on how often it is accessed is simple, and at the same time it makes business sense. Applying such simple logic will save a lot of effort and money in the long run.

So if you are tired of buying and installing storage and rebalancing workloads all the time, you really need to have a look at StorSimple. The same is true for managing data protection: off-site data protection is completely automated with StorSimple, so you have no worries there. And if you can’t perform DR because it’s too disruptive and takes too long, you need to look into non-disruptive, thin recovery with StorSimple.

How does it work?

StorSimple is made up of SSDs (split into two layers), HDDs, and the cloud. When a user saves data, it goes to the first SSD layer on the StorSimple appliance. From there it is deduplicated into the second SSD layer along with data coming from other users. Data that is accessed regularly remains active and stays “hot”. As users keep creating new data, previously created data becomes less frequently accessed and turns “cold”. This cold data is moved to the HDD layer, where it is also compressed. As more data cools down and moves to the HDD layer, that layer fills up and reaches its threshold, and the oldest data is then moved to the cloud. Once the data is in Azure, it is copied three times locally, and another three copies are geo-replicated to a secondary region. As you might guess, there will be a delayed response when a user requests data that is in the cloud, compared with data stored on the other layers; StorSimple pulls the data back from Azure and presents it to the user. All of this is managed from the Azure portal. The new StorSimple Manager is an Azure management portal that controls all functions of all StorSimple arrays across the enterprise. It provides a single, consolidated management point that uses the Internet as a control plane to configure all parameters of StorSimple 8000 arrays and to display up-to-the-minute status information in a comprehensive dashboard.
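
To make that flow a little more concrete, here is a purely illustrative sketch of the tiering decision; the thresholds are invented for the example, and this is not StorSimple's real (block-level, heat-map based) algorithm:

# Toy model of the hot -> cold -> cloud flow described above (not StorSimple code)
function Get-TierForData {
    param(
        [datetime]$LastAccess,       # when the data was last read or written
        [double]$HddUsedPercent      # how full the HDD tier currently is
    )
    $ageHours = ((Get-Date) - $LastAccess).TotalHours
    if ($ageHours -lt 24) { return 'SSD tier (hot, deduplicated)' }
    elseif ($HddUsedPercent -lt 80) { return 'HDD tier (cold, compressed)' }
    else { return 'Azure (oldest data tiered out to the cloud)' }
}

# Example: data untouched for a week while the HDD tier is 85 percent full
Get-TierForData -LastAccess (Get-Date).AddDays(-7) -HddUsedPercent 85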

HP Gen9 ProLiant Servers

The ProLiant Gen9 lineup is optimized for convergence, cloud, and software-defined environments with an architecture based on a pool of processing resources that is geography- and workload-agnostic. In addition, the ProLiant Gen9 servers leverage HP’s PCIe application accelerators as well as the company’s DDR4 SmartMemory technology.

HP ProLiant BL460c Gen9 blade server

The HP ProLiant BL460c Gen9 blade is built to offer performance, scalability, economics, and manageability through HP OneView for converged data centers, at the lowest cost and fastest time to value with the latest innovations. The blade server is also designed for a variety of configuration and deployment options to provide the flexibility of enhancing core IT applications with the right-sized storage for the right workload, resulting in lower total cost of ownership (TCO).

The HP ProLiant BL460c adapts to any demanding blades environment, including virtualization, IT and Web infrastructure, collaborative systems, cloud, and high-performance computing.

  • The HP ProLiant BL460c Gen9 Server Blade delivers up to a 70 percent performance increase with the new Intel Xeon E5-2600 v3 processors and the enhanced HP DDR4 SmartMemory at speeds up to 2,133 MHz.
  • New flexible internal storage controller options strike the right balance between performance and price, helping to lower overall TCO.
  • With the BL460c Gen9 Server Blade, you have standard internal USB 3.0, as well as future support for redundant microSD and optional M.2 support for a variety of system boot alternatives.

Specifications

  • Compute: Up to two Intel Xeon E5-2600 v3 Series, 4/6/8/10/12/14 /16 /18 cores
  • Support drives: Two hot-plug drive bays SATA/SAS/SSD
  • Storage: Standard HP Dynamic Smart Array B140i with choice of HP Smart HBA H244br or HP Smart Array P244br for performance or additional features
  • Storage: FBWC 1 GB DDR3-1,866 MHz, 72-bit wide bus at 14.9 GB/s on P244br
  • Storage battery: HP BLc 12 W Smart Storage Battery (Note: Comes standard with any SKU using the HP Smart Array P244br)
  • Networking: Choice of 2 x 10GbE, FlexFabric 10 GB, FlexFabric 10/20 GB
  • USB ports/SD/other: 1 x USB 3.0 (internal), 1 x microSD, future optional Dual microSD/future optional M.2 support
  • On-premise management: HP OneView and HP iLO Advanced for BladeSystem (Note: HP OneView support for ProLiant Gen9 in DL and BL servers only. Expected availability December 2014)
  • On-cloud management: HP Insight Online with enhanced mobile app
  • On-system management: HP iLO, SPP, HP SUM, Scripting Tools (Scripting Toolkit for Linux and Windows, HP Scripting Tools for Windows PowerShell, and HP RESTful Interface Tool)
  • Power and cooling: Enclosure-based (94 percent Platinum Plus)
  • Power and cooling industry compliance: ASHRAE A3 (limited configurations)
  • Power and cooling discovery services: Enclosure-based
  • Power and cooling location discovery services: Enclosure-based
  • System ROM: UEFI or Legacy
  • Warranty: Three-year parts, three-year labor, three-year onsite

HP ProLiant DL360

The HP ProLiant DL360 Gen9 Server delivers increased performance with the best memory and I/O expandability, packed in a dense 1U/2-socket rack design focused on reliability, serviceability, and continuous availability. It is well suited to:

  • Space-constrained server workloads: Such as those used by small- to medium-sized businesses (SMBs) and service providers
  • Dynamic workloads: Such as high-performance computing, databases, and virtualized private and public cloud; all of these workloads require a top-rate balance of performance, energy efficiency, and density
  • Compute-intensive applications: Such as Big Data, analytics, seismic discovery, and more
  • Low-latency and transactional applications: Such as those used in the financial services industry

Specifications:

  • Compute: Up to two Intel Xeon E5-2600 v3 series, 4/6/8/10/12/14/16/18 cores, PCIe 3.0, up to three available slot(s)
  • Memory: HP SmartMemory (24) DDR4, up to 2,133 MHz (768 GB max)
  • Storage:
    • Standard HP Dynamic Smart Array B140i
    • Choice of HP Flexible Smart Array or HP Smart Host Bus Adapter Controllers for performance or additional features
  • Flash-backed write cache (FBWC): 2 GB DDR3-1,866 MHz, 72-bit wide bus at 14.9 GB/s on P440ar
  • Battery: HP DL/ML/SL 96 W Smart Storage Battery
  • HP SmartDrives: 8 + 2 SFF/4 LFF max, HDD/SSD
  • Networking: 4 x 1GbE embedded + FlexibleLOM slot
  • VGA/serial/USB ports: Front VGA opt, rear VGA standard, and serial opt., 5 USB 3.0
  • GPU support: Two single-wide and active to 9.5″ in length, up to 150 W each
  • On Premise management: HP OneView and HP iLO Advanced
  • On Cloud management: HP Insight Online with enhanced mobile app
  • On System management: Changes in HP iLO, HP SUM, Intelligent Provisioning and scripting tools; plus the new UEFI and HP RESTful Interface Tool
  • Power and cooling:
    • Up to 94 percent efficient (Platinum Plus) with HP Flexible Slot FF
    • Hot plug fans with full N+1 redundancy, optional high performance fans
  • Industry compliance: ASHRAE A3 and A4, lower idle power
  • Power discovery services: Supported
  • Location discovery services: Optional
  • Form factor/Chassis depth: Rack (1U), 27.5” (SFF), 29.5” (LFF)
  • Serviceability—easy install rails: Standard
  • Warranty: 3/3/3

HP ProLiant DL380 Server

The HP ProLiant DL380 Server is designed to adapt to the needs of any environment, from large enterprise to remote office/branch office by offering enhanced reliability, serviceability, and continuous availability, backed by a comprehensive warranty.

The HP ProLiant DL380 Gen9 Server allows users to deploy a single platform to handle a wide variety of enterprise workloads, including:

  • Virtualization: Consolidate your server footprint by running multiple workloads on a single DL380
  • Big Data: Manage exponential growth in your data volumes—structured, unstructured, and semi-structured
  • Storage-centric applications: Remove bottlenecks and improve performance
  • Data warehousing/analytics: Find the information you need, when you need it, to enable better business decisions
  • Customer relationship management (CRM): Gain a 360-degree view of your data to improve customer satisfaction and loyalty
  • Enterprise resource planning (ERP): Trust the DL380 Gen9 to help you run your business in near real time
  • Virtual desktop infrastructure (VDI): Deploy remote desktop services to provide your workers with the flexibility they need to work anywhere, at any time, using almost any device
  • SAP: Streamline your business processes through consistency and real-time transparency into your end-to-end corporate data

Specifications

  • Compute: Up to two Intel® Xeon® E5-2600 v3 series, 4/6/8/10/12/14/16/18 cores; PCIe 3.0, up to six available slot(s)
  • Memory: HP SmartMemory (24) DDR4, up to 2,133 MHz (768 GB max)
  • Storage: Standard HP Dynamic Smart Array B140i, choice of HP Flexible Smart Array or HP Smart SAS HBA controllers
  • FBWC: 2 GB DDR3-1,866 MHz, 72-bit wide bus at 14.9 GB/s on P440ar
  • Battery: HP DL/ML/SL 96 W Smart Storage Battery
  • HP SmartDrives: 24 + 2 SFF/12 + 3 LFF max, HDD/SSD
  • Networking: 4 x 1GbE Embedded + choice of FlexibleLOM + Standup
  • VGA/serial/USB ports/SD: Front VGA opt, rear VGA, and serial standard, 6 USB 3.0, microSD
  • GPU support: Single-/double-wide and active/passive up to 10.5″ (3)
  • On Premise management: HP OneView and HP iLO Advanced
  • On Cloud management: HP Insight Online with enhanced mobile app
  • On System management: Changes in HP iLO, HP Smart Update Manager (HP SUM), Intelligent Provisioning and scripting tools; plus the new UEFI and HP RESTful Interface Tool
  • Power and cooling:
    • 94 percent efficient with Flexible Slot FF
    • Hot plug fans with full N + 1 redundancy, optional high performance fans
  • Industry compliance: ASHRAE A3 and A4, lower idle power, and ENERGY STAR
  • Power discovery services: Supported
  • Location discovery services: Optional
  • Form factor/Chassis depth: Rack (2U), 26.75″ (SFF), 28.75″ (LFF)
  • Serviceability—easy install rails: Standard
  • Warranty: 3/3/3

HP ProLiant DL160

The HP ProLiant DL160 Gen9 Server offers users a balance of performance, storage, reliability, manageability, and efficiency in a compact chassis to meet the needs of SMBs and service providers. In addition, the DL160 Gen9 Server is designed to handle a wide range of deployments, including general-purpose IT infrastructure and emerging New Style of IT workloads such as cloud and Big Data in distributed computing environments.

It is equipped with 16 DIMM slots, a 94 percent efficient power supply, ASHRAE A3/A4 compliance (for higher ambient temperature support), a dense 2P/1U design, and optional FlexibleLOM capability, allowing service providers to minimize the operational costs of energy and space, which is ideal for today’s hyperscale environments.

  • Compute: Up to 2 Intel Xeon E5-2600 v3 Series, 4/6/8/10/12 cores PCIe 3.0, up to 3 available slot(s)
  • Memory: HP SmartMemory (16) DDR4, up to 2,133 MHz (512 GB max at launch), support for NVDIMM (third party)
  • Storage:
    • Standard HP Dynamic Smart Array B140i, optional HP Smart Array Controllers, and HP Smart HBAs via PCIe stand-up cards
  • HP SmartDrives: 8 SFF/4 LFF max, HDD/SSD
  • Networking: Embedded 2x 1GbE, optional FlexibleLOM slot on riser1
  • VGA/USB ports/SD: Rear video, 3x USB 3.0 and 1x USB 2.0 (std), 1x USB 3.0 (opt of SFF models), microSD
  • On Premise management: HP OneView and HP iLO Advanced
  • On Cloud management: HP Insight Online with enhanced mobile application
  • On System management: HP iLO, HP SUM, Intelligent Provisioning and scripting tools; plus the new UEFI and HP RESTful Interface Tool
  • Power and cooling:
    • Up to 94 percent efficient (Platinum) 900 W RPS, 550 W multi-output
    • Hot swap fans with optional redundancy
  • Industry compliance: ASHRAE A3 and A4, ENERGY STAR
  • Form factor/Chassis depth: Rack (1U), 23.9″ (SFF), 23.9″ (LFF)
  • Serviceability—easy install rails: Standard
  • Warranty: 3/1/1

HP ProLiant DL180

The HP ProLiant DL180 is designed with an optimal combination of performance and affordability, and its broad range of storage drive configurations and options offers users the flexibility and scalability needed for the varied demands of 2U rack deployments and applications.

The DL180 is equipped with a range of storage configurations and options as well as storage controllers. This allows the server to support a variety of storage workloads with small to medium databases, file serving, Windows storage, and even demanding Big Data applications such as Apache Hadoop, which require the right mix of compute and storage.

Specifications

  • Compute: Up to 2 Intel® Xeon® E5-2600 v3 Series, 4/6/8/10/12 Cores, PCIe 3.0, up to 6 available slot(s)
  • Memory: HP SmartMemory (16) DDR4, up to 2133 MHz (512 GB max)
  • Storage: Standard HP Dynamic Smart Array B140i, optional HP Smart Array Controllers, and HP Smart HBAs via PCIe stand-up cards
  • Battery: HP DL/ML/SL 96 W Smart Storage Battery to support the standup controllers
  • HP SmartDrives: 16 SFF/12 LFF max, HDD/SSD
  • Networking: Embedded 2x 1GbE, optional FlexibleLOM slot on riser
  • VGA/Serial/USB Ports/SD: 1 VGA, 1 Serial, 6 USB 3.0, 1 microSD
  • GPU Support: Single-Wide and Active (1)
  • On Premise management: HP OneView and HP iLO Advanced
  • On Cloud management: HP Insight Online with enhanced mobile application
  • On System management: HP iLO, HP SUM, Intelligent Provisioning and scripting tools; plus the new UEFI and HP RESTful Interface Tool
  • Power and cooling:
    • Up to 94 percent efficient (Platinum), 550 W multi-output, 900 W RPS
    • Hot swap fans with optional redundancy
  • Industry compliance: ASHRAE A3 and A4, ENERGY STAR®
  • Location Discovery Services: Optional
  • Form factor/Chassis Depth: Rack (2U), 23.9″ (SFF), 23.9″ (LFF)
  • Serviceability—easy install rails: Standard
  • Warranty: 3/1/1

HP ProLiant ML350

The HP ProLiant ML350 Gen9 right-fit server delivers a class-leading combination of performance, availability, expandability, manageability, reliability, and serviceability. It also offers standard HP Integrated Lights-Out (iLO) capabilities for simplified IT infrastructure management, 24 slots for DDR4 HP SmartMemory delivering up to 14 percent greater performance, support for a maximum of 48 drives, and an embedded 4x 1GbE NIC, making it ideal for growing businesses.

The HP ProLiant ML350 Gen9 Server leverages the latest Intel Xeon E5-2600 v3 processors with up to a 70 percent performance gain, as well as additional support for 12 Gb/s serial-attached SCSI (SAS) and a broad range of graphics and compute options. The HP ProLiant ML350 Gen9 Server can be managed in any IT environment by automating the most essential server lifecycle management tasks: deploy, update, monitor, and maintain.

Specifications

  • Compute: Up to two Intel Xeon E5-2600 v3 series, 4/6/8/10/12/14/16/18 cores. PCIe 3.0, up to nine available slot(s)
  • Memory: HP SmartMemory (24) DDR4, up to 2,133 MHz (768 GB max.)
  • Storage: Standard HP Dynamic Smart Array B140i; choice of HP Flexible Smart Array or HP Smart Host Bus Adapter Controllers for performance or additional features
  • HP Smart Drives: 48 SFF/24 LFF max., hard disk drive (HDD)/solid-state drive (SSD)
  • Networking: 4x 1GbE embedded + Standup
  • VGA/serial/USB ports/SD: Front VGA opt., rear VGA and Serial Standard, eight USB, and one microSD
  • GPU support: Single/double-wide and active/passive, up to 10.5″ (4)
  • On premise management: HP Insight Control and HP iLO Advanced
  • On cloud management: HP Insight Online with enhanced mobile application
  • On system management: Changes in HP iLO, HP SUM, Intelligent Provisioning and scripting tools; plus the new UEFI and HP RESTful Interface Tool
  • Power and cooling: Up to 94 percent efficient (Platinum Plus) with Flexible Slot FF
  • Industry compliance: ASHRAE A3 and A4, lower idle power, and ENERGY STAR®
  • Form factor/Chassis depth: Tower or Rack (5U), 28.5″ (SFF), 28.5″ (LFF)
  • Warranty: 3/3/3

HP ProLiant XL230a Server/HP Apollo 6000

The HP Apollo 6000 addresses the growing demand for high-performance computing (HPC), and its efficiency gives users the flexibility that leads to savings:

  • Per core: The ProLiant XL220a Server tray has two 1P servers per tray with Intel Xeon E3-1200 v3 series processors with up to four cores, increasing performance per core up to 35 percent for single threaded applications over a 2P blade.
  • The ProLiant XL230a Gen9 Server tray has one 2P server per tray with high performance Intel Xeon E5-2600 v3 series processors with up to 70 percent more processor performance and up to 36 percent more efficiency than the previous generation.
  • Per watt: The HP Apollo 6000 Power Shelf supports up to six chassis, and the HP Advanced Power Manager dynamically monitors and manages power to save on energy.
  • Per square foot: With 10 slots for server, storage, and/or accelerator trays per 5U chassis, you can fit up to 160 servers in one 48U rack, using 60 percent less space than competing  blades.
  • With flexibility: The HP Apollo 6000 System accommodates up to 20 servers in the space of five traditional servers (5U), powering up to 120 servers with a single power shelf. The HP Innovation Zone also allows for FlexibleLOM options to fit your workload needs.
  • With savings: The HP ProLiant XL220a Server is a great fit for single-threaded workloads such as electronic design automation (EDA), while the new HP ProLiant XL230a Server is a great fit for workloads such as seismic processing or virtualized hosting. Take advantage of compute, storage, and accelerator tray options as they become available in the same modular HP Apollo a6000 Chassis.

Specifications

  • Form factor:
    • 5U (H) x 4.33 cm (W) x 70.79 cm (D)
    • 5U (H) x 1.70 in (W) x 27.87 in (D)
  • Processor family: Intel Xeon E5-2600 v3 series
  • Processor cores available: 6/8/10/12/14/16
  • Chipset: Intel C612 series chipset
  • Number of processors: 2
  • Max processor speed: 2.6 GHz
  • Drive description: 4 SFF SAS/SATA/SSD
  • Supported drives: Hot-plug 2.5-inch SAS/SATA/SSD
  • Memory slots: 16 DIMM slots
  • Memory max: 512 GB (16 x 32 GB)
  • Memory type, ECC: DDR4; R-DIMM; 2133 MT/s
  • Network options: Network module supporting various FlexibleLOMs: 1 GbE, 10 GbE, and/or InfiniBand
  • Storage controllers:
    • One HP Dynamic Smart Array B140i SATA controller
    • HP Smart Array P430/2G and 4G controller
    • HP H220 Host Bus Adapter
  • Expansion slots: One PCIe x16 Gen3, half-height
  • USB ports/SD:  One serial/USB/video port, MicroSD
  • Management
    • HP iLO (Firmware: HP iLO 4)
    • Advanced Power Manager
  • OS support:
    • Microsoft Windows Server
    • Red Hat Enterprise Linux
    • SUSE Linux Enterprise Server

HP Apollo 8000 / HP ProLiant XL730f Server

The HP Apollo 8000 System allows users to take advantage of higher-performance components. In addition, heat extraction is closer to the processor to improve computational performance capabilities. This allows extremely dense configurations that offer hundreds of teraflops of compute power in a very compact space with up to 80 kW of power (4 x 30A 3ph 480AC) and support for up to 144 servers per rack.

Its liquid cooling enables higher-performance components and allows users to reuse the heat transferred to the water for facility heating, which reduces costs and carbon footprint. Other HP innovations include a power distribution system that exceeds Energy Star Platinum certification, and the HP Apollo 8000 intelligent Cooling Distribution Unit (iCDU) Rack that’s more capable than competing solutions.

HP Apollo f8000 Rack Specifications

  • Server: Each rack supports up to 72 HP ProLiant XL730f Gen9 Server trays (two nodes per tray)
  • Networking: Each rack supports a total of eight HP InfiniBand switches
  • Power: 80 kW input power per rack ships standard with N+1 or N+N redundancy support depending on configuration of the servers
  • Input: 380–415 VAC for international standards and 480 VAC for North American standards (4 x 30A power cords per rack)
  • Management: HP Apollo 8000 System Manager
  • HP iLO Management Engine (iLO 4 v2.00)
  • Rack level HP iLO network consolidation
  • Typical configuration: 72 HP ProLiant XL730f Gen9 Server trays and eight HP InfiniBand switches, 16 Ethernet SFP+ cable kits, associated rack plumbing kit, and utility module (includes HP Apollo 8000 System Manager, 2 x 40 kW power shelves)
  • Weight: 4,700 pounds (2,132 kg) max; 2,914 pounds (1,322 kg) max with no server trays
  • Dimensions (WxDxH): 24 in x 56.18 in x 94 in (607 mm x 1427 mm x 2,382 mm)

HP Apollo 8000 iCDU Rack Specifications

  • Cooling: An iCDU rack supports a maximum of 320 kW or up to four HP Apollo f8000 racks
  • Power Input: 380–415 VAC for international standards and 480 VAC for NA standards (1 x 30A power cord per rack)
  • Management: HP Apollo 8000 System Manager
  • Redundancy: Supports N, N+N redundancy
  • Configuration
    • Each iCDU rack ships with one CDU at the bottom of the rack and associated rack plumbing kit. Also, the iCDU rack is configurable to add 48-port HP 5900 Ethernet switches.
    • Secondary plumbing kit is ordered one for every three racks (f8000 and iCDU) in the solution.
    • Optional IT equipment may be added to the top half of iCDU provided power and cooling requirements for additional IT are supplied
  • IT equipment: 26U of standard 19” rack space for network switches or server nodes
  • Weight: 2,188 pounds (993 kg) with no hose kits or IT equipment installed
  • Dimensions (WxDxH): 24 in x 57 in x 94 in (607 mm x 1427 mm x 2,382 mm)

HP ProLiant XL730f Gen9 Server Specifications

  • Server: Each XL730f tray comes standard with two 2P servers
  • CPU: Intel Xeon E5-2600 series, E5-2695v3, E5-2690v3, E5-2680v3, E5-2670v3, and E5-2683v3
  • Memory: 16 DIMMs per server, max 256 GB HP DDR4 SmartMemory 2,133 MT/s
  • Network: Integrated NIC: Single port 1 GbE per server
  • InfiniBand Adaptor Kit: Single ConnectX-3 Pro InfiniBand FDR port per server
  • Storage: One small form factor (SFF) SSD per server; supports 80 GB, 120 GB, 240 GB, 480 GB, and 1.6 TB SSD
  • Boot: SSD and network
  • Minimum configuration: Two CPUs per server, single InfiniBand FDR adaptor, two DIMMs per CPU (up to eight DIMMs max)
  • Power: Max of 1,200 W of HVDC to 12V conversion per ProLiant XL730f Gen9 Server tray
  • Management:
    • HP Apollo 8000 System Manager
    • HP iLO Management Engine (iLO 4) – dedicated iLO network support
    • HP Advanced Power Manager
    • HP Insight Cluster Management Utility
  • OS: RHEL, SLES, and CentOS