Sapphire Rapids Xeons with HBM could let DAOS reign

Coming gen-4 Xeon processors will use high-bandwidth memory (HBM) to speed processing, with Intel also announcing Ponte Vecchio GPU validation, Ethernet HPC use, and commercial support for DAOS object storage.

Intel announced these moves at the 2021 International Supercomputing Conference (ISC).

Trish Damkroger, VP and GM of High Performance Computing at Intel, presented the Sapphire Rapids news at ISC, saying: “Intel is the driving force behind the industry’s move toward exascale computing, and the advancements we’re delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realisation.”

It’s arguable whether Intel is the driving force behind exascale computing — AMD supplies processors for the El Capitan exascale supercomputer, and Fujitsu’s Arm-based A64FX processors power the Fugaku machine. But Intel is certainly driving hard for exascale-class computing.


Catch-up mode

Intel is pushing ahead with HBM, saying that the next Xeon SP generation — gen-4, following the gen-3 Ice Lake systems — will be called Sapphire Rapids. HBM is a design that stacks DRAM dies on a base logic layer and links them to the CPU through an interposer, bypassing the traditional memory channel/DIMM-socket architecture. That architecture is limited by the number of memory channels per CPU — eight, in the case of gen-3 Ice Lake Xeons.

High Bandwidth Memory scheme.

Sapphire Rapids Xeons will get a substantial boost in memory bandwidth by using HBM, so memory bandwidth-sensitive workloads should execute more quickly. Intel says: “Users can power through workloads using just High Bandwidth Memory or in combination with DDR5.”

The US Department of Energy’s Aurora supercomputer at Argonne National Laboratory and the Crossroads supercomputer at Los Alamos National Laboratory will both use HBM-assisted Sapphire Rapids Xeons.

However, capacity and performance numbers were absent from the Intel pitch. Intel does not make DRAM, so it will have to buy in chips — possibly HBM2E chips from SK Hynix — and perhaps the interposer, and assemble the HBM packages itself. An eight-channel gen-3 Ice Lake socket has roughly 200GB/sec of memory bandwidth; Sapphire Rapids with HBM might deliver a 3x to 6x advance on that. We will have to wait and see. Intel will launch the HBM variant of Sapphire Rapids in late 2022, after the main Sapphire Rapids launch.
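As a back-of-envelope check on those figures, here is a quick sketch. The per-channel DDR4-3200 rate and the 3x–6x multipliers are illustrative assumptions, not Intel-published Sapphire Rapids numbers:

```python
# Rough memory-bandwidth arithmetic (illustrative figures only; the
# per-channel rate and the 3x-6x multipliers are assumptions, not
# Intel-published Sapphire Rapids numbers).

ddr4_per_channel_gbps = 25.6   # DDR4-3200: 64-bit channel at 3200 MT/s
channels = 8                   # memory channels per gen-3 Ice Lake socket

baseline = channels * ddr4_per_channel_gbps
low, high = 3 * baseline, 6 * baseline

print(f"Ice Lake socket baseline: ~{baseline:.0f} GB/s")
print(f"A 3x-6x HBM uplift would mean ~{low:.0f}-{high:.0f} GB/s")
```

That puts the baseline at roughly 205GB/sec per socket, in line with the ~200GB/sec figure above, and an HBM uplift somewhere in the 600–1,200GB/sec range.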

The HBM could be designed for use as a level-4 cache, layered above a pool of DDR5 main memory. How it might interoperate with Optane persistent memory is an interesting question. Conceivably we could see a memory hierarchy comprising the CPU’s caches, an L4 HBM cache, an Optane PMem cache (L5) and then DDR5 DRAM. Would Optane PMem even be needed if an L4 HBM cache sat above main, socketed DRAM?

PCIe gen-5

Sapphire Rapids will also support PCIe gen-5 — double the speed of PCIe gen-4 — and the CXL 1.1 interconnect, which will accelerate storage drive IO as well as increase memory capacity. When such systems become available for enterprise on-premises use, users should see a significant performance boost over gen-3 Xeon servers using the PCIe gen-4 bus, with PCIe gen-3 bus systems blown away.
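The doubling falls straight out of the per-lane signalling rates. A minimal sketch of the theoretical per-direction maxima for a x16 link (real-world throughput is lower once protocol overheads are counted):

```python
# Theoretical per-direction bandwidth of a x16 PCIe link by generation.
# PCIe gen-3/4/5 all use 128b/130b line encoding, so each generation's
# doubled signalling rate doubles the usable bandwidth.

ENCODING = 128 / 130   # 128b/130b line-code efficiency
LANES = 16

for gen, gt_per_s in [(3, 8), (4, 16), (5, 32)]:
    gb_per_s = gt_per_s * ENCODING * LANES / 8   # bits -> bytes
    print(f"PCIe gen-{gen} x16: ~{gb_per_s:.1f} GB/s per direction")
```

That works out to roughly 16, 32 and 63GB/sec per direction for gen-3, gen-4 and gen-5 respectively.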

The Ponte Vecchio Intel GPU will also use HBM. Intel says Ponte Vecchio will be available in an OCP Accelerator Module (OAM) form factor and in subsystems serving the scale-up and scale-out capabilities required for HPC applications.

Intel claims its Ethernet 800 series network adapters and switches, with 100Gbit/sec speed, will match InfiniBand performance at much lower cost.

Let DAOS reign

DAOS — Distributed Asynchronous Object Storage — is Intel’s Optane-using, open-source object store for high-performance computing, and a successor, for Intel, to the Lustre parallel file system. Intel will now provide commercial support for DAOS to partners as an L3 support offering, enabling partners to deliver turnkey storage solutions by combining it with their own services.

Such partners include HPE, Lenovo, Supermicro, Brightskies, Croit, Nettrix, Quanta, and RSC Group.