Life after PCIe. Intel gang backs Compute Express Link (CXL)

Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft are working together on Compute Express Link (CXL), a new high-speed CPU-to-device and CPU-to-memory interconnect technology.

The CXL consortium is the fourth high-speed CPU-to-device interconnect group to spring into existence. The others are the Gen-Z Consortium, OpenCAPI and CCIX. They are all developing open post-PCIe CPU-to-accelerator networking and memory pooling technology.

Unlike the others, CXL has the full backing of Intel. Blocks & Files suggests that the sooner CCIX, Gen-Z and OpenCAPI combine, the better for them. An Intel steamroller is coming their way and a single body will be harder to squash.

CXL

The CXL group is incorporating as an open standards body; a v2.0 spec is in the works and “efforts are now underway to create innovative usages that leverage CXL technology.”

CXL benefits include resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. We might expect CXL-compliant chipsets next year and servers and accelerators using them to roll out in 2020.

The CXL consortium aims to accelerate workloads such as AI and machine learning, rich media services, high performance computing and cloud applications. CXL does this by maintaining memory coherency between the CPU memory space and memory on attached devices.

Devices include GPUs, FPGAs, ASICs and other purpose-built accelerators, and the technology is built upon PCIe, specifically the PCIe 5.0 physical and electrical interface. That implies up to 128GB/s using 16 lanes.
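That 128GB/s figure is straightforward arithmetic: PCIe 5.0 signals at 32GT/s per lane, so sixteen lanes give roughly 64GB/s in each direction, or about 128GB/s bidirectionally. Here is a minimal back-of-the-envelope check in Python; the bidirectional rounding is our own illustration, not a figure from the CXL spec.

```python
# Back-of-the-envelope check of the "up to 128GB/s over 16 lanes" claim.
# PCIe 5.0 runs at 32GT/s per lane with 128b/130b line encoding;
# the per-direction/bidirectional rounding below is illustrative only.

RAW_RATE_GT_PER_S = 32    # PCIe 5.0 transfer rate per lane
ENCODING = 128 / 130      # 128b/130b encoding efficiency
LANES = 16                # a full x16 link

# Usable payload bandwidth in one direction: GT/s -> Gb/s -> GB/s.
one_way_gb_s = RAW_RATE_GT_PER_S * ENCODING * LANES / 8
print(f"One direction:   ~{one_way_gb_s:.0f} GB/s")        # ~63 GB/s
print(f"Both directions: ~{2 * one_way_gb_s:.0f} GB/s")     # ~126 GB/s, quoted as 128GB/s
```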

The spec covers an IO protocol, a memory protocol to allow a host to share memory with an accelerator, and a coherency interface.
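In CXL 1.0 those three pieces are named CXL.io, CXL.cache and CXL.mem. The Python sketch below is purely illustrative of how the roles divide; the class and field names are our own, not anything defined by the spec.

```python
# Illustrative summary of the three CXL sub-protocols and their roles.
# The dataclass and its naming are our own, not part of the CXL spec.
from dataclasses import dataclass

@dataclass(frozen=True)
class CxlProtocol:
    name: str
    role: str

PROTOCOLS = (
    CxlProtocol("CXL.io",    "PCIe-style I/O path: discovery, configuration, DMA"),
    CxlProtocol("CXL.cache", "lets an accelerator coherently cache host memory"),
    CxlProtocol("CXL.mem",   "lets the host access memory attached to the device"),
)

for p in PROTOCOLS:
    print(f"{p.name:<10} {p.role}")
```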

V1.0 of the CXL specification has been ratified by the group and applies to CPU-device linkage and memory coherency for data-intensive applications. It is said to be an open specification, aimed at encouraging an ecosystem for data centre accelerators and other high-speed enhancements.

If you join the consortium, you get a copy of the spec. The members listed above form the founding promoter group. There is a CXL website with a contact form.

PCIe post-modernism

The Gen-Z Consortium is working on pooled memory shared by processors, accelerators and network interface devices. Cisco, Dell EMC, Google, HPE, Huawei, and Microsoft are among dozens of members. Notable absentees include Alibaba, Facebook and Intel.

OpenCAPI (Open Coherent Accelerator Processor Interface) was set up in 2016 by AMD, Google, IBM, Mellanox and Micron. Other members include Dell EMC, HPE, Nvidia and Xilinx. Intel was not a member and OpenCAPI was viewed as an anti-Intel group driven by and supporting IBM ideas. Check out the OpenCAPI website for more information.

Gen-Z and OpenCAPI have been perceived as anti-Intel in the sense that they want an open CPU-accelerator memory pooling and linkage spec, rather than a field dominated by Intel’s own QuickPath Interconnect (QPI).

The CCIX (Cache Coherent Interconnect for Accelerators) group was founded in January 2016 by AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and Xilinx – but not Nvidia or Intel. Its goal is to devise a cache coherent interconnect fabric to exchange data and share main memory between CPUs, accelerators such as FPGAs and GPUs, and network adapters. CCIX also has an anti-Intel air about it.

All this implies that the CXL group is primarily an Intel-driven grouping, set up in opposition to CCIX, Gen-Z and OpenCAPI.