Seagate and ZeroPoint Technologies demonstrated hardware-accelerated compression within a CXL memory tier at the OCP Global Summit in San Jose, CA.
How could disk drive supplier Seagate utilise ZeroPoint’s HW-accelerated memory compression technology within Compute Express Link (CXL) memory tiers? Is it envisaging a JBOD chassis, full of Exos disk drives, with a CXL memory tier in its controller?
It turns out that Seagate is developing a Composable Memory Appliance (CMA), and it ran a Multi-Headed Composable Memory Appliance Next-Gen session at the OCP summit. We understand that this refers to a rack-mounted CMA appliance connected to, and supporting, multiple accessing endpoints simultaneously, using CXL memory technology.
Seagate has developed a Composable Fabric Manager (CFM) for its CMA device, with code available on GitHub. This “provides a client interface for interacting with a Composable Memory Appliance. It provides a north-side (frontend) OpenAPI interface for client(s) and a south-side (backend) Redfish interface to manage available Composable Memory Appliances (CMAs) and CXL Hosts.”
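To make the two-sided arrangement concrete, here is a minimal sketch of a client for the CFM’s north-side OpenAPI interface. The endpoint paths, host name, and class are illustrative assumptions, not the actual CFM API; the south-side Redfish traffic to the appliances would be handled by the CFM itself and remain invisible to such a client.

```python
# Hypothetical sketch of a CFM frontend client. Endpoint paths are
# illustrative assumptions, not Seagate's actual CFM OpenAPI spec.

class CFMClient:
    """Builds requests against the CFM north-side (frontend) OpenAPI interface.

    The CFM in turn speaks Redfish to the memory appliances (south side);
    that traffic is not visible to this client.
    """

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def appliances_url(self) -> str:
        # List the Composable Memory Appliances (CMAs) the CFM manages.
        return f"{self.base_url}/cfm/v1/appliances"

    def blades_url(self, appliance_id: str) -> str:
        # Each appliance contains multiple memory blades.
        return f"{self.base_url}/cfm/v1/appliances/{appliance_id}/blades"

client = CFMClient("http://cfm.example:8080")
print(client.blades_url("cma0"))
```

A real client would issue HTTP GETs against such paths; the point here is only the split between the client-facing frontend and the Redfish backend.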

A Composable Memory Fabric Management Software and APIs Architecture document, authored by Seagate staff, is available on the OCP website. This “defines the minimum APIs used for managing composable memory fabric system backed by OCP-compliant composable memory appliance[s].” The CMA is “a scalable memory device based on CXL technology.”
The document says: “A memory appliance contains multiple memory blades, and each blade can individually connect to multiple CXL host servers.” We understand the CMA acts as a CXL Type 3 memory expander, allowing connected servers to access the shared, scalable memory resources over a CXL fabric rather than relying solely on local DRAM.
The ZeroPoint DenseMem technology is used to create compressed memory tiers, boosting capacity by 1.85x to 2.25x and so cutting DRAM costs.
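As a back-of-envelope illustration of what those ratios mean for capacity, the snippet below applies the quoted 1.85x–2.25x figures to a notional CXL tier. The 512 GiB module size is an assumption for the arithmetic, not a Seagate or ZeroPoint figure.

```python
# Effect of DenseMem-style in-line compression on effective CXL capacity.
# The 1.85x-2.25x ratios are from the article; the 512 GiB raw DRAM
# figure behind the CXL Type 3 device is an illustrative assumption.

physical_gib = 512
for ratio in (1.85, 2.25):
    effective_gib = physical_gib * ratio
    # DRAM that would otherwise be needed to reach the same effective capacity
    saved_gib = effective_gib - physical_gib
    print(f"{ratio}x -> {effective_gib:.0f} GiB effective, "
          f"{saved_gib:.0f} GiB of DRAM avoided")
```

At 2.25x, a 512 GiB device presents roughly 1,152 GiB of effective capacity, which is where the DRAM cost saving comes from.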
The document does not specify the persistent storage devices to which the memory blades are attached, be they Seagate Nytro SSDs or Exos HDDs. We have asked Seagate and it replied: “As this is not a product release but more of a research effort exploring the possibilities around Memory disaggregation there is nothing much we can add at this time on the top of what was issued in Zeropoint press release”.

ZeroPoint says its DenseMem technology “increases effective CXL Type 3 Device memory capacity by a factor of 2-3x through transparent, in-line memory compression/decompression with minimal impact to latency and bandwidth. DenseMem is available as an area and power efficient drag and drop IP block portable across the latest process nodes.”
In other words, it gets incorporated into a CMA device controller chip.
DenseMem can be integrated into the CXL Type 3 device SoC, between the CXL controller and memory controller logic blocks. It provides a compressed memory tier instantiated automatically inside a CXL Type 3 device, featuring real-time compression/decompression coupled with compaction, and transparent memory management. Operations take place at main memory speed and throughput.
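The data path described above can be sketched in software, purely as a model: writes are compressed in-line on their way to the backing memory, reads are decompressed, and the host-visible interface remains plain fixed-size pages. DenseMem itself is a hardware IP block; this Python sketch, using `zlib` as a stand-in compressor, only illustrates the transparency of the tier, not its performance characteristics.

```python
# Model of a transparent compressed memory tier in the spirit of DenseMem,
# sitting between the CXL controller and the memory controller. zlib stands
# in for the hardware compressor; page size is an assumption.
import zlib

PAGE = 4096  # bytes per host-visible page (assumed)

class CompressedTier:
    def __init__(self):
        self._store: dict = {}  # page number -> compressed bytes

    def write(self, page_no: int, data: bytes) -> None:
        assert len(data) == PAGE
        self._store[page_no] = zlib.compress(data)  # in-line compression

    def read(self, page_no: int) -> bytes:
        return zlib.decompress(self._store[page_no])  # in-line decompression

    def compression_ratio(self) -> float:
        raw = len(self._store) * PAGE
        packed = sum(len(v) for v in self._store.values())
        return raw / packed

tier = CompressedTier()
tier.write(0, b"A" * PAGE)           # a highly compressible page
assert tier.read(0) == b"A" * PAGE   # round-trip is transparent to the host
print(f"ratio on this page: {tier.compression_ratio():.1f}x")
```

The host sees ordinary reads and writes; only the device-internal storage is compressed, which is what “transparent memory management” amounts to.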
As we understand it, an external JBOF would currently send data to a GPU server via NVMe and RDMA. By using this CMA tech, a Seagate JBOF, or JBOD, could instead communicate with the GPU server using CXL memory pooling and sharing, which would have lower latency. Bandwidth would be raised by having multiple CMA devices.
The loading of data from the CMA’s persistent storage into its CXL memory would effectively be caching, we assume, with large items sharded across drives to increase bandwidth at this level.
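The assumed sharding step can be sketched as follows. A large item is split into contiguous shards, one per drive, so that a cache fill into CXL memory can draw bandwidth from all drives at once; the shard count and sizes here are illustrative assumptions.

```python
# Sketch of sharding a large item across several drives so a cache fill
# into the CMA's CXL memory can stream from all of them in parallel.
# Shard count and item size are illustrative assumptions.

def shard(data: bytes, n_drives: int) -> list:
    """Split data into n_drives contiguous shards, one per drive."""
    size = -(-len(data) // n_drives)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n_drives)]

def reassemble(shards: list) -> bytes:
    # In hardware all drives would stream their shard concurrently;
    # here we simply concatenate in order.
    return b"".join(shards)

blob = bytes(range(256)) * 40        # a 10,240-byte "large item"
shards = shard(blob, 4)
assert reassemble(shards) == blob    # lossless round-trip
print([len(s) for s in shards])      # four shards of 2,560 bytes each
```

With four drives each holding a quarter of the item, the fill-time bandwidth is, in the ideal case, four times that of a single drive.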