Analysis. Micron’s decision, announced yesterday, to scrap 3D XPoint development and sell its Lehi fab, which makes XPoint chips for itself and Intel, has thrown a giant spanner in the Optane works. Does storage-class memory have a future?
3D XPoint-based Optane Persistent Memory (PMEM) enlarges effective memory capacity by adding a slower but larger and cheaper Optane tier alongside a processor’s DRAM. This shortens application execution time by reducing the number of IOs to much slower storage. The technology works but is difficult to engineer, which is why it has taken Intel much of the five-plus years since 3D XPoint’s 2015 debut to build a roster of enterprise software firms that support Optane PMEM.
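To see why the software lift is heavy, consider what a program has to do to treat PMEM as memory rather than as a disk. The sketch below is illustrative only: it assumes a Linux host with the Optane namespace exposed through a DAX-mounted filesystem at a hypothetical /mnt/pmem path, and uses standard POSIX calls to map the media directly, leaving the application rather than the OS page cache responsible for making data durable.

```c
/* Minimal sketch: mapping a file on a DAX-mounted PMEM filesystem so
 * loads and stores go straight to the Optane media, bypassing the page
 * cache. The path and region size are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMEM_PATH   "/mnt/pmem/appdata"      /* assumed DAX mount point */
#define REGION_SIZE (64UL * 1024 * 1024)

int main(void)
{
    int fd = open(PMEM_PATH, O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* With DAX, this mapping is backed directly by persistent memory. */
    char *pmem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary store instructions update the persistent tier... */
    strcpy(pmem, "record persisted in the Optane tier");

    /* ...but the application must flush to guarantee durability --
     * one reason PMEM needs new system and application software. */
    if (msync(pmem, REGION_SIZE, MS_SYNC) != 0) perror("msync");

    munmap(pmem, REGION_SIZE);
    close(fd);
    return 0;
}
```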
Take-up has not been helped by Intel’s treatment of Optane as a proprietary technology with a closed interface to specific Xeon CPUs. There is no Optane PMEM support for AMD or Arm CPUs; such support would enlarge the Optane PMEM market, but at the cost of Xeon processor sales.
Micron has decided that, in the wake of the rise of GPU-style workloads such as graphics, gene sequencing, AI and machine learning, the overarching need is for more memory bandwidth from CPUs, GPUs and other accelerators to a shared and coherent memory pool. This is different from the Optane presumption that CPUs are limited by memory capacity.
The Compute Express Link (CXL) is the industry-standard way to link processors of various kinds to a shared memory pool. Micron has said it supports CXL and will develop memory products that use it.
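Part of CXL’s appeal to memory suppliers is that a CXL-attached memory expander can be presented to the host operating system as an additional, CPU-less NUMA node, reachable through existing software interfaces rather than a new programming model. The fragment below is a minimal sketch using Linux’s libnuma; the node number and pool size are assumptions, since actual enumeration depends on the platform and the CXL device.

```c
/* Sketch: allocating from a CXL-attached memory pool that Linux exposes
 * as a CPU-less NUMA node. The node number is hypothetical. */
#include <numa.h>      /* link with -lnuma */
#include <stdio.h>
#include <string.h>

#define CXL_NODE   2                         /* assumed node for the CXL expander */
#define POOL_BYTES (256UL * 1024 * 1024)

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available\n");
        return 1;
    }

    /* Bind this allocation to the far-memory node behind the CXL link. */
    void *pool = numa_alloc_onnode(POOL_BYTES, CXL_NODE);
    if (pool == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", CXL_NODE);
        return 1;
    }

    memset(pool, 0, POOL_BYTES);             /* touch pages so they land on the node */
    printf("allocated %lu bytes on NUMA node %d\n", POOL_BYTES, CXL_NODE);

    numa_free(pool, POOL_BYTES);
    return 0;
}
```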
In the Micron worldview, Optane’s role would be as a CXL-connected storage-class memory (SCM) pool. Other SCM products, such as Everspin’s STT-MRAM, will also likely need to support CXL in order to progress in the new CPU-GPU shared-memory processing environment. That is, if SCM has a role at all.
SCM’s role
Storage-class memory occupies a price-performance gap between faster, higher-priced DRAM and slower, lower-priced NAND flash. Its problem has been that, in SSD form, it is seen as too expensive for the speed increase it provides. In PMEM (DIMM) form it is too expensive and needs complex supporting software, making it a relatively poor DRAM substitute. No-one would use Optane PMEM if DRAM were (a) more affordable and (b) attachable to a CPU in larger quantities.
As the world of processing moves from a CPU-only model to one that twins multiple CPUs with multiple GPUs, memory needs to be sharable between all these processors. That requires a different connectivity method from the classic CPU socket. High-bandwidth memory (HBM) stacks memory dies and connects them to a processor through a silicon interposer. It is not much of a stretch to envisage HBM pools connected to CPUs and GPUs across a CXL fabric.
There are several SCM suppliers, none of which has made much progress compared to Intel’s Optane. Samsung’s Z-NAND is used in what are basically faster SSDs. Everspin’s STT-MRAM is seen as a potential DRAM replacement rather than a subsidiary, slower tier of memory beneath DRAM, which is Optane’s role. Spin Memory’s MRAM is in early development. Weebit Nano’s ReRAM is also in relatively early development.
It has taken Intel five years to get to the point where it still doesn’t have enough software support to drive mass Optane PMEM adoption – which shows that these small startups face a monumental problem.
The lesson of Optane PMEM is that all these technologies will need complex system and application software support and hardware connectivity if they are to work alongside DRAM.
Perhaps the real problem is that there is no storage-class memory market. The CPU/GPU connectivity and software implementation problems are so great as to deny any candidate technology market headroom.
Micron has judged that the SCM game is not worth the candle. Intel now has to decide if it should go it alone. It could double down on its Optane investment by buying Micron’s Lehi fab, or it could decide to spend its Optane and 3D XPoint development dollars elsewhere.