Dateline: April 1, 2019. Researchers at a secret Google facility have implemented multi-level cell DRAM, demonstrating 2 bits/cell (MLC) with a roadmap to 3 and 4 bits/cell, equivalent to MLC, TLC and QLC NAND.
MLC DRAM doubles the density of single-bit DRAM, enabling a doubling of DIMM capacity to 512GB from the current 256GB. Access latency rises from DDR4 DRAM’s 15ns to 45ns, but there is no decrease in endurance.
Once an application with a working set in excess of 3TB fits entirely in memory, the longer MLC DRAM access time becomes moot: IOs to storage are avoided, saving far more time than the extended MLC DRAM latency costs.
A single Optane SSD access can take 10,000ns, roughly 220 times longer than an MLC DRAM access. A NAND SSD accessed over NVMe can take 30,000ns per access, roughly 660 times longer. A 6TB working set application running on a 3TB DRAM system and needing 20,000 storage IOs executed in 300,000,000ns. Run on a 6TB MLC DRAM system with the same Xeon SP CPU, it took just 50,000ns: 6,000 times faster.
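The arithmetic behind those ratios can be checked with a quick back-of-envelope sketch; the figures below are the ones quoted in the article, not independent measurements.

```python
# Latencies as quoted above (nanoseconds).
MLC_DRAM_NS = 45          # MLC DRAM access
OPTANE_NS = 10_000        # Optane SSD access
NAND_NVME_NS = 30_000     # NVMe NAND SSD access

print(round(OPTANE_NS / MLC_DRAM_NS))     # ~222, rounded to "220 times" in the text
print(round(NAND_NVME_NS / MLC_DRAM_NS))  # ~667, rounded to "660 times"

# The 6TB-working-set example: 20,000 storage IOs on the 3TB DRAM system
# took 300,000,000ns in total (implying ~15,000ns per IO, between the two
# SSD figures quoted), versus 50,000ns when everything fits in MLC DRAM.
storage_run_ns = 300_000_000
mlc_run_ns = 50_000
print(storage_run_ns / mlc_run_ns)        # 6000.0 -> "6,000 times faster"
```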
The impact of this on machine learning model training and Big Data analytics runs will be incalculable, according to Alibaba Rahamabahini, AI Scientist Emeritus at Google’s Area 51 Moonshot facility: “With this kind of moonshot thinking around AI and machine learning, I believe we can improve millions of lives. It’s impossible to overstate the positive effects of this game-changing development on science and technology, with the benefit of improving human life. This outsized improvement has inspired me, and I’m tremendously excited about what’s coming, such as actually delivering fully autonomous vehicles and real time cancer diagnoses.”
The technology borrows word and bit line technology from NAND without affecting the byte addressability of DRAM.
Four distinct voltage levels are needed to store 2 bits per cell. Although the sensing and refresh circuitry is more complex, the DRAM die itself requires no extra fabrication steps, so the DRAM foundries see a doubling of density with no manufacturing cost increase. DRAM controllers need extra firmware and a few extra fabrication steps. System builders should expect an effective cut of at least 50 per cent in their DRAM prices, possibly more.
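The relationship between bits per cell and voltage levels follows the same pattern as multi-level NAND, and can be sketched as below; the function name and the table are illustrative, not from the article.

```python
# Illustrative sketch: a cell storing n bits must be sensed at 2**n
# distinct voltage levels, which is why 2 bits/cell needs four levels
# and why sensing circuitry grows more complex at 3 and 4 bits/cell.
def levels_needed(bits_per_cell: int) -> int:
    """Number of distinct voltage levels for a given bits-per-cell count."""
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s)/cell -> {levels_needed(bits)} voltage levels")
```

Running this prints the familiar SLC/MLC/TLC/QLC progression of 2, 4, 8 and 16 levels, matching the roadmap to 3 and 4 bits/cell mentioned above.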
Arvinograda Mylevsky, lead researcher at Google’s Area 51 Moonshot facility, said: “For far too long DRAM expense has limited application working set sizes and blighted the industry with its need to use external storage. MLC DRAM, combined with Persistent Memory, will relegate SSD and disk drive storage to archive store status, freeing modern AI, machine learning and analytics from the shackles of outdated, decades-old technology.”
Google is contributing the technology to the Open Compute Project, and making it available on an open source basis to all DRAM semiconductor foundry operators.