
Datrium out-guns VMAX, XtremIO and FlashArray all-flash boxes

Hyper- or hybrid converged system supplier Datrium’s DVX storage and compute system out-performs Dell EMC’s VMAX and XtremIO all-flash arrays as well as Pure Storage’s FlashArrays.

Datrium wants to be seen as a converged enterprise storage and compute systems supplier, and says it converges five use cases onto its DVX system: primary storage, backup, encryption, mobility and policies. These, it asserts, are traditionally stored and processed in different silos, with different suppliers’ products, be they on-premises or in the cloud.

The DVX platform can converge these. Datrium says it can also support remote and branch offices and small departments – the traditional hyper-converged infrastructure (HCI) market – but its focus is firmly on the enterprise.

In that case it has to do well at the basic enterprise primary storage game, where capacity, performance and low latency count. Has it got the chops for this?

According to its internal tests using IOMark it does.

Datrium compared its results with publicly available information about Dell EMC and Pure Storage arrays and came up with these comparisons:

It beat Dell EMC’s VMAX 950F and XtremIO X2 arrays and the Pure Storage Flash Array//m70 and //x70 in a range of mixed random read and write workloads and bandwidth tests. Pretty convincing.

The business’ products are aimed at hybrid multi-clouds with, Datrium says, the same customer experience whether it be on-premises or in the cloud.

Expansion into Europe

Datrium CEO Tim Page says his firm comes out top in 100 per cent of the customer POCs (proofs of concept) it has entered. Its last two quarters have been knockouts in terms of revenue and the firm is expanding outside its North America heartland, with Europe its first port of call.

Sean Mullins, ex-Dell EMC, has been appointed as VP of Sales in Europe.

 It will focus on building out its UK presence this year—hiring and training new sales staff and developing relationships with organisations and partners throughout the region—with more ambitious expansion plans beginning in January 2020.

Quantum jumps onto all-flash NVM Express drive and fabric train

File manager Quantum has jumped aboard the NVM Express drive and fabric access train with a 2U, 24-slotter box and new software to accelerate users’ access to its StorNext files.

StorNext is a scale-out, multi-tiered file data management product, including software and hardware such as the Xcellis arrays and Lattus object storage. It’s popular with Media and Entertainment (M&E) customers. NVMe drive support came with StorNext 6.2 in September last year.

The new F-Series NVMe arrays use new software: Quantum’s Cloud Storage Platform. Jamie Lerner, Quantum’s President and CEO, said: “This platform is a stepping stone for us, and for our customers, to move to a more software-defined, hyper-converged architecture, and is at the core of additional products we will be introducing later this year.”

He declared: “This is the most significant product launch we’ve done in years.”

F2000 diagram

Quantum’s F2000 array is the first product in the F-Series and, as the diagram above indicates, can hold up to 184TB of data in its active-active, dual-controller configuration using off-the-shelf hardware. It uses dual-ported drives for added reliability.

The software stack is Quantum’s Cloud Storage Platform, which offers block access and is tuned for video and video-like data to maximise streaming performance. It supports NVMe-oF access from host application servers as well as Fibre Channel, iSCSI, iSER and RDMA.

Quantum says it’s tightly integrated with StorNext and designed for M&E activities such as:

  • Post-production: real-time editing of 4K and 8K content.
  • Sports video: coping with tens to hundreds of cameras generating multiple concurrent ingest streams and playing the content in real-time.
  • Rendering and simulation: deployable in animation and visual effects studios needing high IOPs and low-latencies between large-scale render farms and the storage subsystem.

StorNext customers should see an immediate uplift in data access speed and lower latencies, along with reduced rack space needs compared to disk-based arrays.

Tom Coughlin, the President of Coughlin Associates, provided a supportive quote: “Combined with StorNext and its out-of-band metadata management, the F-Series provides storage tiering, content protection and diverse software defined management capabilities that should be well received by the media and entertainment industry.”

Blocks & Files thinks we can expect hyper-converged Quantum products to arrive later this year. This NVMe drive and fabric all-flash array catapults Quantum into the modern storage array age and should be a building block for even more development. Storage-class memory anyone?

IBM refreshes Storwize V5000 array line-up

IBM has replaced its five Storwize V5000 arrays with four faster, bigger and cheaper models.

V5000E

The V5000 and V7000 products are classic dual-controller storage arrays which use IBM’s Spectrum Virtualise software to provide a pool of SAN (block access) storage. They come in a 2U base chassis with 24 2.5-inch drive slots.

The old and new model ranges look like this:

As you can see, the V5000E and V5100 replace the original V5000 line. The table below shows the feature specs of old and new.

Bold text highlights changes and yellow columns indicate new products

The V5010E, with E indicating Extended, replaces the V5010 and V5020. The V5030E replaces the V5030 and all-flash V5030F.

V5100

A new V5100 line pushes the V5000E range closer to the high-end V7000s. It looks like this:

FCMs are IBM’s proprietary FlashCore Modules, its own-design SSDs. The V5100F is the all-flash model.

These have considerably more cache memory than the V5000Es and can support NVMe drives and NVMe over Fibre Channel to provide low-latency and fast access to data. IBM said the V5100s are ready for storage-class memory and 32Gbit/s Fibre Channel.

The V5100s can have two nodes in a cluster while the V7000s can cluster up to four nodes together. If you can bear looking at another table here is one showing some of their features:

Updated Storwize V7000 table.

Update: Blocks & Files was told by Eric Herzog: “The Storwize V7000 Gen3 introduced in October ’18 was also updated, along with the new Storwize 5000/5100 family announced 2 April, with the same 30.72TB SSDs, the 14TB HDDs, the 25Gbit/s Ethernet and the 32Gbit/s FC.”

Storwize management

All the Storwize arrays are managed through IBM’s Storage Insights, a cloud-based management and support tool, which provides analytics and automation.

Bullet points from IBM’s Storwize update briefing deck:

  • V5010E has 4x more cache than the V5010, 2x its IOPS, and scales to 12PB maximum flash capacity.
  • V5010E is 30 per cent cheaper than the V5010 and can be upgraded to the V5030E. 
  • V5030E has compression and deduplication. It scales to 23PB of all-flash storage in a single system; 32PB with 2-way clustering.
  • V5030E has a 30 per cent lower street price than the V5030 and 20 per cent more IOPS.
  • V5100s can have 2PB of flash in a 2U chassis, and scale out like the V5030F; 23PB in a single system or 32PB with 2-way clustering.
  • V5100s have 9x more cache than the V5030, and pump out 2.4x more IOPS than the V5030F (with data reduction), while costing 10 per cent more.

All in all, the refreshed V5000E and V5100 models and the updated V7000s go faster than their predecessors, store more data and let users get at it for less cost.

Intel announces Optane DIMM support with Gen 2 Xeon SP processors and QLC ruler

As expected Intel has updated its Xeon processor line and shone a bright light on Optane DIMM (DC Persistent Memory) general availability.

It’s also announced a dual-port Optane SSD and a QLC (4bits/cell) ruler format drive.

Gen 2 Xeons

Intel has updated its Xeon CPU line to the Gen 2 Xeon Scalable Processors (SP), previously known as the Cascade Lake line. The top end is the Platinum 9200 with up to 56 cores; 112 in a 2-socket system. There can be up to 12 memory channels per socket, and up to 4.5TB of memory (Optane PMEM plus DRAM) per socket.

Intel says there can be up to 36TB of system-level memory capacity in an 8-socket system when combined with traditional DRAM. That’s twice the normal (DRAM-only) system memory.

The Platinum 8200 has up to 28 cores and comes in 2, 4 and 8+ socket configurations. There are also Gold 6200s (to 24 cores), Gold 5200s (to 18 cores), Silver 4200s (12 cores) and Bronze 3200 series CPUs (to 8 cores).

Customers have to buy the Optane DIMMs and then pay again for Optane support in the Xeon CPU. For example:

A Xeon Platinum 8280L can support 3TB of Optane and 1.5TB of DDR4 DRAM; that’s 4.5TB of memory support, and it costs $17,906, whereas an 8280 with 1.5TB DRAM support costs only $10,009. The Optane support costs an additional $7,897; eye-watering. Anandtech has more price and configuration details.
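As a quick sanity check on those figures, here is a minimal sketch in Python, using only the per-socket capacities and list prices quoted above:

# Sanity-checking the Optane DIMM memory and pricing figures quoted above.
optane_tb_per_socket = 3.0     # Xeon Platinum 8280L Optane PMEM support
dram_tb_per_socket = 1.5       # DDR4 DRAM support
sockets = 8

total_per_socket = optane_tb_per_socket + dram_tb_per_socket
print(total_per_socket, "TB per socket")                        # 4.5 TB
print(total_per_socket * sockets, "TB in an 8-socket system")   # 36 TB

price_8280l = 17_906   # list price with Optane support
price_8280 = 10_009    # list price, DRAM-only support
print(f"Optane support premium: ${price_8280l - price_8280:,}")  # $7,897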

Intel’s also announced Xeon D-1600 SoC processors for network edge use. They have up to 8 cores.

Its new Agilex FPGAs, built with 10nm process technology, also support Optane DC Persistent Memory and the Compute Express Link (CXL), Intel’s latest cache- and memory-coherent interconnect.

Optane DIMM endurance

Intel is still not releasing detailed Optane DIMM performance numbers. A Storage Review article says the 256GB module has an over 350PBW rating for 5 years at a 15W power level. A chart illustrates this and indicates a 360-370 PBW value.

Assuming a 360PBW value, that’s 72PB/year or 197.26TB/day, which works out to 770.5 write cycles a day for the 256GB module, 281,250 a year, and 1,406,250 over five years.
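The arithmetic behind those figures, assuming the 360PBW mid-point and the 256GB module capacity, works out like this:

# Re-deriving the endurance numbers from a 360PBW, 5-year rating on a 256GB module.
pbw = 360            # petabytes written over the rating period
years = 5
module_tb = 0.256    # 256GB module expressed in TB

pb_per_year = pbw / years                  # 72 PB/year
tb_per_day = pb_per_year * 1000 / 365      # ~197.26 TB/day
cycles_per_day = tb_per_day / module_tb    # ~770.5 full-module writes/day

print(pb_per_year, round(tb_per_day, 2), round(cycles_per_day, 1))
print(round(cycles_per_day * 365), round(cycles_per_day * 365 * years))  # 281,250/year, 1,406,250 over 5 years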

Dual-port Optane SSD and QLC ruler

Joining the single-port P4800X is the dual-port Optane DC SSD D4800X (NVMe). Intel says it delivers 9x lower read latency than a dual-port NAND SSD under write pressure. No detailed performance numbers were provided.

Intel has also announced a QLC (4 bits/cell) 64-layer 3D NAND SSD, the D5-P4326, which uses the roughly 12-inch EDSFF ruler format to offer 15.36TB and 30.72TB capacities. The 15.36TB capacity is also available in a U.2 form factor product. The 30.72TB ruler can enable 1PB of storage in a 1U chassis.

The random read/write IOPS are 580,000/11,000 and the sequential bandwidth is 3.2/1.6 GB/sec. Average read latency is 137μs. Its endurance is 0.18 drive writes per day (DWPD) for random I/O, and 0.9 DWPD for sequential I/O.

The claim is it enables HDD and TLC SSD replacement in warm storage. On eBay a 15.36TB ruler costs $3,920.39; about $255/TB and $0.26/GB.
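The density and cost claims fall out of simple arithmetic; in the sketch below the 32-drives-per-1U figure is an assumption based on typical E1.L EDSFF chassis designs, not something Intel states above:

# Rough check of the 1PB-per-1U and price-per-TB claims (32 E1.L slots per 1U is an assumption).
ruler_tb = 30.72
slots_per_1u = 32
print(ruler_tb * slots_per_1u, "TB per 1U")     # 983.04 TB, i.e. roughly 1PB

ebay_price = 3920.39       # quoted eBay price for the 15.36TB ruler
capacity_tb = 15.36
print(round(ebay_price / capacity_tb, 2), "$/TB")           # ~255.23
print(round(ebay_price / (capacity_tb * 1000), 2), "$/GB")  # ~0.26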

Intel QLC ruler

They join Intel’s existing 7.68TB QLC P4320 drive.

Gen 2 Xeon and Optane DIMM ecosystem

A raft of server and other suppliers is supporting the Gen 2 Xeons and Optane DIMMs; Cisco, Fujitsu, Supermicro and VMware to name a few.

VMware is supporting Optane DIMMs in “App-Direct” mode as well as “Memory” mode, and is certifying the following maximum capacities for the vSphere 6.7 release:

  • Up to 6TB for 2-socket and 12TB for 4-socket systems of Optane DC Persistent Memory in Memory mode
  • A combination of DRAM and Optane DC Persistent Memory in App-Direct mode with a combined limit of 6TB for 2-socket and 12TB for 4-socket

VMware vSAN will not support “App-Direct” mode devices as cache or capacity tier devices at this time. Expect more developments to come.

Blocks & Files welcomes the dawning of the persistent memory era and notes that system and application software developments are needed before IT users will see the speed benefits. Bring it on.

MemVerge virtualizes DRAM and Optane DIMMs into single clustered memory pool

Startup MemVerge is developing software to combine DRAM and Optane DIMM persistent memory into a single clustered storage pool for existing applications to use with no code changes.

It says it is defeating the memory bottleneck by breaking the barriers between memory and storage, enabling applications to run much faster. They can sidestep storage IO by taking advantage of Optane DIMMs (3D XPoint persistent memory). MemVerge software provides a distributed storage system that cuts latency and increases bandwidth.

Data-intensive workloads such as AI, machine learning (ML), big data analytics, IoT and data warehousing can use this to run at memory speed. According to MemVerge, random access is as fast as sequential access with its tech – lots of small files can be accessed as fast as accessing data from a few large files.

The name ‘MemVerge’ indicates converged memory and its patent-pending MCI (Memory Converged Infrastructure) software combines two dissimilar memory types across a scale-out cluster of server nodes. Its system takes advantage of Intel’s Optane DC Persistent memory – Optane DIMMs – and the support from Gen 2 Xeon SP processors to provide servers with increased memory footprints.

MemVerge discussed its developing technology at the Persistent Memory summit in January 2019 and is now in a position to reveal more details.

Technology

The technology clusters local DRAM and Optane DIMM persistent memory in a set of clustered servers presenting a single pool of virtualised memory in which it stores Distributed Memory Objects (DMOs). Data is replicated between the nodes using a patent-pending algorithm. 

MemVerge software has what the company calls a MemVerge Distributed File System (MVFS).

The servers will be interconnected with RDMA over Converged Ethernet (RoCE) links. It is not a distributed storage memory system with cache coherence.

DMO has a global namespace and provides memory and storage services that do not require application programming model changes. It supports memory and storage APIs with simultaneous access.

MemVerge concept

Its software can run in hyperconverged infrastructure (HCI) servers or in a group of servers presenting an external, shared, distributed memory pool, the memory being the combined DRAM and Optane DIMM capacity. However, in both cases applications up the stack “see” a single pool of memory/storage.

Charles Fan, CEO, says MemVerge is essentially taking the VSAN abstraction layer lesson and applying it to 3D XPoint DIMMs.

The Tencent partnership, described here, involved 2,000 Spark compute nodes and MemVerge running as their external store. However MemVerge anticipates that HCI will be the usual deployment option.

It’s important to note that Optane DIMMs can be accessed in three ways:

  1. Volatile Memory Mode, with up to 3TB/socket this year and 6TB/socket in 2020. (See note 1 below.) In this mode the Optane is twinned with DRAM and data in the two is treated as volatile, not persistent. (See note 2.)
  2. Block storage mode, in which it is accessed via storage IOs rather than at byte level, and has a lower latency than Optane SSDs.
  3. App Direct Mode, in which it is byte-addressable using memory semantics by applications using the correct code. This is the fastest persistent memory access mode.

MemVerge’s DMO software uses App Direct Mode but the application software up the stack does not have to change.
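To make the distinction concrete, here is a minimal sketch of what App Direct Mode-style access looks like to software: a file on a DAX-mounted persistent memory filesystem is memory-mapped and then read and written with ordinary memory semantics rather than storage IO calls. The mount point and file name are illustrative assumptions; MemVerge’s own DMO interfaces are not public.

# Sketch of byte-addressable App Direct Mode access via a DAX-mounted pmem file.
# Paths are assumptions for illustration only.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/demo.dat"   # assumed fsdax namespace mounted with -o dax
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as buf:
    buf[0:5] = b"hello"   # store with memory semantics; no read()/write() storage IO path
    print(buf[0:5])       # load
    buf.flush()           # msync-style flush; production pmem code would use libpmem cache-line flushes
os.close(fd)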

Product delivery

The product can be delivered as software-only or inside an off-the-shelf, hyperconverged server appliance. The latter is based on 2-socket Cascade Lake AP server nodes with up to 512GB of DRAM, 6TB of Optane memory and 360TB of physical storage capacity, provided by 24 x QLC (or TLC) SSDs. There can be 128 such appliances in a cluster, linked with two Mellanox 100GbitE cards per node. The full cluster can have up to 768TB of Optane memory and 50PB of storage.
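Multiplying out the per-node figures gives the cluster maxima MemVerge quotes; the raw SSD total comes out just under the rounded 50PB figure:

# Cross-checking the quoted cluster maxima from the per-appliance figures.
nodes = 128
optane_tb_per_node = 6
ssd_tb_per_node = 360    # 24 QLC/TLC SSDs of ~15TB each

print(nodes * optane_tb_per_node, "TB Optane across the cluster")   # 768 TB
print(nodes * ssd_tb_per_node / 1000, "PB of storage")              # 46.08 PB, quoted as ~50PB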

The nodes are interlinked with RoCE but they can also be connected via UDP or DPDK (a UDP alternative). Remote memory mode is not supported for the latter options.

DRAM can be used as memory or as an Optane cache. A client library provides HDFS-compatible access. The roadmap includes adding NFS and S3 access.

MemVerge appliance.

The hyperconverged application software can run in the MemVerge HCI appliance cluster and supports Docker containers and Kubernetes for app deployment. Fan envisages an App Store and likens this to an iPhone for the data centre.

The software version can run in the public cloud.

Speed features

MemVerge says its MCI technology “offers 10X more memory size and 10X data I/O speed compared with current state-of-the-art compute and storage solutions in the market.”

According to Fan, the 10X memory size idea is based on a typical server today having 512GB of DRAM. The 10X data I/O speed claim comes from Optane DIMMs having 100-250ns latency while the fastest NVMe drive (an Optane SSD) has a latency of around 10 microseconds.

He says 3D XPoint’s theoretical endurance is in the range of 1 million to 100 million writes (10^6 to 10^8), with Intel’s Optane being towards the lower end of that. It is dramatically better than NAND flash with its 100 to 10,000 write cycle range (10^2 to 10^4) – this embraces SLC, MLC and TLC flash.

MemVerge background

MemVerge was founded in 2017 by VMware’s former head of storage, CEO Charles Fan, whose team developed VSAN, VMware’s HCI product; chairman Shuki Bruck, the co-founder of XtremIO; and CTO Yue Li. There are some 25 employees.

Fan and Bruck were also involved in founding file virtualization business Rainfinity, later bought by EMC.

The company has announced a $24.5 million Series A funding round from Gaorong Capital, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners and Northern Light Venture Capital.

The MemVerge beta program will be available in June 2019. Visit www.memverge.com to sign up.

Note 1. The jump from 3TB/socket this year to 6TB/socket next year implies either that Optane DIMM density will double or that more DIMMs will be added to a socket.

Note 2. See Jim Handy’s great article explaining this.

Google endorses Elastifile Cloud File Service for GCP

Google is endorsing Elastifile’s Cloud File Service for its cloud, despite having its own Cloud Filestore service.

Cloud Filestore supports NFS v3 and comes in standard (up to 180MB/sec and 5,000 IOPS) and premium (700MB/sec and 30,000 IOPS) editions.

Elastifile’s CFS has far wider file protocol support: NFS v3/4, SMB, AWS S3 and the Hadoop File System. It provides varying levels of service with differing cost and performance, and integrates tiering between file and object storage. Pricing for provisioned capacity starts at 10¢/GB/month.

CFS can scale out to support 1,000s of nodes and PBs of capacity, according to Elastifile, and deliver millions of IOPS. It features automated snapshots and asynchronous replication.

Elastifile CFS graphic.

A big appeal of Elastifile is that it runs both on-premises and in the cloud and so can function as a cloud on-ramp for Google. The two say it helps bridge the gap between traditional and cloud-native workflows, making cloud integration easier than ever before.

Dominic Preuss, Google’s director of product management, is cited as the co-writer of an Elastifile blog, along with Elastifile CEO Erwan Menard, and he says: “We’ve been working closely with Elastifile on this effort to bring you scale-out file services that complement our Cloud Filestore offering and help you meet high-performance storage needs.”

Preuss says there is a deep engineering partnership with Elastifile.

CFS is said to be well-suited for horizontal use cases such as persistent storage for Kubernetes, data resilience for pre-emptible cloud VMs and scalable Network Attached Storage (NAS) for cloud-native services. An example given is CFS used to run SAP on the Google cloud with NetWeaver and HANA workflows.

The intent to add CFS to the Google Cloud Platform was announced in December. Now the Elastifile Cloud File Service is actually available in the Google Cloud Marketplace.

Pure Storage buys Compuverde (IBM’s Spectrum NAS software supplier)

Pure Storage has bought NAS software supplier Compuverde, best known for making the software behind IBM’s Spectrum NAS product.

Compuverde was founded in 2008 by Stefan Bernbo, CEO, system architect Christian Melander and Roger Persson who is named in several software patent filings. The funding history is unknown though chairman Michael Blomqvist has been described as its financier.

The Compuverde team joins Pure Storage, which also gets to work with Compuverde’s partners.

The Sweden-based company sells vNAS and hyperconverged storage products, with vNAS scalable to more than 1,000 nodes and capacity ranging from 1TB to several exabytes. The hyperconverged system works with the VMware vSphere, Hyper-V, Xen and KVM hypervisors.

At the time of the IBM deal in 2018, Compuverde software scaled out to hundreds of nodes and featured self-healing capability and erasure coding, with data striped across nodes and locations as well as disks. It supported NFS v3, v4 and v4.1, as well as SMB v1, v2 and v3, Amazon S3 and OpenStack Swift.

Compuverde vNAS features in 2018

A virtual IP mechanism ensured all nodes in a cluster appeared available at all times, even when a particular node was taken down for upgrade or had failed, and the software supported intelligent locking and snapshots.

IBM Spectrum NAS was targeted at home directories, general and virtual machine file serving, and to provide NAS storage for Microsoft applications.

Pure flying into filers

Pure says the acquisition will expand file capabilities. Charles Giancarlo, Pure’s Chairman and CEO, stated: “We’re excited about the opportunities that the Compuverde team and technology bring to Pure’s existing portfolio. As IT strategies evolve, enterprises look to leverage the innovations of the public cloud in conjunction with their on-prem solutions.”

Blocks & Files expects Pure to use Compuverde software in the public cloud as well as offer all-flash filers. Watch out, Elastifile, Isilon, NetApp, Panasas, Qumulo and WekaIO.

Acquisition details have not been revealed and the deal is expected to close this month.

NetApp adds MAX Data support for new Intel Xeons and Optane DIMMs

NetApp’s updated v1.3 MAX Data supports new Intel Xeon CPUs and Optane DC Persistent Memory in the host servers to which it feeds data from ONTAP arrays. 

Optane DC Persistent Memory is 3D XPoint media installed on DIMMs. It is also known as storage-class memory, and MAX Data combines it with the host server’s DRAM to provide a unified data store. NetApp says it helps customers use their data without having to redesign their critical applications.

NetApp MAX Data scheme.

NetApp said last year that MAX Data supported Optane DIMMs, but they were not available when that was stated. Now they are becoming available, along with new Xeon processors.

Jennifer Huffstetler, VP and GM of Intel’s Data Center Product Management and Storage, said in a canned quote: “With the second generation of Intel Xeon Scalable processors and Intel Optane DC persistent memory, customers can discover the value of their stored data. By working together with innovative companies such as NetApp, we can move, store and process more data.”

Optane DC Persistent Memory

Up until now Intel has talked about its Cascade Lake Xeon CPUs with the 2-socket AP version having explicit Optane DIMM support and the standard version thought to have the same support, albeit without a unified memory controller. Now Intel is using the term ‘second generation’ to describe its Optane DIMM-supporting Xeon CPUs.

NetApp says MAX Data is the first enterprise storage solution in the market that uses Intel Optane DC persistent memory in its servers for storing persistent data. Blocks & Files thinks it won’t be the last.

Note: See NetApp press release here.

AMD gets Western Digital Memory Extension tech for EPYC Optane battle

AMD has announced it’s using Western Digital’s Memory Extension technology to provide servers with greater effective memory capacity and so take on Optane DRAM extension technology.

The Ultrastar DC ME200 Memory Extension Drive is a NAND device available in 1TiB, 2TiB and 4TiB capacities. Its use requires no modifications to the host server’s operating system, system hardware, firmware or application stacks. The ME200 has an NVMe/PCIe interface and comes in U.2 and AIC (add-in-card) HH-HL form factors.

The ME200 is a tweaked version of WD’s Ultrastar SN200 SSD, built with planar (single layer) 15nm MLC (2bits/cell) NAND. It uses vSMP software from ScaleMP to provide replacement memory management unit (MMU) functionality and virtualises the drive to form a virtual memory pool along with the host system’s DRAM.

ScaleMP’s vSMP ServerONE software can, it says: “aggregate up to 128 EPYC-based servers into a single virtual machine. This translates to 256 EPYC processors, up to 16,384 CPUs, and with up to 512 TB of main memory.”
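ScaleMP’s numbers unpack straightforwardly if you assume two sockets per server and count logical CPUs (threads) rather than cores; the per-thread and per-server figures below are assumptions chosen to reproduce the quoted totals:

# Unpacking the vSMP ServerONE aggregation claim (2 sockets/server and 64 threads/processor assumed).
servers = 128
sockets_per_server = 2
threads_per_processor = 64      # 32 cores x 2 SMT threads on first-gen EPYC (assumption)
memory_tb_per_server = 4

print(servers * sockets_per_server, "EPYC processors")                      # 256
print(servers * sockets_per_server * threads_per_processor, "logical CPUs") # 16,384
print(servers * memory_tb_per_server, "TB of main memory")                  # 512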

Western Digital says the ME200 drive improves the EPYC-based server memory-to-core ratio compared to conventional scale-out DRAM compute clusters using only DIMMs. The drive also enables lower TCO of in-memory infrastructure through consolidation. 

An example: a 30-node cluster holding 30TiB of data in memory can be reduced to an 8-node/32TiB one with 4TiB of system memory each, and with increased per-node CPU utilisation.

A 1U server can support up to 24TiB of system memory using the Ultrastar memory drive for in-memory compute clusters.

WD suggests its Ultrastar memory drive is good for in-memory database engines like SAP HANA, Oracle, IBM, and Microsoft, and scale-out memory-centric architectures like Redis, Memcached, Apache Spark and large-scale databases. 

Regarding Optane

Optane persistent memory – storage-class memory – use is restricted to Intel server processors, notably the Cascade Lake CPUs, now known as Gen 2 Xeon SPs.

Intel extending Optane support to AMD processors is about as likely as the Moon reversing its orbital direction. Hence AMD’s work with Western Digital and ScaleMP.

Compared to the use of Optane DIMMs to expand effective memory, the ME200 costs less, is probably simpler to implement, and is available for AMD EPYC as well as Intel x86 processors. Optane-enhanced memory servers may well go faster though.

Back in 2016 ScaleMP said its software can pool Optane SSDs and DRAM as well as NAND SSDs and DRAM. We don’t hear so much about this now.

Get an ME200 datasheet here. Get a ScaleMP vSMP white paper here.

Samsung’s Z-NAND is an okay Optane competitor

With just three times longer latency, Samsung’s 983 ZET Z-NAND SSD is a fair spec sheet competitor to Optane SSDs, beating them on most IO measures.

The 983 ZET uses 48-layer 3D NAND organised in SLC (1 bit/cell) mode and Samsung says it has optimised it for higher performance. It comes in 480GB and 960GB capacities.

Product briefs for the 983 ZET (Z-NAND Enterprise Technology) SSD show less than 0.03ms latency. That’s <30 µs, compared to less than 10 µs latency for the Optane DC P4800X. They show it as faster than Optane at all IOs except random writes but suffering on the endurance front.

Samsung 983 ZET SSD with heat sink removed

The ‘up to’ performance numbers (with Optane DC P4800X numbers in brackets) are:

  • Random read/write IOPS – 750,000/75,000 (550,000/500,000)
  • Sequential read/write – 3.4/3.0 GB/sec (2.4/2.0 GB/sec)

Optane’s random read and write numbers are similar, whereas the 983 ZET drive is ten times slower at random writes than reads. At random reads and all sequential IO the 983 ZET is faster than Intel’s Optane SSD.

The 480GB 983 lasts for 8.5 DWPD (7.44PB written) over 5 years, with the 960GB version enduring 10 DWPD (17.52PB written). But Intel’s P4800X can sustain up to 60 DWPD; far more.
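Those endurance figures are just the DWPD ratings multiplied out over the five-year warranty; a quick sketch:

# Converting the 983 ZET DWPD ratings into lifetime petabytes written over the 5-year warranty.
def lifetime_pb(capacity_gb, dwpd, years=5):
    return capacity_gb * dwpd * 365 * years / 1_000_000   # GB written -> PB

print(round(lifetime_pb(480, 8.5), 2))   # ~7.45 PB (quoted as 7.44PB)
print(round(lifetime_pb(960, 10), 2))    # 17.52 PB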

Samsung’s 983 supports AES 256-bit encryption, has capacitor-backed power loss protection for its DRAM cache, and comes as an HHHL add-in card (AIC).

Both drives use the NVMe interface operating across PCIe gen 3 x 4 lane and have 5 year warranties.

A 480GB version will cost $999.99, with $1,999.99 needed for the 960GB version. A 750GB DC P4800X Optane costs $3,223.42 on Span.com, making the 983 ZET substantially less expensive.

Blocks & Files wonders if Samsung is considering producing a Z-NAND NVDIMM.

MLC DRAM makes fully autonomous vehicle technology achievable

Dateline: April 1, 2019. Researchers at a secret Google facility have implemented multi-level cell DRAM, demonstrating 2 bits/cell (MLC) with a roadmap to 3 and 4 bits/cell, equivalent to MLC, TLC and QLC NAND.

MLC DRAM has double the density of single bit DRAM, enabling a doubling of DIMM density to 512GB from the current 256GB. There is an increase in access latency, from DDR4 DRAM’s 15ns to 45ns but no decrease in endurance.

Once an application with a working set in excess of 3TB can fit in memory, the longer MLC DRAM access time becomes moot, as IOs to storage are avoided, saving much more time than the extended MLC DRAM latency costs.

A single Optane SSD access can take 10,000ns, 220 times longer than MLC DRAM access. An NAND SSD accessed with NVMe can take 30,000ns per access, 660 times longer than MLC DRAM. A 6TB working set application running in a 3TB DRAM system and needing 20,000 storage IOs executed in 300,000,000ns. When run in a 6TB MLC DRAM system with the same Xeon SP CPU it took just 50,000ns; 6,000 times faster.

The impact of this on machine learning model training and Big Data analytics runs will be incalculable, according to Alibaba Rahamabahini, AI Scientist Emeritus at Google’s Area 51 Moonshot facility; “With this kind of moonshot thinking around AI and machine learning, I believe we can improve millions of lives. It’s impossible to understate the positive effects of this game-changing development on science and technology, with the benefit of improving human life. This outsized improvement has inspired me, and I’m tremendously excited about what’s coming, such as actually delivering fully autonomous vehicles and real time cancer diagnoses.”

The technology borrows word and bit line technology from NAND without affecting the byte addressability of DRAM.

Four different voltage levels are needed to enable 2 bits per cell. Although there is more complex sensing and refresh circuitry, the fabrication process requires no extra steps, so the DRAM foundries see a doubling of density with no manufacturing cost increase. DRAM controllers need extra firmware and a few extra fabrication steps. System builders should expect to see at least an effective 50 per cent cut in their DRAM prices, possibly more.

Arvinograda Mylevsky, lead researcher at Google’s Area 51 Moonshot facility, said: “For far too long DRAM expense has limited application working set sizes and blighted the industry with its need to use external storage. MLC DRAM, combined with Persistent Memory, will relegate SSD and disk drive storage to archive store status, freeing modern AI, machine learning and analytics from the shackles of out-dated, decades-old technology.”

Google is contributing the technology to the Open Compute Project, and making it available on an open source basis to all DRAM semiconductor foundry operators.

A potted history of all-flash arrays

Fourteen years ago Violin Memory began life as an all-flash array vendor, aiming to kick slower-performing disk drive arrays up their laggardly disk latency butt. Since then 18 startups have come, been acquired, survived or gone in a Game of Flash Thrones saga.

Only one startup – Pure Storage – has achieved IPO take-off speed and it is still in a debt-fuelled and loss-making growth phase. Two other original pioneers have survived but the rest are all history. A terrific blog by Flashdba suggested it was time to tell this story.

Blocks & Files has looked at this turbulent near decade and a half and sees five waves of all-flash array (AFA) innovation.

We’re looking strictly at SSD arrays, not SSDs or add-in cards, and that excludes Fusion IO, STEC, SanDisk – mostly – and all its acquisitions, and others like Virident.

A schematic chart sets the scene. It is generally time-based, with the past on the left and present time on the right hand side. Coloured lines show what’s happened to the suppliers, with joins representing acquisitions. Stars show firms crashing or product lines being cancelled. Study it for a moment and then we’ll dive into the first wave of AFA happenings.

First Wave

The first group of AFA startups comprised DSSD, Kaminario, Pure Storage, Skyera, SolidFire, Violin Memory, Whiptail, X-IO and XtremIO. Pure and XtremIO achieved major success, and XtremIO, post-acquisition by EMC, became the biggest-selling AFA of its era, achieving $3bn in revenues after three years of availability.

XtremIO bricks

EMC was convinced of AFA goodness and spent $1bn buying DSSD, an early NVMe-oF array tech – but it bought a dud. After Dell bought EMC it canned the product in March 2017. This was possibly the biggest write-off in AFA history.

Pure Storage grew strongly, IPOed and has now joined the incumbents, boasting a $1.6bn annual revenue run rate.

Kaminario survives and is growing. Violin has survived a Chapter 11 bankruptcy and is recovering from walking wounded status.

Texas Memory Systems was bought by IBM and its tech survives as IBM’s FlashSystem arrays. Skyera stumbled and was scooped up by Western Digital in 2014.

SanDisk had a short life as an AFA vendor with its InfiniFlash big data array, before it was bought by Western Digital in 2015 for an eye-watering $19bn. That was the price WD was willing to pay to get into the enterprise and consumer flash drive business.

SolidFire was bought by NetApp for $870m in December 2015.

SolidFire array

Whiptail was bought by Cisco in September 2013 for $415m. It found it had bought an array tech that needed lots of development work and in the end it canned the Invicta product in June 2015.

Second wave – hybrid startups go all-flash

The next round of AFA development came from Nimble, Tegile and VM-focused Tintri. These three prominent hybrid array startups quickly went all-flash and formed a second AFA wave.

All have been acquired. HPE bought Nimble with its pioneering InfoSight cloud management facility for its customers’ arrays. Nearly every other array supplier has followed Nimble’s lead and HPE is extending the tech to 3PAR arrays and into the data centre generally.

Poor Tintri crashed, entered Chapter 11 and had its assets bought for $60m by HPC storage supplier DDN in September last year. Tintri gives DDN a route into the mainstream enterprise array business.

X-IO was another hybrid startup that went all-flash. It stumbled, went through multiple CEOs and then, under Bill Miller, sold off its ISE line to Violin. It continues as Axellio, a maker of all-flash IoT edge boxes.

Incumbents retrofit and acquire

The seven mainstream incumbent suppliers all bought startups and/or retrofitted their own arrays with AFA tech, and in two cases tried to develop their own AFA technology. One, NetApp’s FlashRay, was killed off on the verge of launch in favour of AFA-retrofitted ONTAP.

The other, HDS’s in-house tech, survives but is not a significant player. In other words, no incumbent developed an AFA tech from the start that became a great product.

Dell EMC retrofitted flash to VMAX and VNX arrays on the EMC side of the house, and SC arrays on the Dell side. IBM flashified its DS8000 and Storwize arrays. HPE put a flash transplant into its 3PAR product line.

And Cisco? Cisco gave up after killing Invicta.

Invicta appliance

Interfaces

Initially, SSDs were given SATA and SAS interfaces. Then much faster multi-queue NVMe interfaces were used with direct access to a server or drive array controller’s PCIe bus, instead of indirect access through a SATA or SAS adapter.

This process is ongoing and SATA is on the way out as an SSD interface. NAND tech avoided the planar (single-layer) development trap looming from ever-smaller cells becoming unstable by reverting to larger process sizes and layering decks of flash one above the other in 3D NAND.

It started with 16 layers, then 32, 48 and 64, and is now moving to 96 layers with 128 coming. At roughly the same planar-to-3D NAND transition time, single-bit cells gave way to double-capacity MLC (2 bits/cell) flash, then TLC (3 bits/cell), and now we are seeing QLC (4 bits/cell) coming.

The net-net is that SSD capacities rose and rose to equal disk drive capacities – 10, 12 and 14TB – and then surpass them with 16TB drives.

This process accelerated the cannibalisation of disk drive arrays by flash arrays. All the incumbents are busy helping their customers replace old disk drive arrays with newer AFA products. It’s a gold mine for them.

Third wave of NVMe-oF-inspired startups

We have also seen the rise of remote NVMe access, extending the NVMe protocol across networking links such as Ethernet and InfiniBand initially, and TCP/IP and Fibre Channel latterly, to speed up array data access.

This technology prompted a third wave of AFA startups: Apeiron, E8, Excelero, Mangstor and Pavilion Data. Interestingly, DSSD was a pioneer of NVMeoF access but, among other things, was too early with its technology.

Late arrival Vast Data has seasoned its NVMe-oF tech with QLC flash and Optane storage-class memory, giving it a one-array-fits-most-use-cases product to sell.

Mangstor crashed and fizzled out, becoming EXTEN Technologies, but the others are pushing ahead, trying to grow their businesses before the incumbents adopt the same technology and crowd them out.

However, the incumbents, having learnt the expensive lesson of buying in AFA tech, are adopting NVMe-oF en masse.

The upshot is that 15 companies are pushing NVMe-oF arrays at the market.

The storage-class memory era arrives

Storage-class memory (SCM), also called persistent memory, as exemplified by Intel’s Optane memory products using 3D XPoint non-volatile media, promises to greatly increase data access speed. Nearly all the vendors have adoption programs. For instance:

  • HPE has added Optane to 3PAR array controllers.
  • Dell EMC is adding Optane to its VMAX and mid-range array line.
  • NetApp is feeding Optane caches in servers from its arrays.
Optane SSD

The third wave of startups need to adopt SCM fast or face the prospect of getting frozen out of the NVMe-oF array market they were specifically set up to develop.

Fast-reacting incumbents are moving so quickly that large sections of the SCM-influenced array market, the incumbent customer bases, will be closed off to the third wave startups and that will result in supplier consolidation.

It has always been that way with tech innovation and business. Succeed and you win big. Fail and your fall can be long and miserable. But we salute the pioneers: the healthy, like Pure and Kaminario, and the ones with arrows in their backs – DSSD, Mangstor, Tintri, Violin, Whiptail, X-IO.

You folks helped blaze a trail that revolutionised storage arrays for the better, and there is still a way to go. How great is that?