
Buy, buy or bye-bye? MapR misses deadline to lifeline

MapR, the struggling Hadoop data analytics firm, has missed its July 3 deadline to sell out or shut up shop. Today is Independence Day, July 4, so an update is unlikely until tomorrow at the earliest.

On May 31, MapR revealed in a WARN notice in California that it was two weeks away from closure. In the WARN letter MapR CEO John Schroeder wrote that the board was considering two letters of intent to buy the company. But extremely poor – and unexpected – results in the first three months of the year had torpedoed negotiations to secure debt financing.

The June 14 deadline was extended to July 3 according to a Datanami report on June 18, which revealed MapR had signed a letter of intent to sell the company. The unknown potential acquirer was performing due diligence to see if it could consummate the acquisition.

If this process didn’t complete successfully, MapR anticipated it would start layoffs and close down from July 3 onwards. That date has now passed with no announcement of an acquisition or a third deadline. The company has about 120 employees based in California.

MapR’s total funding is $280m from six investing firms, including Google’s CapitalG, Qualcomm Ventures and Lightspeed.

Cisco, NetApp crank up FlexPod with MAX Data

Cisco and NetApp are adding MAX Data to their FlexPod reference design to make applications run faster.

In a white paper published last week Cisco showed MAX Data on FlexPod is capable of delivering five times more I/O operations at 25 times lower latency than the same system without MAX Data installed.

FlexPod is a converged infrastructure (CI) reference platform for compute, storage and networking. It incorporates Cisco validated designs for Cisco UCS servers and Nexus and MDS switches with NetApp’s validated architecture for all-flash and hybrid flash/disk storage arrays running ONTAP software.

MAX Data is NetApp’s Memory Accelerated Data software which uses Optane DIMM caching in servers backed by an all-flash NetApp array. MAX Data presents a POSIX file system interface to applications, which don’t need to change. The software tiers data from the ONTAP all-flash array (treated as an underlying block device) into the Optane persistent memory.

The hottest, most frequently accessed data is kept in the Optane memory space, and cooler, less frequently used, data is tiered to the ONTAP storage array. Most data requests are serviced from Optane with misses serviced from the array.

The array is connected to the Optane DIMMs by an NVMe-oF link. Applications in the server get their data IO requests serviced from the Optane cache/tier instead of the remote NetApp array.

This greatly reduces data access latency times, down to about 10 microseconds from a millisecond or so. Databases can support more transactions with fewer computing resources and complete user queries more quickly.
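
To make the benefit concrete, here is some back-of-the-envelope arithmetic. The 10μs and 1ms figures come from the paragraph above; the hit ratios are our own illustrative assumptions, not NetApp numbers. The effective latency of a tiered read path is the hit ratio times the Optane latency plus the miss ratio times the array latency, as this C sketch works through:

    #include <stdio.h>

    /* Illustrative arithmetic only: effective read latency of the Optane
     * tier plus ONTAP array combination, for some assumed hit ratios. */
    int main(void) {
        const double optane_us = 10.0;    /* Optane tier hit: ~10 microseconds */
        const double array_us  = 1000.0;  /* array miss: ~1 millisecond */
        const double hit_ratios[] = { 0.90, 0.95, 0.99 };

        for (int i = 0; i < 3; i++) {
            double h = hit_ratios[i];
            double eff = h * optane_us + (1.0 - h) * array_us;
            printf("hit ratio %.0f%% -> effective latency %.1f us\n",
                   h * 100.0, eff);
        }
        return 0;
    }

Even at a 99 per cent hit rate the occasional misses roughly double the average latency, which is why keeping the hot working set inside the Optane tier matters so much.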

Design of the times

The FlexPod MAX Data design uses third-generation Cisco UCS 6332-16UP Fabric Interconnects and the UCS virtual interface card with 40Gbit/s Ethernet links to a NetApp AFF A300 storage cluster using vPCs (Virtual Port Channels).

It supports 16-Gbit/s Fibre Channel, for Fibre Channel Protocol (FCP) between the storage cluster and UCS servers.

NetApp Cisco topology diagram of FlexPod MAX Data system. (vPC is a Virtual Port Channel.)

Optane DIMMs in 128GB, 256GB and 512GB capacities can be used.

The Optane DIMMs must be paired with DRAM DIMMs in each memory channel and can be used in various access modes:

  • Memory mode, which provides additional volatile memory capacity.
  • App Direct mode, which provides persistence and can be accessed through traditional memory load and store operations.
  • Mixed mode, in which a percentage of the DIMM is used in Memory mode and the rest in App Direct mode.

MAX Data uses App Direct mode. For application vendors to use this Optane mode themselves, they must rewrite their applications. With MAX Data, applications can use Optane DIMMs without any rewriting of code.
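
For a flavour of what that rewrite involves, here is a minimal sketch using the C API of Intel’s PMDK libpmem library. The /mnt/pmem/log path and the 4KB size are hypothetical, and the sketch assumes an App Direct namespace with a DAX-mounted file system; build with cc file.c -lpmem.

    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    #define LEN 4096

    int main(void) {
        size_t mapped_len;
        int is_pmem;

        /* Map a file on persistent memory straight into the address space. */
        char *addr = pmem_map_file("/mnt/pmem/log", LEN, PMEM_FILE_CREATE,
                                   0666, &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Ordinary store instructions write the data... */
        strcpy(addr, "hello, persistent memory");

        /* ...but the application must flush CPU caches itself for durability. */
        if (is_pmem)
            pmem_persist(addr, mapped_len);  /* user-space cache-line flush */
        else
            pmem_msync(addr, mapped_len);    /* fall back to msync() */

        pmem_unmap(addr, mapped_len);
        return 0;
    }

MAX Data, in effect, does this kind of plumbing beneath its POSIX interface, which is why applications can stay unmodified.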

StorCentric scoops up ailing startup Vexata

Enterprise NVMe array startup Vexata has been bought by StorCentric, a private-equity-owned company which yesterday acquired the Retrospect backup software company.

Vexata was founded in 2013 by CEO Zahid Hussain and CTO Surya Varanasi. They developed a high-performance all-flash enterprise array, the VX-100, and launched it in October 2017. It used NAND or Optane drives hooked up to host servers across NVMe-oF links. The array posted good SPC-2 benchmark results in September 2018, which were followed by a Fujitsu reselling deal in North America.

Vexata VX-100 array.

By this time total funding was $54m, raised in four funding rounds, and included a $5m top-up in 2017.

In March this year, company difficulties prompted a couple of strategy changes.

First, Vexata separated out its software, calling it the VX-Cloud Data Acceleration Platform, which ran on FPGA-assisted commodity servers accessing a back-end x86 storage server full of NVMe SSDs. Vexata worked with Fujitsu and Supermicro to develop reference architecture systems using FPGA-accelerated servers as the controller base.

Second, it abandoned direct sales in favour of channel reselling and laid off some staff.

Execs started leaving and now the firm has been sold to StorCentric, almost certainly at a big loss for the investors. The company has certainly had a highly charged twelve months.

StorCentric

Vexata will now run as a fully-owned StorCentric subsidiary.

StorCentric is a private equity investment vehicle, run by Mihir Shah, an ex-CEO of Drobo, set up to buy the Drobo personal/prosumer/SMB storage systems and Nexsan SAN and archive arrays from the remnants of Imation in a series of deals.

Mihir Shah.

In 2018 we wrote: “Nexsan arrays will need NVMe drive and NVMe-oF tech to survive in the tier 1 enterprise array space.” Whatever we were smoking it was good stuff because StorCentric has done just that by buying Vexata.

There are now four product businesses inside StorCentric: Drobo, Nexsan, as of yesterday Retrospect, and as of today, Vexata.

Mihir Shah, StorCentric CEO, offered a prepared quote: “Our Nexsan enterprise customers are evaluating NVMe solutions to address evolving requirements for their most demanding applications. The Vexata products will provide a portfolio of highly scalable and performant data storage solutions with the operational simplicity of StorCentric’s portfolio of products.”

Zahid Hussain is not joining StorCentric, so Varanasi provided the Vexata canned quote. “The Vexata team is excited to be joining the StorCentric group of companies, and we see the move as an ideal fit for both organizations…Vexata will be able to leverage the strong Nexsan channel community of over 1,000 partners.” Customers get StorCentric support services, with its 59-person headcount.

Gaining synergies

If StorCentric runs each company as a separate business, how is it to realise synergies between these companies? When it bought Drobo and Nexsan in September, Shah spoke of the company’s ambition to develop a hub and spoke architecture, with Nexsan as hub and Drobo as a spoke.

We can see how Retrospect technology could be applied to the Nexsan hub, and also that the Nexsan hub could get an injection of Vexata technology.

But for this to happen the engineering teams need to be aligned and, possibly, combined, so the four separate businesses could evolve into one with four business units sitting atop a unified engineering team.

Comment

StorCentric has acquired a combined total of one million customers. This is not bad for a company that didn’t exist a year ago and which has basically bought near-distressed or unfulfilled-potential assets from near-failing owners to give them an energising operational transplant.

We think that the Nexsan channel will need upskilling to sell into the NVMe array marketplace, with hot competition between well-established mainstream vendors (Dell EMC, Hitachi Vantara, HPE, IBM, NetApp, Pure Storage) and energetic startups (Apeiron, E8, Excelero, Kaminario and Pavilion), plus Western Digital with its IntelliFlash (Tegile) technology.

Bring it on. There are opportunities for Nexsan arrays to use Vexata technology and for Vexata customers to be exposed to Retrospect backup. And maybe StorCentric has more acquisitions in mind. Bring that on too.

Q. Does Optane DIMM access have to be so complicated? A. Yes

Intel’s Optane DIMMs can be accessed in five different ways, each of which has its advantages and disadvantages. We paint a broad-brush picture here of the five to show what’s involved and how the modes differ.

The five modes make the decision of whether or not to use Optane DIMMs in a server far more complex than a binary yes/no choice.

Optane can also be implemented as a faster-than-NAND SSD, which gives six Optane access methods in total. This article is largely based on four articles by Jim Handy of Objective Analysis, which start with an overview. For more detailed information, study the four Handy blog posts.

But first, an Optane recap.

Optane

Optane is Intel’s brand name for devices built using 3D XPoint media. This is an implementation of Phase-Change Memory in which an electrical current is used to change the state of a Chalcogenide glass material from crystalline to amorphous and back again. The two states have different resistance levels and these are used to signal binary ones and zeroes. Each state is persistent or non-volatile.

This is why Optane is called persistent or storage-class memory (SCM).

The XPoint media is fabricated in cells which are laid out in a 2-layer crosspoint array. Access time is faster than NAND flash but slower than DRAM, with writes taking about three times longer than reads.

Optane can be implemented as a memory bus-connected DIMM or as an NVMe-connected SSD.

When connected as an Optane DIMM, only certain 2nd-generation Intel Xeon Scalable processors can support it: ones using Intel’s proprietary DDR-T protocol. This complies with standard DDR4 but adds Optane DIMM management commands.

Speed limits

An Optane DIMM read can take 350 nanoseconds on average with the write latency averaging 1,050ns. For comparison and using generic example numbers, we present this list:

  • DDR4 memory accesses – 14ns
  • Optane DIMM – 350ns
  • NVMe Optane SSD access can take 10,000ns (10μs)
  • NVMe NAND SSD write – 30,000ns (30μs)
  • NVMe NAND SSD read – 120,000ns (120μs)
  • SATA NAND SSD read – 500,000ns (500μs or 0.5ms)
  • SATA NAND SSD write – 3,000,000ns (3,000μs or 3ms)
  • Disk drive seek – 100,000,000ns (100,000μs or 100ms) 

The access times for DIMMs can involve operating system (OS) IO software stacks, while those for SSDs will have added latency from the interconnect type and device controller.

There are different ways to access an Optane DIMM and each mode has its own speed. Intel has not published the numbers but the relative speeds can be inferred from the access mode characteristics.

We’ll look at the different access modes and try to position them against one another. 

A complicating factor is that different parts of an Optane DIMM address space can be set apart and accessed in different modes. We’ll save that discussion for another article.

Optane SSD Access

The Optane SSD is accessed in the same way as any other NVMe SSD but returns read data or accepts write data faster than NVMe-connected NAND SSDs. There’s no need to say anything else and we can move straight on to the DIMMs.

Optane DIMM Access

An Optane DIMM can be accessed in five different ways, starting with either Memory Mode or App Direct Mode, also known as DAX for Direct Access. Memory Mode is block-addressable while DAX is byte-addressable.

DAX has three options: Raw Device Access, access via a File API, or Memory Access. The File API method has two sub-options: via a traditional file system or via a non-volatile memory-aware (NVM-aware) file system.

Memory Mode

In Memory Mode the Optane DIMM is paired with a DRAM cache and the host system gets to use the Optane DIMM’s capacity indirectly as memory, with the front-end cache providing DRAM speed for both reads and writes. Since Optane writes are 3x slower than Optane reads, this is important in keeping system speed high.

Optane DIMM capacity is cheaper than DRAM DIMM capacity, so this is a cost-effective way to increase the memory capacity of a server.

However, DRAM cache contents are lost if system power is lost. The host OS does not know which cache contents were written to the Optane DIMM from the DRAM cache, and so the entirety of the Optane DIMM’s data contents is viewed as unreliable.

So, although Optane is persistent memory, Memory Mode writes are not persistent, while all the DAX modes are persistent.

The DIMM application access modes, apart from Intel’s Memory Mode, are positioned in an SNIA Persistent Memory programming model diagram.

The other, application direct, modes involve application code being re-written to use Optane DIMMs.

App Direct Mode – Raw Device Access

In this mode the application program reads and writes directly to the Optane DIMM driver in the host OS, which, in turn, accesses the Optane DIMM. This is faster than going through a file system interface. It is not as direct, or therefore as fast, as Memory Access, in which application reads and writes go straight to the DIMM without any intervening software.

As we understand it, in Raw Device Mode the Optane DIMM address space is arranged into blocks of 512B or 4KB in size. Reads and writes are at the block level and pass through an NVDIMM driver. This mode works with current file systems.
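
As a sketch of what that looks like from an application, the following assumes a Linux NVDIMM namespace exposed as the block device /dev/pmem0 (the device name is our assumption). Reads pass through the kernel’s NVDIMM block driver in aligned block-sized units, just as with any other block device:

    #define _GNU_SOURCE   /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK 4096

    int main(void) {
        void *buf;

        /* O_DIRECT requires a block-aligned buffer. */
        if (posix_memalign(&buf, BLOCK, BLOCK) != 0)
            return 1;

        int fd = open("/dev/pmem0", O_RDONLY | O_DIRECT);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Read one 4KB block at offset 0, as with any block device. */
        ssize_t n = pread(fd, buf, BLOCK, 0);
        if (n < 0)
            perror("pread");
        else
            printf("read %zd bytes\n", n);

        close(fd);
        free(buf);
        return 0;
    }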

App Direct Mode – File System

In this mode the Optane DIMM address space is accessed by an application issuing file IO calls using a filesystem API. These are dealt with by the filesystem code which then talks to the NVDIMM driver and, via that, to the Optane DIMM. It takes time for the file system to do its work and thus this kind of access is slower than both raw device access and memory access.

App Direct Mode – NVM-aware File System

This modifies the file system access to involve an NVM-aware file system; pmem-aware in the SNIA diagram above. According to Handy, such NVM-aware file systems are designed to run faster than a traditional file system.

There is no benchmark data available to demonstrate this.

App Direct Mode – Memory Access

With this mode an application uses memory semantics (load and store instructions) to directly access the Optane DIMM’s address space, with no intervening entities getting in the way. This is the fastest possible way an application can use Optane.
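
Here is a minimal sketch of that load/store path, assuming a pre-created 4KB file on a DAX-mounted file system (the path is hypothetical) and an x86 CPU with the CLWB cache-line flush instruction; build with gcc -mclwb.

    #define _GNU_SOURCE   /* for MAP_SYNC and MAP_SHARED_VALIDATE */
    #include <fcntl.h>
    #include <immintrin.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define LEN 4096

    int main(void) {
        int fd = open("/mnt/pmem/data", O_RDWR);
        if (fd < 0)
            return 1;

        /* MAP_SYNC guarantees a true persistent-memory mapping, so no
         * msync() call is needed for durability. */
        char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Plain store instructions write the Optane DIMM directly... */
        const char *msg = "written with ordinary stores";
        size_t len = strlen(msg) + 1;
        memcpy(p, msg, len);

        /* ...then flush the touched cache lines and fence to persist. */
        for (size_t off = 0; off < len; off += 64)
            _mm_clwb(p + off);
        _mm_sfence();

        munmap(p, LEN);
        close(fd);
        return 0;
    }

PMDK’s libpmem wraps this mapping-and-flushing sequence in a friendlier API.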

A Handy table

Handy sums up the access methods in a table, which we have reformatted slightly:

Concerning the “Backward compatibility” row – Memory Mode operates with all legacy software, as do all of the App Direct Mode access types except direct memory access.

Developers can use Intel’s Persistent Memory Development Kit (PMDK) to dig deeper into Optane DIMM access.

My head hurts

Does Optane DIMM use have to be so complicated? The short answer is yes, because storage can be accessed at block level and at file level. As Optane combines both memory and storage attributes it offers both memory-level and storage-level accesses.

Other forms of storage-class memory will, we understand, offer the same broad group of access modes, unless their supplier artificially limits them in some way.

Hopefully, any supplier offering SCM apart from Intel does so using open interfaces or, at least, provides interfaces for both AMD and ARM. Micron, for example, could enable QuantX 3D XPoint DIMMs to be usable on both AMD and Intel x86 processors as well as ARM CPUs.

That should prompt an immediate Intel Optane DIMM price cut, and also widen the SCM developer ecosystem and make more SCM-capable software available.

Stung by Trump trade ban, China doubles down on DRAM market

China’s Tsinghua Unigroup today said it is entering the DRAM business – and Foxconn looks set to follow suit, according to analysts.

This is against a backdrop of continuing US-China trade tensions, including restrictions imposed by the US Department of Commerce on Huawei that bar US firms trading with the Chinese tech giant.

According to Nikkei Asian Review, China aims to produce “70% of its own chips by 2025. The current figure, thought to be between 10% to 30%, leaves Chinese companies reliant on foreign chipmakers”.

So how are they doing so far?

Yangtze Memory Technology, a Tsinghua subsidiary, is developing 3D NAND with 128 layers in mind.

And the market research firm DRAMeXchange today noted China’s Innotron Memory attended the GSA Memory+ Conference in May and is expected to mass produce 8Gb DRAM product by this year-end. However, for self-sufficiency, China needs more than a single Innotron fab, DRAMeXchange said.

These are the bright spots in China’s bid to build national chip champions. Otherwise, the road to “Made in China 2025” has been rocky.

All is fair in trade and war

In September 2017, in an analysis of three emerging Chinese memory and NAND makers, I wrote: “We wonder if domestic American DRAM and NAND suppliers might start international fair trade spats if these Chinese suppliers entered the US market.”

It didn’t take long. In December 2017, Micron, the US’s only big memory maker, filed suit against JHICC, a Chinese maker of DRAM for specialist consumer markets, alleging theft of trade secrets and intellectual property.

The US government took up cudgels on behalf of Micron in September 2018, banning US exports of chipmaking equipment to JHICC. In November the US Department of Justice filed an indictment, alleging theft of trade secrets by JHICC, its Taiwanese partner United Microelectronics Corp, and sundry individuals.

This spooked UMC, a major contract chip maker for US vendors, which ended its co-operation with JHICC in January 2019.

Your tasty storage collection of files, flash and the cloud

The themes of this collection of storage news bytes revolve around backup, parallel file access, the public cloud and flash, meaning data at scale and fast data access.

We also have a couple of in-memory supplier announcements, and Formulus Black uses its dynamic bit marker tech to accelerate data analytics queries by up to 70 times. There’s more, so read on.

IBM extends Spectrum Scale

Spectrum Scale is IBM’s parallel access, scale-out file system for enterprise and high-performance computing. Spectrum Scale Erasure Code Edition provides Spectrum Scale on the customer’s choice of commodity hardware with network-dispersed Spectrum Scale RAID.

It supports four different erasure codes: 4+2P, 4+3P, 8+2P, and 8+3P in addition to 3 and 4 way replication. Each Spectrum Scale Erasure Code Edition recovery group can have 4-32 storage nodes, and there can be up to 128 storage nodes in a Spectrum Scale cluster. Documentation, here.
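
As a quick illustration of the trade-off (our arithmetic, not IBM’s): a d+pP code protects d data strips with p parity strips, so the usable fraction of raw capacity is d/(d+p) and a stripe survives p strip failures.

    #include <stdio.h>

    /* Illustrative arithmetic: usable fraction of raw capacity for each
     * supported erasure code, compared with 3-way replication. */
    int main(void) {
        const int codes[][2] = { {4, 2}, {4, 3}, {8, 2}, {8, 3} };

        for (int i = 0; i < 4; i++) {
            int d = codes[i][0], p = codes[i][1];
            printf("%d+%dP: %.0f%% usable, survives %d strip failures\n",
                   d, p, 100.0 * d / (d + p), p);
        }
        printf("3-way replication: %.0f%% usable, survives 2 copy losses\n",
               100.0 / 3);
        return 0;
    }

So 8+3P, for example, gives roughly 73 per cent usable capacity while tolerating three strip failures, against 33 per cent usable for 3-way replication.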

Panzura releases Freedom 8

Cloud storage gateway and file sync and sharer Panzura has announced its Freedom 8 multi-cloud file services platform.

It has Cloud Mirroring to mirror data to two or more clouds, which protects against cloud outages and accidental deletion of cloud buckets, and doubles cloud storage availability. There is automated failover for local and global high-availability (HA) filers to keep services running without disruption.

The software is available from mid-July onwards.

StorCentric acquires Retrospect

StorCentric, the private-equity backed company consisting of Drobo and Nexsan, has added backup to the portfolio with acquisition of Retrospect.

Retrospect has a bit of history. It was originally part of Dantz Corporation, formed in 1984; EMC bought Dantz, and with it Retrospect, its consumer/prosumer/small and medium business backup product, in 2004. In 2010, EMC sold Retrospect to Sonic Solutions, which rebranded it as Roxio Retrospect and was promptly bought by Rovi Corporation.

Rovi spun off Retrospect as a privately-owned company in 2011 and the product continued being developed and sold.

It can back up a multitude of source systems and integrates with twenty or so cloud storage providers, including Amazon S3, Google Cloud Storage, and Dropbox. StorCentric says there are more than 500,000 Retrospect customers in 100+ countries, and Drobo customers will now be able to use it.

Retrospect will operate as a wholly-owned subsidiary of StorCentric. A company blog states: “With StorCentric’s resources, we’ll be able to push Retrospect Backup forward even further with new features and support for more platforms, and our customers and partners will continue to receive the same top-notch service from our excellent Sales and Support teams.

“We’re really excited to be part of the StorCentric family, so we can continue improving Retrospect Backup.”

Supermicro lays down NVMe rulers

Supermicro has announced all-flash BigTwin and 1U Petascale Systems configurations using the EDSFF (Enterprise and Datacenter Storage Form Factor) ruler format NVMe flash drives. These drives have, it claims, 6 times more throughput and a 7 times latency reduction over non-NVMe flash storage.

It says EDSFF E1.L drives have 16TB raw flash capacity now, with capacity doubling in the coming quarters. The E1.S format drives will be 4TB raw when available, doubling in the next iteration.

Supermicro’s EDSFF-supporting systems.

The 1U Petascale E1.L supports 32 x E1.L drives; the Petascale E1.S 32 x E1.S drives; and the Petascale JBOF supports 32 E1.L drives for storage expansion. 
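
The “Petascale” branding is easy to sanity-check against the quoted bay counts and drive capacities (illustrative arithmetic on raw capacity, before any formatting or protection overhead):

    #include <stdio.h>

    /* Sanity-checking the "Petascale" name: 32 E1.L bays at 16TB per
     * drive today, with capacity expected to double in a later iteration. */
    int main(void) {
        int bays = 32, e1l_tb = 16;

        printf("today: %d x %dTB = %dTB raw per 1U\n",
               bays, e1l_tb, bays * e1l_tb);      /* 512TB */
        printf("after doubling: %dTB (~1PB) raw per 1U\n",
               bays * e1l_tb * 2);                /* 1,024TB */
        return 0;
    }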

The BigTwin 2U four-node system offers the highest performance (IOPS/GB) with ten E1.S drives plus two SATA M.2 per node.

These systems are available now.

Shorts

AOMEI Backupper has been upgraded to V5.0 with a brand new interface.

Cohesity has run a UK survey with findings showing mass data fragmentation is a barrier to successful public cloud data storage. It says removing fragmentation can have wide-ranging benefits and, of course, its software does precisely that.

Druva has announced its Compass partner program with expanded enablement resources, an accreditation curriculum, and a streamlined sales process. The accreditation process involves training to align to certifications with ecosystem providers like AWS and VMware. The Druva Compass program is immediately available to partners in North America and EMEA and will be available globally later this year. 

Elastifile, which provides scalable file storage, has announced a provider plugin for HashiCorp Terraform supporting Elastifile Cloud File Service, its managed file storage service on Google Cloud. 

EnterpriseDB, a Postgres company with 4,000 enterprise customers, has been acquired by private equity firm Great Hill Partners. Financial terms of the private transaction were not disclosed. The announcement says EnterpriseDB has been growing over 40 per cent year over year and has experienced 37 consecutive quarters of subscription growth.

Formulus Black has partnered with data analytics company Looker to increase query performance up to 70x without database migrations or changes. The Looker software uses Formulus Black’s Forsa software to run at memory channel speeds. 

GigaSpaces, the provider of InsightEdge in-memory real-time analytics processing, has teamed up with Tableau, an analytics platform, to help enterprises accelerate customized BI visualizations on their operational data.

GridGain Systems, a provider of enterprise-grade in-memory computing offerings based on Apache Ignite, has announced new GridGain Developer Bundles, which include Support, Consulting and Training for GridGain Community or Enterprise Edition. 

IBM Spectrum Archive EE version 1.3.0.3 has been released with support for Spectrum Scale 5.0.3 and RHEL 7.6. It allows all tape drives to be zoned to all Spectrum Archive nodes and tolerates clusters configured with separate admin networks. The eeadm migrate function allows migration of file lists composed with the find command and provided through STDIN; there is no need to run a LIST policy first. Find more information about the changes in 1.3.0.3 here.

ObjectiveFS v6.3 has been released, with a memory allocator that is more robust in low memory situations and two new mount options: nomem and retry. The nomem mount option selects mount behaviour when out of memory and the retry mount option retries the connection to an object store on start up. FUSE for macOS has been updated to version 3.9.2. Release notes here.

Oracle has announced the availability of its Oracle Autonomous Database Dedicated service, which it claims provides customers with the highest levels of security, reliability, and control for any class of database workload. The service provides a customisable private database cloud running on dedicated Exadata infrastructure in the Oracle Cloud. Oracle says it uses machine learning to provide self-driving, self-repairing, and self-securing capabilities.

Pavilion Data Systems’ Hyper-Parallel Flash Array is now part of the IBM Global Solutions Directory, meaning it can be used with Spectrum Scale, IBM’s parallel access, scale-out file system. The combination gives faster access to data, with performance and scalability increases. See video here.

Snowflake, the cloud data warehouse firm, has added ISO/IEC 27001:2013 certification to its list of security capabilities.

According to Growjo, Vast Data‘s revenue is currently $11.5m per year and it has 68 employees. Total funding is $80m and it grew its employee count by 58 per cent last year.

Data replicator WANdisco has secured a $750,000 contract for its Fusion platform with an unnamed Chinese phone maker. This is the second deal in the region won by WANdisco’s direct sales channel this year.

What’s the XPoint? SK hynix preps storage-class memory competitor

SK hynix is developing storage-class memory that will compete with 3D XPoint.

The Korean flash fabber signalled its intentions with a paper presented by company researchers last December at the International Electron Devices Meeting held in San Francisco. (Read The Next Platform’s recent write-up.)

The paper has the catchy title “High-Performance, Cost-Effective 2z nm Two-Deck Crosspoint Memory Integrated by Self-Align Scheme for 128 Gb SCM”. A public abstract and some diagrams are available.

It’s a phase we are going through

The technology involves a 2-layer (2-deck) cross point memory using phase-change materials integrated with a chalcogenide selector device. Phase change means the material changes its state, generally due to an electrical current, from crystalline to amorphous and back again, with consequent changes in its resistance.

Two differing resistance levels are used to signal binary one and zero.

The cells were fabricated using self-aligned processes. A diagram in the paper abstract shows the strong resemblance to Intel’s 3D XPoint technology:

SK hynix diagram of its XPoint self-aligned process integration scheme: (a) cell stack material deposition; (b) after self-aligned WL (word line) patterning; (c) ILD (interlayer dielectric) deposition, CMP (chemical mechanical polishing) and BL (bit line) deposition; and (d) self-aligned BL patterning. TE, ME and BE are top, middle and base electrodes.
Intel 3D XPoint diagram.


The research device had a read latency of <100 ns in a 16 Mb test array, which compares well to Intel’s 3D XPoint 350ns read latency.

According to the SK hynix researchers, this latency makes the technology suitable for use in a 128Gb storage-class memory chip, made from 16 banks of cells.

SK hynix paper floor plan diagram for a 128Gb XPoint die.

Are we there yet?

We now have five storage-class memory supplier efforts:

  • Intel – Optane – 3D XPoint
  • Micron – QuantX – 3D XPoint
  • Samsung – Z-NAND
  • SK hynix – 3D XPoint-like technology
  • Western Digital – ReRAM

Assuming a minimum of two years between publishing a research paper and a product appearing, SK hynix and Micron SCM products could be available as soon as 2021.

13-minute power cut blacks out global flash chip supply

A power failure in Yokkaichi, Japan has thrown Toshiba and Western Digital’s flash supply into chaos. The temporary loss of manufacturing capacity will reduce global flash supplies by about 25 per cent between August and October and this in turn may fuel short-term price rises of 5-10 per cent.

On June 15 a 13-minute outage hit Japan’s Yokkaichi region, where Toshiba Memory Corporation (TMC), WD’s joint venture partner, produces flash chips.

The blackout affected process machinery, which is still not working properly. Full production will resume by mid-July, according to a Reuters report.

TMC Yokkaichi plant

WD anticipates a reduction of flash wafer availability of approximately six exabytes, most of which will hit the August-October quarter (its first quarter of fiscal year 2020).

Gone in a flash

Aaron Rakers, a senior analyst at Wells Fargo, informed his clients that flash wafer process/production time is more than 10 weeks. That is why so much flash chip capacity, the ~6EB, has been lost.

Rakers estimates WD shipped about 11EB of NAND capacity in 2019’s February-April quarter (WDC’s Q3 fy2019), with 12.3EB shipping in the 2019 May-June quarter.

Wells Fargo’s industry checks suggest “WD had communicated indications of price increases following the Yokkaichi outage – we think a +5-10 per cent price impact could be considered.”

In other words WD will ship half the flash bits it expected to ship in the Aug-Oct quarter and could lose up to half its expected flash chip revenues for the quarter, subject, of course, to any temporary price rises.

Toshiba is even worse affected. Rakers thinks there is a 40/60 bit production split between WDC and Toshiba. If correct, this means Toshiba will lose ~9EB because of the power cut.
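
The ~9EB figure follows directly from that split, as this little checking calculation shows:

    #include <stdio.h>

    /* Checking the article's arithmetic: if WD's ~6EB loss represents
     * 40 per cent of the lost wafer output, Toshiba's 60 per cent
     * share comes to ~9EB. */
    int main(void) {
        double wd_loss_eb = 6.0;
        double total_eb = wd_loss_eb / 0.40;   /* ~15EB lost in total */

        printf("Toshiba loss: ~%.0fEB\n", total_eb * 0.60);
        return 0;
    }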

In response to the Yokkaichi blackout, TrendForce has adjusted NAND price trends for the third 2019 quarter. The market research firm thinks the downswing in prices of 3D NAND products may moderate in the quarter, and the supply of 2D NAND, mostly used in specialist storage applications, will tighten up noticeably. Overall it thinks that NAND contract prices will trend flat or drop slightly.

Quantum touts cloud-based subscription service

Quantum is adding cloud-based device management and product access via subscription.

Called Distributed Cloud Services, the software runs as a central hub to which Quantum products connect and send log files and other telemetry data about their environment.

This is a similar scheme to HPE’s (Nimble-based) InfoSight system and Quantum is following Pure Storage and many others in providing analytics derived from centrally-stored and analysed telemetry data.

Quantum said its services team can proactively manage the customer’s Quantum products, either as an operational service or on a pay-per-use basis.

Cloud-based analytics software is available now across most of Quantum’s product lines; customers can start sending data today and access their content via a web-based portal at no charge.

Quantum Operational Services and Storage-as-a-Service Offerings are generally available now, with tape, DXi deduping backup target appliance and StorNext file management available through subscription.

And now we are 10. Scality RINGs the changes

At a press briefing yesterday to mark the company’s tenth birthday, Scality co-founder and CEO Jérôme Lecat talked about competition, its roadmap, and a new version of its RING object storage software.

In no particular order, let’s kick off with Scality’s competitive landscape – as the company sees it. According to Lecat, Dell EMC is Scality’s premier competitor, with NetApp’s StorageGRID in second place. Scality encounters NetApp about half the time it meets Dell EMC.

The next two competitors are IBM Cloud Object Store and companies touting solutions based on Ceph. Collectively, Scality encounters them half the time it competes against NetApp.

Roadmap

Scality is testing all-flash RINGs with QLC (4 bits/cell) flash in mind. NAND-based RINGs would need less electricity than disk-based counterparts and may be more reliable in a hardware sense.

It is working with HPE to integrate RING with Infosight, HPE’s analytics and management platform. HPE has also launched a tiered AI Data Node with WekaIO software installed for high-speed data access, along with Scality RING software for longer term data storage.

Read a reference config document for the AI Data Node here.

RING8

Scality has updated its RING object storage, adding management features across multiple RINGs and public clouds, and new API support.

RING8, the eighth generation RING, has:

  • Improved security with added role-based access control and encryption support,
  • Enhanced multi-tenancy for service providers,
  • More AWS S3 API support and support for legacy NFS v4,
  • eXtended Data Management (XDM) and mobility across multiple edge and core RINGs, and public clouds, with lifecycle tiering and Zenko data orchestrator integration.

Details can be found in a datasheet downloadable here (registration needed).

An analyst at the briefing suggested Scality is making pre-emptive moves in case Amazon produced an on-premises S3 object storage product.

Edge and Core RINGs

An edge RING site will be a smaller RING, say 200TB, with lower durability, such as a 9:3 erasure coding system. It will be used in remote office/branch office and embedded environments with large data requirements. Scality calls this a service edge. We might think of them as RINGlets.

The edge RINGs replicate their data to a central and much larger RING with higher durability, such as 7:5 erasure coding. This can withstand a higher degree of hardware component failure.


Dell-Nutanix duopoly cements grip on HCI market

Dell and Nutanix together account for nearly three quarters of hyperconverged (HCI) systems revenues. HCI revenues have surpassed those of converged systems, and the integrated platforms category is in decline.

This is revealed by IDC’s Q1 2019 Worldwide Quarterly Converged Systems Tracker. The tech analyst firm organises the market into three categories:

  • Certified reference systems and integrated infrastructure – pre-integrated, vendor-certified systems containing server, storage, networking, and basic element/systems management software.
  • Integrated platforms – integrated systems sold with pre-integrated packaged software and customised system engineering; think Oracle Exadata.
  • Hyperconverged systems – collapse core storage and compute functionality into a single, highly virtualized system with a scale-out architecture and all compute and storage functions coming from the same x86 server resources.

The category revenue numbers for the quarter are:

  • Certified reference systems & integrated infrastructure – $1.4bn (up 9% y-o-y) – 36.6% of market
  • Integrated platforms – $556m (down 13.3% y-o-y) – 14.8% of market
  • Hyperconverged systems – $1.8bn (up 46.7% y-o-y) – 48.6% of market
  • Total – $3.75bn (y-o-y growth not revealed)

In the CI category Dell said its revenue share was 55.3 per cent, comprising Dell EMC VxBlock Systems, Ready Solutions and Ready Stack.

IDC publishes top supplier revenue numbers for the HCI market, showing branded systems. It also divvies up the revenues by software supplier.

Blocks & Files highlighting.

Dell has more than twice the revenue of second-placed Nutanix; and both dwarf HPE. The market grew 46.7 per cent, with Nutanix and HPE increasing revenues at less than that rate. But Nutanix has been moving to a subscription, software-only business model. It also supplies software to run on other vendors’ hardware. So like VMware, it gains more market share when the numbers are cut by HCI software supplier.

Blocks & Files highlighting.

VMware and Nutanix dominate the software market, with 70 per cent combined share. VMware revenues grew 36.3 per cent year on year, against overall market growth of 46.7 per cent, with Nutanix, HPE and the rest-of-market category growing at less than trend. Also-rans in IDC’s Rest of Market category include Cisco, Datrium, Maxta, NetApp, Pivot3 and Scale Computing.

Wells Fargo senior analyst Aaron Rakers provided data for NetApp and Cisco:

  • Cisco’s HyperFlex revenue was ~$82m, just a shade behind HPE, and up 37 per cent – again similar to HPE’s 36.2 per cent growth rate.
  • NetApp’s Elements HCI revenue was estimated at ~$46M, up 128.4 per cent, making NetApp the fastest-growing supplier in this group.

VMware’s dominance increased over the year while Nutanix market share eased from 32.2 per cent to 28.9 per cent. Nevertheless Nutanix revenues are more than six times higher than third-placed HPE.

IDC has split out a separate HCI category, calling it Disaggregated HCI: systems designed from the ground up to support only separate compute and storage nodes. It doesn’t publicly reveal any numbers in this segment but says an example supplier is NetApp, with its Elements HCI.

Blocks & Files would add Datrium and HPE’s latest dHCI Nimble array to this category. IDC doesn’t publicly reveal the overall size of this niche, its growth rate or the supplier shares.

Qumulo adds fatter drives to flash, hybrid and archive systems

Qumulo has added larger capacity models to its all-flash, hybrid and archive systems along with a software update that includes real-time analytics.

The three new Qumulo products.

The all-flash P series gets a new P184T model positioned above the existing top-of-the-line P92T. It has 184TB of raw capacity, consisting of 24 x 7.68TB NVMe SSDs instead of the P92T’s 24 x 3.84TB drives.

P184 product added to all-flash P series

The higher-capacity QC class offers a mix of flash speed and disk capacity. The QC24 and QC40 come in a 1U chassis while the larger QC104, 208, 260 and 360 are built in a 4U enclosure. A new C168T model uses 12 x 14TB disk drives instead of the C72T’s 12 x 6TB drives.

C168T product added to hybrid C/QC products.

It also has 3.8TB of flash – pretty much double the C72’s 1.92TB.

And then there were two

Qumulo’s nearline archive filers are the K series. A new K168T slots in above the existing K144T, with 168TB of capacity (the K144T has 144TB). The increase is achieved by using 14TB disk drives instead of the K144T’s 12TB drives.

K168T nearline archive product announced.

A Qumulo software update increases write performance by up to 40 per cent on its all-flash P series.

The real-time analytics shows how much capacity is used by storage, snapshots, and metadata. It also reveals capacity usage trends and makes usage spikes visible. A security audit feature tracks which users accessed files and what they did during the access.

The Qumulo C-168T is available now, the K-168T is available July 9, and the P-184T is available on July 23. The new software features and functionality are in v2.12.4 of Qumulo’s software, which is available now.