
HPE GreenLake taps VAST Data for file storage

HPE has added block and file storage services to its GreenLake subscription program, based on new Alletra Storage MP hardware, with the file service itself based on VAST Data software.

Alletra Storage MP is a multi-protocol system, meaning block or file, and joins the Alletra 6000 (Nimble-based) and 9000 (Primera-based) block storage arrays. HPE also provides the Alletra 5000 hybrid SSD+HDD arrays aimed at cost-efficient performance for a mix of primary workloads and secondary backup and recovery. Also in the family is the Alletra 4000 line, rebranded Apollo servers for performance-centric workloads. HPE’s announcement additionally includes backup and recovery and Zerto-based disaster recovery as GreenLake services.

Tom Black, HPE’s EVP and GM of storage, said: “The rapid increase in the volume and complexity of data has forced organizations to manage it all with a costly combination of siloed storage solutions. The new HPE GreenLake data services and expanded HPE Alletra innovations make it easier and more economical to manage multiple types of data, storage protocols, and workloads.”

The big event in this STaaS (Storage-as-a-Service) announcement is the addition of VAST Data-based file services to GreenLake and the use of Alletra MP hardware for both scale-out file and block services so that customers can converge file and block services on a single hardware platform. But the file storage and block storage are two separate software environments. It is not possible to have a unified file+block Alletra MP system.

Block storage was first added to GreenLake in June last year, as was backup and recovery, and also Zerto-based disaster recovery in HPE’s GreenLake for Private Cloud Enterprise announcement.

The existing GreenLake block storage services continue and we now have:

  • GreenLake mission-critical block storage – Alletra 9000 
  • GreenLake business-critical block storage – Alletra 6000 
  • GreenLake general purpose block storage – Alletra 5000 
  • GreenLake scale-out block storage – Alletra MP – mission-critical storage with mid-range economics 

Before today, GreenLake file services were supplied through a partnership deal with Qumulo. Now we have the VAST Data-powered GreenLake file services as well, running on Alletra Storage MP systems, which are also used for the scale-out block services.

File persona Alletra MP

Alletra MP, formally Alletra Storage MP, in its file persona uses VAST Data’s disaggregated shared everything (DASE) architecture and software, with one or more 2RU controller chassis (2 CPU nodes per chassis) talking across a 100GbitE NVMe fabric to one or more 2RU capacity (JBOF) nodes. These come with up to 20 x NVMe SSDs (7.68TB or 15.36TB) and four storage-class memory drives (800GB or 1.6TB, encrypted), which we understand are Kioxia XL-Flash drives. They provide fast metadata operations.

Controller nodes and capacity nodes can be scaled independently, providing separate performance and capacity scaling. Aruba switches interconnect the two classes of hardware.

HPE image showing Alletra Storage MP Compute, 2 x Aruba 8325 switches and Alletra Storage MP JBOF

Shilpi Srivastava, VP Storage and Data Services Marketing at HPE, told us: “HPE’s GreenLake for File Storage uses a version of VAST Data’s software that’s built for HPE GreenLake cloud platform. While we leverage VAST Data software as a key component of our solution, the product runs on HPE’s differentiated HPE Alletra Storage MP hardware and is managed through HPE GreenLake cloud platform.

“For HPE GreenLake for File Storage, the compute controller nodes do not store metadata. The metadata is stored in the SCM layer on the JBOFs. That is possible with the Disaggregated Shared Everything architecture of the software that VAST provides for HPE GreenLake for File Storage.”

This software provides inline similarity data reduction (deduplication and compression) even for previously compacted data.
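
To make that concrete, here is a minimal, hypothetical Python sketch of the general idea behind similarity-based reduction: group incoming blocks by a coarse similarity fingerprint and store only a compressed delta against a reference block. The fingerprint scheme, block size, and XOR delta are invented for illustration; this is not VAST’s actual algorithm.

```python
import hashlib
import zlib

BLOCK = 4096  # illustrative block size; real systems pick their own

def similarity_hash(block: bytes, stride: int = 64) -> bytes:
    # Fingerprint a sparse sample of the block: two blocks that differ only
    # at unsampled positions collide, unlike with a full cryptographic hash.
    # (Real similarity hashing uses richer sketches; this is a stand-in.)
    return hashlib.sha1(block[::stride]).digest()[:8]

class SimilarityReducer:
    def __init__(self):
        self.refs = {}    # fingerprint -> reference block kept in full
        self.store = []   # what actually gets persisted

    def write(self, block: bytes):
        fp = similarity_hash(block)
        ref = self.refs.get(fp)
        if ref == block:                         # exact duplicate: dedupe
            self.store.append(("dup", fp))
        elif ref is not None:                    # similar: store a delta
            delta = bytes(a ^ b for a, b in zip(block, ref))
            self.store.append(("delta", fp, zlib.compress(delta)))
        else:                                    # first of its kind
            self.refs[fp] = block
            self.store.append(("ref", fp, zlib.compress(block)))
```

An XOR delta between two near-identical blocks is mostly zero bytes and compresses to almost nothing, so blocks only need to be similar, not identical, to be reduced.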

Alletra Storage MP file mode spec table

HPE suggests GreenLake for File Storage can be applied to cloud-native workloads (Kubernetes, OpenShift, Anthos), BI and ML frameworks (TensorFlow, PyTorch, H2O.ai and Caffe2) and petabyte-scale data lakes (Spark, Spark streaming, Hadoop, and Python).

Block persona Alletra MP

Block persona Alletra MP uses the same basic 2RU storage chassis, which hosts two 8 or 16-core CPU controllers (nodes) and 8, 12 or 24 small form factor encrypted TLC NVMe drives (1.92TB, 3.84TB, 7.68TB, or 15.36TB). It features a massively parallel, multi-node, all-active platform design with a minimum of two nodes. There is a new block mode OS.

Front and rear views of block mode Alletra MP chassis with/without bezel and showing 24 x 3.84TB NVMe SSDs in three groups of eight

Srivastava said: “For the new block storage offering, the software OS is an enhanced version of the OS from Alletra 9000 (Primera) that blends the data reduction tech previously in the Nimble Storage OS. The combination of the two software capabilities backed by Alletra Storage MP enables GreenLake for Block Storage to offer the availability, performance and scalability of mission-critical storage with mid-range economics.” 

Block mode Alletra Storage MP feature table. Effective capacity assumes 4:1 data compaction ratio (thin provisioning, deduplication, compression, and copy technologies) in a RAID 6 (10+2) configuration. Max raw capacity uses 15.36TB drives

HPE says GreenLake for Block Storage is the industry’s first disaggregated, scale-out block storage with a 100 percent data availability guarantee. The company claims it offers better price/performance than GreenLake mission-critical block storage but with the same always-on, always-fast architecture.

HPE’s Patrick Osborne, SVP and GM for Cloud and Data Infrastructure Platforms, was asked about a GreenLake for Object Storage service in a briefing. He said this “is an opportunity for tomorrow.” VAST Data already supports S3.

Capex and opex options

Srivastava tells us: “With this launch, HPE is for the first time adding the flexibility of capex purchasing, in addition to the opex buying options to HPE GreenLake… To get the new HPE Alletra Storage MP, customers must first purchase the new HPE GreenLake for Block Storage or HPE GreenLake for File Storage. With that, they get HPE Alletra Storage MP today that they own along with HPE GreenLake for Block Storage or HPE GreenLake for File Storage subscription services. HPE GreenLake for Block Storage based on HPE Alletra Storage MP and HPE GreenLake for File Storage will be available via opex options in the near future.”

She emphasized: “It’s important to recognize that HPE GreenLake is first and foremost a cloud platform that offers a cloud operational experience for customers to orchestrate and manage all of their data services.” 

Competition

The competition for the Alletra MP is twofold: file storage systems on the one hand and block storage systems on the other. HPE has not released any performance data for Alletra MP in either file or block mode, contenting itself so far with saying it has low latency and high performance. Describing Alletra MP as suitable for mission-critical storage with mid-range economics suggests it will need careful positioning against the Alletra 5000, 6000 and 9000 block storage systems if it is not to cannibalize their sales. Making it a scale-out block storage system will help with differentiation, but performance and price/performance stats will be needed as well.

A quick list of block mode competition will include Dell (PowerStore, PowerMax), Hitachi Vantara, IBM (FlashSystem), NetApp (ONTAP), and Pure Storage, along with StorONE. File mode competition will involve Dell (PowerScale), IBM (Spectrum Scale), NetApp, and Qumulo. It may also include Weka.

Pure Storage’s Prakash Darji, GM of the Digital Experience business unit, said: “The market for scale-out block wasn’t growing or all that large,” and Pure hasn’t optimized its products for that.

He observed: “If you’re dependent on third party IP to go ahead and get changes to deliver your service, I don’t know any SaaS company on the planet that’s been successful with that strategy.” Pure introduced a unified file and block FlashArray after it bought Compuverde in 2019. File access protocols were added with the v6.0 Purity operating system release in June 2020, after initial NFS and SMB support arrived with v5.0 Purity in 2017. It can say it has a single block+file silo, like NetApp with ONTAP and Dell with PowerStore, but unlike HPE.

Effect on VAST

This endorsement of VAST Data by HPE is a huge win. In effect HPE is saying VAST Data is a major enterprise-class supplier of file services. HPE itself suddenly becomes a more serious filesystem storage player with VAST a first-party supplier to GreenLake, unlike Qumulo. This catapults VAST from being a new startup, albeit a fast-growing one, into a supplier in the same enterprise class as HPE, and therefore worthy to compete for enterprise, mission-critical, file-based workloads against Dell, IBM, NetApp, Pure Storage, and Qumulo.

We think VAST could see a significant upsurge in its revenue with this HPE GreenLake deal.

Bootnote

The phrase “File persona” bears no relation to the File Persona offering on HPE’s 3PAR arrays.

Quantum enters storage software with Myriad

Quantum has written a unified, scale-out file and object storage software stack called Myriad, designing it for flash drives and using Kubernetes-orchestrated microservices to lower latency and increase parallelism.

The data protector and file and object workflow provider says Myriad is hardware-agnostic and built to handle trillions of files and objects as enterprises massively increase their unstructured data workloads over the next decades. Quantum joins Pure Storage, StorONE and VAST Data in having developed flash-focused storage software with no legacy HDD-based underpinnings.

Brian Pawlowski, Quantum

Brian “Beepy” Pawlowski, Quantum’s chief development officer, said: “To keep pace with data growth, the industry has ‘thrown hardware’ at the problem… We took a totally different approach with Myriad, and the result is the architecture I’ve wanted to build for 20 years. Myriad is incredibly simple, incredibly adaptable storage software for an unpredictable future.”

Myriad is suited for emerging workloads that require more performance and more scale than before, we’re told. A Quantum technical paper says that “for decades the bottleneck in every system design has been the HDD-based storage. Software didn’t have to be highly optimized, it just had to run faster than the [disk] storage. NVMe and RDMA changed that. To fully take advantage of NVMe flash requires software designed for parallelism and low latency end-to-end. Simply bolting some NVMe flash into an architecture designed for spinning disks is a waste of time.”

Software

Myriad’s software stack has four layers:

Myriad software layers. Items with an asterisk are roadmap items

Using NFS v4, incoming clients access a data services layer providing file and object access, snapshots, clones, always-on deduplication and compression and, in the future, replication, a data catalog and analytics. These services use a POSIX-compliant filesystem layer which is fully distributed and can be composed (instantiated) per user and per application.

Linux’s kernel VFS (Virtual File System) layer enables new filesystems to be “plugged in” to the operating system in a standard way and enables multiple filesystems to co-exist in a unified namespace. Applications talk to VFS, which issues open, read and write commands to the appropriate filesystem.

Underlying this is a key/value (KV) store using redirect-on-write technology, not overwriting, which enables the snapshot and clone features. The KV store is lock-free, saving computational overhead and client processing time, and is self-balancing and self-healing. Files are stored as whole objects or, if large, split into smaller separate objects. The KV store uses dynamic erasure coding (EC) to protect against drive failures.

Metadata is stored in the KV store as well. Generally metadata objects will not be deduplicated or compressed but users’ files will be reduced in size through dedupe and compression. The KV store provides transaction support needed for the POSIX-compliant filesystem.
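
As an illustration of why redirect-on-write makes those snapshot and clone features cheap, here is a toy Python sketch: an append-only value log plus a key-to-location index. It is a hypothetical illustration of the principle only and implies nothing about Myriad’s actual on-flash layout.

```python
class ROWStore:
    """Toy redirect-on-write key/value store. Updates never overwrite in
    place, so a snapshot is just a frozen copy of the key-to-location map."""

    def __init__(self):
        self.log = []        # append-only value log (stand-in for flash zones)
        self.index = {}      # live key -> offset in the log
        self.snapshots = {}  # snapshot name -> frozen index

    def put(self, key, value):
        self.log.append(value)                # redirect: write to a new slot
        self.index[key] = len(self.log) - 1   # then repoint the key

    def get(self, key, snapshot=None):
        idx = self.snapshots[snapshot] if snapshot else self.index
        return self.log[idx[key]]

    def snapshot(self, name):
        self.snapshots[name] = dict(self.index)  # copies the map, not the data

store = ROWStore()
store.put("k", b"v1")
store.snapshot("before")
store.put("k", b"v2")                  # the old b"v1" is never overwritten
assert store.get("k") == b"v2"
assert store.get("k", "before") == b"v1"
```

Because an update writes to a new location and merely repoints the index, a snapshot freezes only the small index, and no data blocks need copying.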

The access protocols will be expanded to NFS v3, SMB, S3, GPU-Direct and a proprietary Quantum client in the future.

These four software layers run on the Myriad data store, which itself is built from three types of intelligent nodes interconnected by a fabric.

Hardware 

Quantum Myriad architecture

The Myriad architecture has four components, as shown in the diagram above and starting from the top:

  • Load Balancers to connect the Myriad data store to a customer’s network. These balance traffic inbound to the cluster, as well as traffic within the cluster. 
  • 100GbitE network fabric to interconnect the Myriad HW/SW component systems to each other and provide client access.
  • NVMe storage nodes with shared-nothing architecture; processors and NVMe drives. These nodes run Myriad software and services. Every storage node has access to all NVMe drives across the system essentially as if they were local, thanks to RDMA. Incoming writes are load balanced across the storage nodes. Every node can write to all the NVMe drives, distributing EC chunks across the cluster for high resiliency. 
  • Deployment node which looks after cluster deployment and maintenance, including initial installation, capacity expansion, and software upgrades. Think of it as the admin box.

The servers and switches involved use COTS (Commercial Off The Shelf) hardware, x86-based in the case of the servers.

Drive and node failure

The dynamic erasure coding provides protection against two drive or node failures: a +2 safety level. The EC spread matches the node count, being the number of data segments plus two parity segments. Data is written into zones, which are parts of a flash drive, and zones across multiple drives and nodes are grouped into zone sets. In a five-node system there will be a 3+2 EC spread scheme while in, say, a nine-node system there will be a 7+2 EC spread.

Myriad +2 EC safety level scheme

As the number of nodes increases, say from 9 to 11, new zone sets are written using the new 9+2 scheme. All incoming data is stored in the new 9+2 zone sets. In the background, and as a low-priority task, all the existing 7+2 zone sets are converted to 9+2. The reverse happens if the node count decreases. If a drive or node fails, its data contents are rebuilt using the surviving zone sets.
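
The restriping arithmetic can be sketched in a few lines of Python. This is a hypothetical model assuming one chunk per node, as in the 5-node 3+2 and 9-node 7+2 examples above; it is not Quantum’s code.

```python
from dataclasses import dataclass

PARITY = 2  # Myriad's current fixed +2 safety level

def ec_scheme(nodes: int) -> tuple:
    # Data and parity chunks per zone set, one chunk per node assumed
    assert nodes > PARITY
    return nodes - PARITY, PARITY

@dataclass
class ZoneSet:
    data: int
    parity: int

class Cluster:
    def __init__(self, nodes: int):
        self.nodes = nodes
        self.zone_sets = []

    def write(self):
        d, p = ec_scheme(self.nodes)
        self.zone_sets.append(ZoneSet(d, p))  # new data uses the current scheme

    def resize(self, nodes: int):
        self.nodes = nodes  # new writes pick up the new scheme immediately

    def background_restripe(self):
        # Low-priority task: convert old zone sets to the current scheme
        d, p = ec_scheme(self.nodes)
        for zs in self.zone_sets:
            zs.data, zs.parity = d, p

c = Cluster(9)
c.write()                 # written as a 7+2 zone set
c.resize(11)
c.write()                 # written as a 9+2 zone set
c.background_restripe()   # the old 7+2 set is converted to 9+2
```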

In the future the EC safety level will be selectable.

Myriad management

Myriad is managed through a GUI and a cloud-delivered portal featuring AIOps capabilities. A Myriad cluster of any size can be accessed and managed via a single IP address. 

Myriad GUI

The Myriad software has several management capabilities:

  • Self-healing, self-balancing software for in-service upgrades that automatically rebuilds and repairs data in the background while rebalancing data as the storage cluster expands, shrinks and changes.
  • Automated detection, deployment and configuration of storage nodes within a cluster so it can be scaled, modified or shrunk non-disruptively, without user intervention.
  • Automated networking management of the internal RDMA fabric so managing a Myriad cluster requires no networking expertise.
  • Inline data deduplication and compression to reduce the cost of flash storage and improve data efficiencies.
  • Data security and ransomware recovery with built-in snapshots, clones, snapshot recovery tools and “rollback” capabilities.
  • Inline metadata tagging to accelerate AI/ML data processing, provide real-time data analytics, enable creation of data lakes based on tags, and automate data pipelines and workflows.
  • Real-time monitoring of system health, performance, capacity trending, and more from a secure online portal by connecting to Quantum cloud-based AI Operations software.

Traditional monitoring via SNMP is available, as well as sFlow. API support is coming and this will enable custom automation.

Myriad workload positioning

Quantum positions Myriad for use with workloads such as AI and machine learning, data lakes, VFX and animation, and other high-bandwidth and high-IOPS applications. These applications are driving growth in the market for scale-out file and object storage, which IDC expects to reach $15.7 billion by 2025.

We understand that, over time, existing Quantum services, such as StorNext, will be ported to run on Myriad.

We have not seen performance numbers or pricing for Myriad. Our understanding is that Myriad will be generally competitive with all-flash systems from Dell (PowerStore), Hitachi Vantara, HPE, NetApp, Pure Storage, StorONE and VAST Data. A Quantum competitive positioning table confirms this general idea.

As the Myriad software runs on commodity hardware and is containerized, it can, in theory, run on public clouds and so give Quantum a hybrid on-prem/public cloud capability.

Myriad is available now for early access customers and is planned for general availability in the third quarter of this year. Get an architectural white paper here, a competitive positioning paper here, and a product datasheet here.

Bootnote

Quantum appointed Brian “Beepy” Pawlowski as chief development officer in December 2020. He came from a near three-year stint as CTO at composable systems startup DriveScale, bought by Twitter. Before that he spent three years at Pure Storage, initially as a VP and chief architect and then as an advisor, and earlier ran the FlashRay all-flash array project at NetApp. FlashRay was canned in favor of the SolidFire acquisition. Now NetApp has pulled back from SolidFire and Beepy’s Myriad will compete with NetApp’s ONTAP all-flash arrays. What goes around comes around.

Beepy joined NetApp as employee number 18 in a CTO role and was at Sun before NetApp.

WANdisco CEO and CFO step down as chairman becomes CEO

Active data replicator WANdisco’s CEO Dave Richards and CFO Erik Miller have quit, with interim chair Ken Lever running the company.

The company said in a statement this morning: “The Board changes are not connected to the findings to date of the independent investigation.”

It also confirmed the probe had found “purchase orders giving rise to recognised revenue of $14,936,215 for FY22 are false and that sales bookings of $115,461,616 recorded in FY22 are also false.”

Accordingly, “revenue for the financial year to 31 December 2022 should have been $9.7 million (unaudited) as compared with” the $24 million it forecast, and bookings should have been $11.4 million rather than $127 million, it said.

“The results of the independent investigation to date continue to support the initial view that the irregularities are as a result of the actions of one senior sales employee. FRP Advisory is continuing to pursue the investigation to reach a conclusion.”

Dave Richards

This is WANdisco co-founder Richards’ second departure. He was axed in September 2016 by the then-chairman Paul Walker for under-delivering. Back then he resisted the move, with institutional shareholders backing him, and was reinstated as CEO and also chairman, with Walker walking away.

WANdisco was started up in 2005 with technology to replicate live data from the sites where it was generated to a datacenter in real time. It went public on the AIM part of the London Stock Exchange in June 2012 at £199 ($246) per share. Six months later the share price had soared to £1,520 (c $1,879) and WANdisco’s market capitalization went past $500 million. But then a string of poor results depressed the share price to £284 (c $350) in March 2015 and then down to £176.50 (c $218) in October 2016. That caused Walker to act by pushing out Richards, with the then-CFO Erik Miller resigning at the same time. Yes, the same Erik Miller who has just resigned; it’s the second departure for him too.

Google Finance chart showing WANdisco share price rollercoaster ride.

The current troubles at the company started on March 9, with WANdisco announcing its AIM share trading suspension due to discovering sales reporting irregularities concerning purchase orders, revenue and bookings, and even “potentially fraudulent” irregularities concerning purchase orders by a senior sales employee. Forensic accounting firm FRP Advisory was hired to investigate and two independent directors formed a board committee to oversee this.

WANdisco annual revenues. It is a Jersey-based company and the Statutory Loss term is somewhat equivalent to a GAAP net loss. There is no profit/loss figure for 2022 as the sales reporting problems have prevented it being calculated

After investor unrest, Richards stepped down from the chairmanship on March 22, with Ken Lever appointed as interim non-exec board chairman; he’s now executive chairman. According to its latest regulatory filings, WANdisco said Richards and Miller’s departures were due to a need for new leadership, so as to make progress toward lifting the share suspension, and were not connected with the FRP Advisory findings. Professional services and turnaround firm Alvarez & Marsal is working on getting the share suspension rescinded.

Lever will act as CEO for the time being and Ijoma Maluza becomes the interim CFO, starting on April 11. Lever said: “Over the years David and Erik have contributed significant time and effort to establishing and developing WANdisco. They remain meaningful shareholders in the business and continue to believe in the long-term, successful future for this company and its unique technology.”

Richards said: “I am sad to be leaving WANdisco after 18 extremely enjoyable years. I remain a passionate supporter and significant shareholder of the Company.”

WANdisco customer sales announcements

WANdisco has habitually announced sales contracts with unidentified customers. Here’s a WANdisco timeline and contract reporting list based on FT information:

  • June 2012: WANdisco floated on London’s AIM with a market capitalisation of £37 million.
  • November 2012: WANdisco buys California startup AltoStor and its open-source Hadoop software for synchronising information, for $5.1 million.
  • November 2014: WANdisco announced a contract with an unidentified US-based financial services firm.
  • January 2015: WANdisco announced a $750,000 contract extension with British Gas for smart meter analytics.
  • December 2015: WANdisco announced a contract with a new financial services customer and a scale-up contract with a telco customer. Neither was identified. 
  • June 2016: WANdisco raised $15 million in a share placing.
  • September 2016: David Richards fired.
  • October 2016: Richards reinstated and much of board resigns.
  • September 2018: WANdisco announced an initial $200,000 contract win with a major automotive customer, which it did not identify.
  • November 2018: WANdisco announced a $1 million contract with an unidentified Chinese communications supplier.
  • December 2018: WANdisco announced a $700,000 contract with an American healthcare firm, again not identified. 
  • February 2019: WANdisco raised $17.5 million in a share placing.
  • April 2019: WANdisco announced a contract worth $2.15 million with an undisclosed Chinese ICT infrastructure supplier.
  • June 2019: WANdisco announced a contract worth $750,000 with an unidentified Chinese mobile phone handset supplier.
  • November 2019: WANdisco announced a distribution contract with African distributor Micro-D, but no sales value was revealed.
  • November 2019: WANdisco announced a contract valued over $500,000 with an unidentified Fortune 500 customer.
  • December 2019: WANdisco announced contracts valued at around $1 million with an unidentified big US finance business and an unidentified worldwide hi-tech firm.
  • December 2019: WANdisco announced a c$1 million contract with a China-based customer it doesn’t identify.
  • November 2020: WANdisco announced a deal worth up to $1 million with an unidentified British grocery retailer.
  • November 2020: WANdisco announced a contract worth $3 million with an unidentified global telco.
  • September 2021: WANdisco announced a contract worth a minimum of $1 million with one of the world’s largest telecommunications firms, which was not identified.
  • December 2021: WANdisco announced a contract valued at a minimum of $6 million over 5 years with a big but unidentified European automotive parts provider.
  • December 2021: WANdisco announced a contract with an unidentified UK banking group.
  • December 2021: WANdisco announced a $3.3 million contract with an undisclosed big North American multinational financial institution.
  • March 2022: WANdisco announced a contract worth $1.5 million with an unidentified international telecommunications firm.
  • March 2022: WANdisco announced a follow-on contract for $1.2 million with another undisclosed international communications firm.
  • April 2022: WANdisco announced a contract worth $630,000 with one of the largest insurance firms in Europe, which it did not identify.
  • April 2022: WANdisco announced a contract worth $720,000 with an international retailer it doesn’t identify.
  • April 2022: WANdisco announced a contract worth $213,000 with a PC vendor it doesn’t identify.
  • June 2022: WANdisco announced a contract worth $11.6 million with an international communications firm it doesn’t identify.
  • June 2022: WANdisco announced two contracts totalling $2.5 million with undisclosed information and communications suppliers.
  • July 2022: WANdisco announced a contract valued at $1.1 million with an undisclosed Canadian financial services business.
  • September 2022: WANdisco announced a contract worth $25 million with an international communications firm it doesn’t identify.
  • October 2022: WANdisco announced a follow-on contract for $7.1 million with a big European automotive parts provider it doesn’t identify.
  • December 2022: WANdisco announced a follow-on contract for $13.2 million with another large European automotive parts provider it doesn’t identify.
  • December 2022: WANdisco announced a contract worth $31 million with an unidentified international telco.
  • December 2022: WANdisco announced a contract worth $12.7 million with a worldwide European automotive producer it doesn’t identify.
  • January 2023: WANdisco announced a contract worth $6.6 million with an undisclosed European-based international telco service supplier.
  • March 2023: WANdisco announced its share suspension.

Some of the 2022 claimed contract wins will now be null and void, but we don’t know which yet.

Back up a minute… what day is it?

Comment: Today is World Backup Day and every backup supplier in the business is putting out marketing scare messages warning of: “Ransomware! Attacks! Back up your data or else.”

We have had such a concentrated and continuous barrage of messages about the need to back up data over the past umpteen years that, surely, no one is unaware of the need to do so. Ok, there may be some laggards.

World Backup Day started out as a low-level idea conceived by Youngstown State University student Ismail Jadun, a Reddit contributor, 12 years ago. The aim was to raise backup awareness on the day before April Fool’s Day to stop people making idiots of themselves by losing data they hadn’t backed up.

People with data on storage drives were encouraged to make a pledge: “I solemnly swear to back up my important documents and precious memories on World Backup Day, March 31st.”

The idea spread and spread, so much so that it’s now been co-opted by corporate suppliers as a way of selling more product and services.

World Backup Day deals video on YouTube

But daily exposure to costly ransomware horror stories or lost SaaS app customer data is a far more effective way of raising awareness. We fear becoming victims ourselves and investigate immutable backups at once.

There’s no need for us to be blitzed by backup product and service suppliers energetically telling us to back up our data. We’ve heard it all before. It’s just like sitting through a safety briefing on an aircraft: “It’s vitally important that you listen and pay attention as this might be a new aircraft and things could have changed since your previous flight” – like the life jackets are now in the overhead lockers.

Everyone sighs and carries on checking their mobile phone messages while the flight attendants continue through the safety briefing: “Pull the toggle to inflate the jacket – but not until you are outside the aircraft… And there’s a whistle for drawing attention.”

The very ubiquity of the message kills its relevance and the repetition dulls the message. So it is with World Backup Day. Anyone who doesn’t back up their data deserves to lose it. End of.

Kioxia researchers devise hepta-level cell NAND

Kioxia NAND researchers say they have proven that Hepta Level Cell NAND with 7 bits per cell is a workable possibility.

After demonstrating 6-bit hexa-level cell flash in August 2021, using NAND cooled to the boiling point of liquid nitrogen (77K or -196°C), the researchers have upped their game by one more bit, inventing a cell with 128 voltage states. 

  • SLC – 1 bit/cell – 2 voltage (Vth) states
  • MLC – 2 bits/cell – 4 states
  • TLC – 3 bits/cell – 8 states
  • QLC – 4 bits/cell – 16 states
  • PLC – 5 bits/cell – 32 states
  • HLC – 6 bits/cell – 64 states
  • ?     – 7 bits/cell – 128 states

Cryogenic cooling worked for 6-bit cells but more was needed for 7-bit ones. They combined the same cryogenic cooling with new silicon process technology to improve cell readability.

Existing NAND uses poly-silicon for the channel in a memory cell transistor. The read process detects the bit value of the cell by determining the threshold voltage (Vth). As the number of bit levels increases, the amount of read noise increases too, blurring the read signal. The Kioxia researchers replaced the poly-silicon with epitaxially grown single-crystal silicon, which reduces the read noise by two-thirds compared to poly-silicon.
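
The arithmetic behind the problem is easy to sketch: each added bit doubles the number of Vth states that must fit into a fixed overall voltage window, halving the margin per state, while lower read noise claws margin back. The Python below is a back-of-envelope illustration; the 8V window and the noise figures are invented placeholders, not Kioxia numbers.

```python
def mv_per_state(bits_per_cell: int, vth_window_mv: float = 8000.0) -> float:
    # Each added bit doubles the state count, halving the window per state
    return vth_window_mv / (2 ** bits_per_cell)

for bits, name in [(1, "SLC"), (3, "TLC"), (4, "QLC"), (6, "hexa"), (7, "hepta")]:
    print(f"{name:5s}: {2 ** bits:3d} states, ~{mv_per_state(bits):6.1f} mV per state")

# Cutting read noise by two-thirds (poly-silicon -> single-crystal channel)
# triples the window-to-noise ratio inside each of those shrinking windows:
poly_noise_mv = 30.0                       # invented illustrative noise figure
crystal_noise_mv = poly_noise_mv / 3
print(mv_per_state(7) / poly_noise_mv)     # window-to-noise ratio, poly-Si
print(mv_per_state(7) / crystal_noise_mv)  # three times better
```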

If such 7 bits/cell NAND has acceptable endurance then it might make it into production. It could have a significantly lower bit cost, the researchers say, even taking the cryogenic cooling into account. Getting the technology to work at room temperature would be the big win.

Japanese startup Floadia is also working on 7-bits/cell technology but applied to AI with a SONOS (silicon–oxide–nitride–oxide–silicon) architecture Compute-in-Memory chip. This stores neural network weights in non-volatile memory and executes a large number of multiply-accumulate calculations in parallel by passing current through the memory array. It is aimed at microcontrollers in edge locations. 

The Kioxia researchers have inadvertently created an acronym problem as the HLC (Hexa-Level Cell) term was used by them for the 6 bits/cell tech, and 7-bits/cell would logically be Hepta-Level Cell, also HLC. Oops. 

Septi-Level Cell is out, because we already have SLC, as in Single Level Cell. Oh dear.

Bootnote

We have to confess to a degree of tardiness here as we have only just discovered the Kioxia paper:

H. Tanaka, Y. Aiba, T. Maeda, K. Ota, Y. Higashi, K. Sawa, F. Kikushima, M. Miura, and T. Sanuki, “Toward 7 Bits per Cell: Synergistic Improvement of 3D Flash Memory by Combination of Single-crystal Channel and Cryogenic Operation,” 2022 IEEE International Memory Workshop (IMW), 2022, pp. 1-4

It was presented at the International Memory Workshop (IMW) in May 2022 and you have to negotiate a paywall to read the text.

Kioxia and WD’s BiCS 8 tech takes YMTC route: Separately fabs NAND control logic and cell stacks

Kioxia and Western Digital have devised 218-layer 3D NAND technology with separately fabricated control logic and NAND cell dies bonded together.

Traditionally the control logic and the NAND cell array are fabricated monolithically on one wafer. SK hynix calls its version of this fabrication Peri Under Cell (PUC) while WD and Kioxia have referred to it as CUA (Circuit Under Array). YMTC has had a different and, until now, unique Xtacking design, in which the peripheral logic circuitry was fabricated separately from the NAND cells and bonded to the top of the NAND cell stack. This enabled it to have more freedom in developing the control logic circuitry. Now Kioxia and WD have followed suit with what they call their BiCS 8 technology.

Kioxia CTO Masaki Momodomi said in a statement: “Through our unique engineering partnership, we have successfully launched the eighth-generation BiCS FLASH with the industry’s highest bit density. I am pleased that Kioxia’s sample shipments for limited customers have started.”

WD SVP of Technology & Strategy Alper Ilkbahar added: “By working with one common R&D roadmap and continued investment in R&D, we have been able to productize this fundamental technology ahead of schedule and deliver high-performance, capital-efficient solutions.”

The BiCS 8 layer count of 218 looks slightly underwhelming compared to SK hynix’s 238-layer NAND, Samsung’s 238-layer NAND, Micron’s 232-layer technology, and YMTC’s 232-layer Xtacking count.

B&F NAND suppliers’ layer count generations table

Kioxia and WD claim they can get increased cell density in their 218-layer chips because they shrink the cells both laterally and vertically. They say this produces greater capacity in a smaller die with fewer layers at an optimized cost. 

The 218-layer chips have 1 terabit capacity with either TLC (3 bits/cell) or QLC (4 bits/cell) formatting. The chip’s IO rate is 3.2Gbps, 60 percent more than the prior 162-layer BiCS 6 chip. There is also a 20 percent write performance improvement and lower read latency compared to BiCS 6 technology. 

An intervening BiCS 7 technology was abandoned by Kioxia and WD, with the CUA development and cell shrinkage enabling BiCS 8.

We might expect to see SSDs using BiCS 8 technology in a quarter or two.

TL;DR summary: Kioxia and WD say it’s not just the layer count that matters but cell density and layer count together. This way they get more bits in a smaller, more cost-effective chip than their layer-count-focused competitors, they claim.

Bootnote

Neither Kioxia nor WD has issued any imagery or diagrams illustrating their BiCS 8 technology. Nor did Kioxia CTO Liu Maozhi discuss BiCS 8 at the 2023 China Flash Memory Summit.

Pliops looking to move onto BlueFields

Pliops XDP

Accelerated NVMe JBOF controller supplier Pliops is thinking about porting its software to NVIDIA’s BlueField DPUs.

Pliops supplies its XDP (eXtreme Data Processor) to offload low-level storage stack processing from a host x86 server, enabling it to run applications faster. This has had specific code added to it to run RocksDB storage IO faster and also to function as an efficient RAID controller. It’s sold to hyperscaler and near-hyperscaler customers with thousands of servers for whom extra performance means more cores can run application code.

Tony Afshary, Pliops’ Global VP for Products & Marketing, told an IT Press Tour briefing audience: “We want to sell data services, not computational storage.” 

The orange items are coming with XDP 2.0

The data services, like RAID, make XDP products easier to sell. Pliops reckons the XDP is not a SmartNIC, nor is it a DPU, as it is specifically designed to function as an NVMe SSD and JBOF controller. Data services give XDP cards a familiar application identity: RAID, key:value (KV), compression, etc. People understand a RAID card. Key:value stores are harder work, but still a familiar environment to RocksDB users. 

Afshary said Pliops’ XDP was faster than GRAID’s SupremeRAID card at the 4 to 8 drive level. Pliops’ RAID performance makes RAID 5 usable as its performance penalty is reduced. The XDP can RAID protect data better than other HW or SW RAID products, and rebuild drives faster as well, both HDDs and SSDs. The compression means both SSD endurance and effective capacity are increased.

The current XDP product generation uses FPGA technology but the next generation will use ASICs and be faster, smaller and lower cost, Pliops told us. Both generations accelerate compression, KV store operations and RAID processing through a mix of hardware and software acceleration. For example, its software provides storage functions that outperform RocksDB, and there is a hardware-based KV storage engine. Afshary said: “Think of it as RocksDB on a chip.” 

The XDP does not do deduplication though, with Afshary saying: “Somebody above us in the stack does it better.”

Afshary claimed a Pliops customer could take the Pliops acceleration and use it in two ways. One is to have applications run faster in the host server, and the other is to replace the host server CPU with a less costly lower-core count server but still have the same performance level as before. If a hyperscaler with thousands of servers can provide the same level of service with lower core count servers and Pliops XDPs and thereby lower their costs substantially then that is an attractive proposition.

Roadmap

Pliops was founded in 2017 and is very well-funded; it took in $100 million last year in a D-round.

The gen 1 XDP supports 128TB of NVMe SSD capacity but this will be doubled later this year to 256TB, the company said. Virtual Volumes/Functions and QoS per Volume will be available with XDP software v2.0. Redis announcements are coming next month as well. 

Pliops would have an easier job selling its accelerating host offloading system if there weren’t alternatives such as SmartNICs and DPUs. It has to explain that its XDP is neither a SmartNIC nor a DPU but something unique. However DPUs with on-board CPUs can run external software.

Afshary said Pliops software could run on NVIDIA’s BlueField DPU with its Arm processor. Today you can use BlueField-2 cards in front of a Pliops XDP, but what we can look forward to is the Pliops code running on the BlueField card itself. It won’t provide the full Pliops acceleration effect because Pliops’ hardware is missing, but there will still be appreciable acceleration.

Another potential direction is for a server running RocksDB or some other KV store, which currently sends KV strings to the XDP encapsulated as blocks, to send KV strings directly to the XDP instead. Pliops is compatible with the NVMe-KV standard. This would provide even more KV acceleration.
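
The difference between the two paths can be sketched as follows. This is a hypothetical Python illustration; the record framing and the command layout are invented, though the second path is in the spirit of an NVMe-KV store command.

```python
import struct

BLOCK = 4096  # assumed logical block size

def kv_as_blocks(key: bytes, value: bytes) -> list:
    # Today's path: frame the KV pair into a record, pad it to whole blocks,
    # and write the blocks; the XDP parses the record back out on arrival.
    rec = struct.pack(">HI", len(key), len(value)) + key + value
    rec += b"\x00" * ((-len(rec)) % BLOCK)
    return [rec[i:i + BLOCK] for i in range(0, len(rec), BLOCK)]

def kv_direct(key: bytes, value: bytes) -> dict:
    # Possible future path: an NVMe-KV style store command carries the key
    # and value natively, with no block framing or reparsing on either side
    return {"opcode": "kv_store", "key": key, "value": value}

blocks = kv_as_blocks(b"user:42", b"some serialized record")
cmd = kv_direct(b"user:42", b"some serialized record")
```

Skipping the block framing removes a serialize-and-reparse step on every operation, which is where the extra KV acceleration would come from.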

Storage news roundup – 30 March 2023

Avid’s NEXIS | F2 Solid State Drive (SSD) flash storage engine is available now. It accelerates media workflows, including finishing of 4K, 8K and HDR content, color grading, VFX and animation. Capacity scales from 38.4TB to 307.2TB per engine, with throughput above 6GBps with media packs. Media protection and high availability are achieved with a redundant storage controller and hot spare SSDs. Dual redundant 100Gbps Ethernet connections per storage controller are standard. The NEXIS | F2 SSD is compatible with all current Avid NEXIS systems. When used with NEXIS online or nearline storage, administrators can seamlessly move a workspace between performance tiers, maintaining read and write access while the media is moving. 

William Blair analyst Jason Nader, who has written at length about Confluent and Apache Kafka in prior reports (our initiation and our deep-dive into Confluent Cloud), has concluded that the need for modern streaming software is accelerating as organizations look to use real-time data to build smarter applications and glean insights. Support for real-time capabilities is increasingly becoming table stakes across many industries. Through its support for the open-source software project Apache Kafka, Confluent has become the category leader in the data streaming ecosystem, having built an at-scale business for helping organizations harness their real-time data streams.

Lakehouse supplier Databricks announced the opening of offices in Tel Aviv and Zurich, in addition to expanding its footprint in London and opening a new office in Munich. It also announced the coming availability of Databricks infrastructure in the AWS France (Paris) Region in the first half of 2023. The AWS-based Databricks infrastructure in France will support the growing customer base and demand for the Databricks Lakehouse.

The DNA Data Storage Alliance, a SNIA Technology Affiliate formed in 2020 by Illumina, Microsoft, Twist Bioscience, and Western Digital, announced the appointments of CATALOG Technologies and Quantum Corporation to its governing board. David Turek will represent CATALOG and Don Doerner and Turguy Goker will represent Quantum. The Alliance says it’s achieving its mission by: 1) educating the storage ecosystem and the public on this emerging technology; 2) identifying key technical challenges in the underlying technologies in order to drive funding and research which facilitate commercialization; and 3) developing standards and specifications (e.g. encoding, physical interfaces, retention, file systems) that enable the emergence of an interoperable DNA data storage product ecosystem.

An EU-funded 10-party consortium from Spain, France, Italy, Finland, Israel and Switzerland is working on a 3-year EXTRACT project to provide a distributed data-mining software platform for extreme data across the compute continuum (edge, cloud and high-performance computing (HPC) environments). Deriving value from raw data requires the ability to extract relevant and secure knowledge. Current practices and technologies are only able to cope with some data characteristics independently and uniformly. The aim of EXTRACT is to create a complete edge-cloud-HPC continuum by integrating multiple computing technologies into a unified secure compute-continuum. It will do so by considering the entire data lifecycle, including the collection of data across sources, the mining of accurate and useful knowledge and its consumption. It will be validated in two real-world use-cases:

  • A Personalized Evacuation Routing (PER) System will serve to guide citizens in an urban environment (the city of Venice) through a safe route in real time. The EXTRACT platform will be used to develop, deploy and execute a data-mining workflow to generate personalized evacuation routes for each citizen, displayed in a mobile phone app, by processing and analysing extreme data composed of Copernicus and Galileo satellite data, IoT sensors installed across the city, 5G mobile signal, and a semantic data lake fusing all this information.
  • The Transient Astrophysics with a Square Kilometer Array Pathfinder (TASKA) case will use EXTRACT technology to develop data mining workflows that effectively reduce the huge amount of raw data produced by NenuFAR radio-telescopes by a factor of 100. This will allow the populating of high-quality datasets that will be openly accessible to the astronomy community (through the EOSC portal) to be leveraged for multiple research activities.

IBM says Storage Fusion will provide application resilience on Red Hat OpenShift with Fusion 2.5, which has a GA date of 30th March 2023. The enhancements are:

  • Fusion Software gets Data Foundation Advanced features, Metro DR (with Data Foundation + Red Hat ACM) and Fusion on zCX (with Data Foundation)
  • Fusion HCI System gets Rack-less HCI System (bring your own rack), 3-zone HA cluster, and run multiple OpenShift clusters on HCI System with IBM Cloud Satellite. 
  • Fusion Software and HCI System get backup to IBM Storage Protect (SP), offload backup from SP to tape or disk. They integrate Discover as a Fusion Cataloging Service, and improve the usability and resiliency of the Fusion Backup/Restore service. 

More details here.

MariaDB has a new version of its managed SkySQL database with autoscaling, which scales resources up when demand surges and back down when demand normalizes to save costs. It has also introduced serverless analytics to uncover insights on current data without the need for ETL, paying only for what is used. SkySQL enables autoscaling of both compute and storage in response to changes in demand. Rules specify when autoscaling is triggered: for example, when CPU utilization is above 75 percent across all replicas sustained for 30 minutes, a new replica or node is added to handle the increase. Similarly, when CPU utilization is below 50 percent across all replicas for an hour, nodes are downgraded. Users always specify the top and bottom thresholds. It works with Xpand, MariaDB’s distributed SQL database, adding and releasing nodes as needed.
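
A rule of that shape is easy to model. The sketch below is a hypothetical Python illustration, assuming one CPU sample per minute averaged across replicas; the class and action names are invented and this is not SkySQL’s implementation.

```python
from collections import deque

class AutoscaleRule:
    """Toy rule evaluator: scale up after 30 minutes above the high-water
    mark, scale down after 60 minutes below the low-water mark."""

    def __init__(self, up=0.75, down=0.50, up_mins=30, down_mins=60):
        self.up, self.down = up, down
        self.up_mins, self.down_mins = up_mins, down_mins
        self.samples = deque(maxlen=down_mins)  # one CPU sample per minute

    def observe(self, cpu: float) -> str:
        self.samples.append(cpu)
        recent = list(self.samples)
        if len(recent) >= self.up_mins and all(s > self.up for s in recent[-self.up_mins:]):
            self.samples.clear()   # reset so we don't re-trigger immediately
            return "add_replica"
        if len(recent) >= self.down_mins and all(s < self.down for s in recent):
            self.samples.clear()
            return "remove_replica"
        return "hold"

rule = AutoscaleRule()
for minute in range(30):
    action = rule.observe(0.80)   # sustained 80 percent CPU...
print(action)                     # ...returns "add_replica" on minute 30
```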

Language searcher and vector database builder Nuclia is adding generative AI to its product. It says that with Nuclia generative answers you will be able not only to get semantic results but also answers based on company documents. Imagine a ChatGPT-like functionality for your internal information. Users will be able to query Nuclia in (almost) any language and get the answer in the same language they queried. They’ll be able to generate reports based on internal documents. This tutorial tells you more.

TerraMaster D6-320.

TerraMaster has released the D6-320, a new 6-bay external hard disk enclosure with the USB 3.2 Gen 2 protocol, 10Gbps data transmission bandwidth, and up to 132TB (22TB x 6 drives) capacity. Read/write speed is up to 1,030MBps with 6 x WD Red 8TB drives; with a single SSD (WD Red 1TB), read speed can reach 510MBps. The D6-320 is equipped with a USB Type-C interface and comes with a 1-meter USB-C to USB-C cable, and is compatible with a variety of computer interfaces: USB 3.0/USB 3.1/USB 3.2/USB4/Thunderbolt 3/Thunderbolt 4 (to connect it to the USB Type-C interface on your computer you may need to purchase another USB-C to USB-C cable).

Cloud storage provider Wasabi is partnering with IBM on its Cloud Satellite to allow enterprises to run applications across any environment – on-premises, in the cloud or at the edge – and enable users to cost-efficiently access business data and analytics in real time. Cloud Satellite provides a distributed cloud architecture that brings the scalability and flexibility of public cloud services to the applications and data that run in a user’s private cloud. The Boston Red Sox baseball team is a user. The Red Sox plan to use Wasabi cloud storage across their hybrid cloud infrastructure while piloting IBM’s Cloud Satellite to house data including player video, analytics, surveillance data, IoT, and more, across the Fenway Park stadium.

Startup Finout offers FinOps to stop SaaS spend disasters

Sticking an Ops on the tail end of abbreviated things like development and data and finance is cool in these containerized cloud-native times, as in DevOps and DataOps and FinOps.

One startup is positioning it as the best way to deal with the uncontrollable chaos that is cloud-delivered service spending, the catastrophically complicated world of IaaS, PaaS and SaaS. The pay-for-what-you-use, spin up instances with a credit card experience of AWS et al.

It can be complicated dealing with business travel. Dealing with airlines and their fares, classes, timetables and flight destinations is a royal pain in the neck. That’s why intermediaries like AMEX Travel Services have sprung up. These analgesics of the business travel industry take much of the pain away. They’ll arrange flights and give you spending reports for budget checking and planning and turn air travel chaos into something approaching normality.

Paying for cloud service is on another level. Sticker shock is endemic. Instead of dealing with thousands of flights and fares and whatnot you’re faced with millions of instances and costs across all the public cloud and SaaS providers. This is nickel and diming on an interplanetary scale. 

Finout’s pitch is that we need organizations equivalent to AMEX TS that understand this nightmarish Byzantine cloud-native system of charging and make sense of it, so business finance department staff don’t get the daily shock of “You spent how much?” and don’t have to explain to the CFO why the monthly cloud spend exceeded budget by 350 percent – again!

Startup Finout’s co-founder, CEO Roi Ravhon, experienced this constantly irritating need to explain and justify cloud service spending when working for previous companies. He told us there were no tools to do it and he had to dig deep into the charging practices and invoices of the many as-a-Service suppliers his employer used to understand why bills were so high and how the spend had taken place.

And then he realized that the cloud-native, DevOps environment had contributed to cloud charging and cost confusion and it could help solve it too. He and co-founder and Chief Product Officer Asaf Liveanu started Finout in Tel Aviv in 2021. They raised $4.5 million in seed funding that year and a $14 million A-round in 2022, bringing in $18.5 million over two years, plus non-equity assistance from Deloitte Launchpad, to develop their software very fast. The company told us it has dozens of paying customers already.

What they tapped into was a need for an intermediary cloud cost handling company. They told an IT Press Tour that Netflix has a huge FinOps department, as does Tesla. The whole FinOps area is so primitive compared to business travel services, but it’s also dealing with multi-million dollar cloud spending totals. Financial discipline and control is desperately needed.

Finout is a SaaS service, based on AWS, that presents its clients with a single bill totalling all their cloud spend, which it says can be microscopically analyzed through a customizable dashboard to identify who spent how much on what services: AWS instances, Snowflake analytics runs, Datadog, and Kubernetes. 

It has these suppliers’ pricing and charging policies built in, and can bill elements to a client company’s departments or by development project, along with its views of how they are ordering and using cloud services. Finout’s CostGuard function provides an idle resource locator and recommendations. It can identify little-used or unused cloud resources and alert clients: for example, if a customer is paying for a Redshift database instance they’re not using, or has unused cores and memory in Kubernetes deployments. Clients can then stop provisioning them and save money.

Finout RDS spending chart

The company says it can enrich cost data to show how costs attach to internal business entities like teams, functions, and SaaS elements, and how they are changing. Virtual tagging can provide showback items for internal chargeback. It can be told about a client company’s profit parameters and relate service spend items, such as developing a product variation, to profit goals. This is useful to MSPs and SaaS suppliers as well as business users. They could calculate, for example, profitability per customer or per project.

The software provides cost anomaly detection, flagging a sudden ramp-up in spending on a cloud service item compared to a baseline built from daily run levels and policy-set parameters. 
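
A minimal sketch of that kind of baseline check, in hypothetical Python (the window and threshold stand in for Finout’s policy-set parameters; this is not Finout’s algorithm):

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, window=14, threshold=3.0):
    # Flag days whose spend jumps well above a trailing baseline; the
    # window length and threshold would come from policy settings
    alerts = []
    for i in range(window, len(daily_spend)):
        base = daily_spend[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if daily_spend[i] > mu + threshold * max(sigma, 0.05 * mu):
            alerts.append((i, daily_spend[i], round(mu, 2)))
    return alerts

# A service line item that suddenly triples against its daily baseline:
history = [100, 102, 98, 101, 99, 97, 103, 100, 102, 99, 98, 101, 100, 102, 310]
print(spend_anomalies(history))   # flags day 14 at 310 vs a ~100 baseline
```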

Finout supports AWS, Kubernetes and Datadog now. It will soon support GCP, Azure, and Snowflake. SAP support is a roadmap item.

It doesn’t make recommendations across cloud providers; this is a possible future roadmap item. Ravhon said: “We don’t want to fight with AWS or whoever. It’s going to be a while before it comes.”

He told us: “We can integrate new source vendors to Megabill in a few hours, eg adding Databricks. We have a base cost model and adding a connector to new vendors is relatively easy.”

Finout dashboard.

Finout plans to support major vendors, but not tens of thousands of individual SaaS suppliers. Customers can add vendors to their Megabill if they wish, and a SaaS supplier could contact Finout directly about this.

In effect, Roi Ravhon has built a tool that tries to solve a cloud service Opex supply cost calculation problem he had in a previous company. Finout is intended to enable Capex style budget control for Opex cloud services. It could be the equivalent of Amex Travel Services in the cloud services spending world and enable organizations to worry much less about rampant cloud service spend overruns.

Samsung forecasts petabyte SSD in a decade 

Think a 128TB SSD is a big brother of a device? How about a petabyte SSD? Samsung sees it coming in the not-too-distant future.

The 2023 China Flash Memory Market Summit (CFMS2023) took place in Shenzhen this month and saw Samsung, Solidigm, Micron and many other NAND flash vendors present their wares to Chinese buyers. Local media outlet A&S Mag summarised the many vendor sessions. 

The standout pitch came from Samsung VP Kyung Ryun Kim, NAND Product Planning Group GM, who said the company expects single SSD capacity to be as high as 1PB in the next 10 years. Samsung exhibited a prototype 128TB SSD at last August’s Flash Memory Summit in San Jose, calling it a petabyte “scale” SSD. This used QLC (4 bits/cell) NAND, supported zoned namespaces, and came with a PCIe/NVMe interface. 

Samsung 128TB SSD.

Seven months later and Samsung is trailing the idea of increasing the density tenfold, based on improved 3D NAND physical scaling technology, logical scaling technology and packaging technology.

Samsung is currently developing 238-layer 3D NAND tech and reckons that 1,000 layers are feasible. It is pushing QLC formatting and we may well see PLC (Penta Level Cell – 5 bits/cell) technology being commonplace in the 2030s, so a 1,000 TB SSD could be possible.

Micron talked about a new high-performance PCIe 5.0 SSD, with memory bandwidth twice that of PCIe 4.0. VP Dinesh Bahal said that if it can be combined with new application acceleration technology, the actual performance will be faster still.

Kioxia CTO Liu Maozhi discussed Kioxia’s gen 2 XL-Flash MLC technology with a PCIe 5.0 interface. He talked about Copy OFFLOAD and RAID OFFLOAD functions simplifying the parity check process without taking up host CPU resources. That suggests this storage-class memory SSD will be more than a dumb device.

Solidigm Asia Pacific sales director Ni Jinfeng introduced the world’s first PLC NAND SSD prototype. He said Solidigm believes PLC’s higher density and lower cost will be the foundation for a new round of HDD replacement in the future.

We expect that these Samsung, Micron, Kioxia and Solidigm ideas will be discussed and explained at August’s Flash Memory Summit in San Jose.

Storage drive channel hopping defeats attackers: NexiTech

NexiTech has taken the security idea of frequency-hopping radio systems and applied it to storage drive IO – such that its software hops between drive types to obstruct malware attacks.

The background is a military one. Static targets are easier to attack than dynamic, moving ones. This applies to military personnel and equipment, such as tanks, and also to military signalling systems, such as wireless, hence frequency hopping to obstruct wireless channel eavesdropping. NexiTech’s software treats a drive type as an I/O channel and randomly hops between them, so that an attacker expecting I/O to a storage drive with a file system actually sees the write channel changing to a tape target system, then an SSD, then an iSCSI-accessed disk drive, then a SATA SSD, and even make-believe drive types and decoy honeypot drives.

NexiTech’s software maps the apparent, externally visible drive type to the actual drive type during these random changes so that legitimate internal I/Os complete as normal.
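
In miniature, the scheme needs two pieces: a time-varying pseudo-random choice of apparent device type, and a private mapping back to the real device for legitimate I/O. The Python below is a hypothetical user-space sketch; the device names, hop interval and seeding are invented, and NexiTech’s real implementation lives in the storage driver stack.

```python
import random
import time

APPARENT_TYPES = ["tape", "nvme_ssd", "iscsi_disk", "sata_ssd",
                  "unknown_device", "honeypot"]   # includes decoys

class HoppingTarget:
    """Toy automated moving target defense for a storage device."""

    def __init__(self, real_device: str, interval_s: int = 5, seed: int = 42):
        self.real = real_device      # the actual drive behind the facade
        self.interval = interval_s   # how often the apparent type hops
        self.seed = seed             # shared secret for the trusted side

    def apparent_type(self, now=None) -> str:
        # Deterministic per-interval hop: the trusted side can recompute
        # exactly which facade an outside observer sees right now
        epoch = int((time.time() if now is None else now) // self.interval)
        return random.Random(self.seed * 1_000_003 + epoch).choice(APPARENT_TYPES)

    def handle_io(self, op: str, now=None) -> str:
        # Legitimate I/O is translated from the facade to the real device,
        # so it completes normally; malware probing the facade finds no
        # recognizable disk or file system to encrypt
        return f"{op} sent to '{self.apparent_type(now)}' -> real '{self.real}'"

t = HoppingTarget("nvme0n1")
print(t.apparent_type(now=0), t.apparent_type(now=10))  # facade hops over time
print(t.handle_io("write", now=10))                     # but I/O still lands
```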

Founder and CEO Don Matthews presented this concept in a YouTube video in which he talked about the origin of his software: “Our first commercial product was a software interface layer that basically translated one storage-related API into another. In the early days of Windows, Microsoft included a particular type of storage API called ASPI (Advanced SCSI Programming Interface) and many vendors implemented application programs that used this interface, including the US Air Force.” 

“In later versions of Windows, Microsoft dropped that API in favor of a different one that they came up with, and so it was necessary to translate between the old one and the new one in order to provide backwards compatibility for those folks that had created applications that used the old API.”  

There was an internal Windows access security obstacle, but: “We found a way around … security restrictions from Microsoft and that means that we can now send any storage related command we like to any storage device. That’s a pretty powerful capability. If we can figure out how to do that someone else probably can too.”

Its software implements an AMTD (Automated Moving Target Defense) capability. The AMTD idea is discussed and explained in a 25-page Gartner report which also looks at suppliers. 

This is how Matthews explains NexiTech’s AMTD capability: “At its core we use storage virtualization to create multiple abstractions of a device. We also change the device type. We have the capability to emulate any type of device we want so we could, for instance, take a disk device and make it look like a tape device or make it look like an unknown device.”

“So we’ve cloaked the device by changing its device type. The host operating system now has no idea how to talk to this thing because it’s no longer a disk drive. This alone is an effective ransomware protection, by the way, because most ransomware attacks depend on the fact that there is a usable file system on the disk device. No file system, no ransomware attack.”

Micron bumps along bottom of the memory trough

Micron’s DRAM and NAND down cycle continues, with revenues falling in Q2 of its fiscal ’23 ended March 2, a massive loss posted, and a large inventory writedown recorded as the low pricing environment ran on.

Revenues plunged 52.6 percent to $3.96 billion and net loss was $2.3 billion, a reversal from last year’s $2.3 billion profit. That’s the largest loss we have recorded for Micron and our records go back to 2010. It reported a $1.03 billion loss for the entire fiscal 2012 year so this quarter’s loss is quite impressive.

Sanjay Mehrotra

Micron Technology President and CEO Sanjay Mehrotra said: “Micron delivered fiscal second quarter revenue within our guidance range in a challenging market environment. Customer inventories are getting better, and we expect gradual improvements to the industry’s supply-demand balance.” 

His prepared remarks include this observation: “The semiconductor memory and storage industry is facing its worst downturn in the last 13 years, with an exceptionally weak pricing environment that is significantly impacting our financial performance. We have taken substantial supply reduction and austerity measures, including executing a companywide reduction in force.” 

Micron’s losses deepen as the DRAM/NAND down cycle continues.

The job cuts disclosed in December 2022 were about 10 percent of the workforce but the updated plan is to reduce headcount by a total of 15 percent throughout this financial year.

Micron said it thinks the downcycle could be ending and a transition to sequential revenue growth is likely in its Q4. The company said it is confident in long-term demand and is still paying a quarterly dividend of $0.115/share, but share buybacks are suspended.

Financial summary

  • Gross margin: -31 percent
  • Inventory write-down: $1.43 billion, impact of $1.34 per diluted share
  • Operating cash flow: $343 million vs $943 million for prior quarter and $3.63 billion a year ago
  • Free cash flow: -$1.81 billion
  • Cash, marketable investments, and restricted cash: $12.12 billion
  • Diluted earnings (loss) per share: -$2.12 vs $2.00 a year ago.

Micron said it hopes to return to positive quarterly free cash flow within fiscal 2024. AI could prompt growth and help that. Micron thinks large language models, or LLMs, such as ChatGPT, need significant amounts of memory and storage to operate. It said we’re in the very early stages of the widespread deployment of these AI technologies and potential exponential growth in their commercial use cases. 

Technology and inventories

On the technology front, 1-alpha represents most of its DRAM bit production, and it is beginning its transition to 1-beta. Micron said it was also making good progress towards the introduction of the EUV-based 1-gamma DRAM node. In NAND, 176-layer and 232-layer represented more than 90 percent of NAND bit production, with 20 percent of that QLC. The 1-beta DRAM and 232-layer NAND have reached targeted yields ahead of schedule and faster than any of its prior nodes. 

Is the bottom near? The DRAM revenue drop’s steepness slackens markedly and NAND’s decline slows too

In the datacenter segment, Micron reckons its revenue bottomed in the quarter. Datacenter customer inventories should reach relatively healthy levels by the end of calendar 2023. AI is a secular driver of demand growth in the data center. The product roadmap features HBM and CXL memory products. Micron is sample shipping CXL DRAM to OEM customers in the enterprise, cloud and high-performance computing (HPC) workload areas. 

PC client customer inventories have improved meaningfully, and it expects increased bit demand in the second half of the fiscal year. But PC and graphics units are forecast to decline by mid-single digit percentage over calendar 2023. Smartphone units are expected to be down slightly YoY in 2023 as well but, as with PCs, Micron expects growth in bit shipments in the second half of its fiscal year.

Micron said it is the market share leader in the auto and industrial segment, which accounts for 20 percent of its revenue and which actually grew year-on-year. Micron expects continued growth in auto memory demand for the second half of calendar 2023, driven by gradually easing non-memory supply constraints and increasing memory content per vehicle. 

Business unit revenue decline steepness is lessening.

Overall, while the DRAM/NAND industry supply-demand balance is expected to gradually improve, high inventory levels mean industry profitability and free cash flow are likely to remain extremely challenged in the near term. Market recovery could accelerate if there is a year-on-year reduction in production, in other words negative DRAM and NAND industry bit supply growth in 2023. Micron is doing its bit, so to speak: its year-on-year bit supply growth in calendar 2023 will be meaningfully negative for DRAM, and it expects to produce fewer NAND bits year-on-year in calendar 2023 too. 

Cost-cutting and prices

Against this background Micron has further reduced DRAM and NAND wafer starts, which are now down by 25 percent. It is cutting heads and has lowered production output.

Micron says it has a flat bit share strategy: “While we have had to reduce price to remain competitive in the market, we have not done so in an attempt to gain share, as such share changes at customers are generally transitory.” 

Micron’s third quarter guidance is for revenues of $3.7 billion, plus or minus $200 million, which compares to $8.6 billion in Q3 revenues a year ago: a 57 percent drop. Profitability will be extremely challenged. Maybe this will be the bottom, and the quarter after that will start an upturn.