Joint venture partners Kioxia and SanDisk are previewing faster 218-layer 3D NAND with accelerated interface speed and better power efficiency, plus a forthcoming 332-layer technology.
Their current Gen 8 BiCS technology has 218 layers and was introduced last year. SK hynix broke the 300-layer barrier in November 2024 when it launched a 321-layer product, stringing three 100-plus-layer stacks together. Kioxia and SanDisk’s BiCS 9 generation will use new NAND interface technology in its CMOS logic layer, coupled to the existing 218-layer NAND cell package, for higher performance and lower power consumption. They are also revealing a future BiCS 10, with the new logic layer tech bonded to a 332-layer NAND unit. This 300-plus-layer tech was first mentioned at SanDisk’s investor day last week.
Kioxia Europe VP and CTO Axel Stoermann said: “Next to the demand for increased power efficiency in datacenters, data generation is set to vastly increase, driven by new AI technology-driven applications, with sophisticated operations such as inference at the edge and the application of transfer learning techniques further compounding storage requirements.”
SanDisk and Kioxia have increased the NAND chip interface speed by 33 percent in BiCS 9 by using the Toggle DDR6.0 interface plus the SCA (Separate Command Address) protocol. This has separate control and data paths that operate in parallel. Their tech also features PI-LTT (Power Isolated Low-Tapped Termination) technology in which “power sources for existing 1.2 V and additional lower voltage are utilized for the NAND interface power source. This reduces power consumption during data input/output” by 10 percent for input and 34 percent for output. Because of this, the BiCS 9 NAND chip interface speed should be 4.8 Gbps.
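For context, the quoted figures imply the prior interface speed: a 33 percent uplift landing at 4.8 Gbps points to a BiCS 8 Toggle interface of roughly 3.6 Gbps. A quick sketch of that arithmetic, using only the numbers above:

```python
# Back-of-the-envelope check using the article's figures only.
bics9_speed_gbps = 4.8      # claimed BiCS 9 interface speed
uplift = 0.33               # claimed speed increase over BiCS 8

implied_bics8_speed = bics9_speed_gbps / (1 + uplift)
print(round(implied_bics8_speed, 2))   # ~3.61 Gbps implied for BiCS 8
```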
The combination of the 332-layer count and better planar cell layout increases bit density by 59 percent in the forthcoming BiCS 10 technology, compared to BiCS 8 and 9, the pair say.
A table shows the current layer counts for the six suppliers and their fabs:
Although Solidigm is owned by SK hynix, it has a separate fab infrastructure and technology. If we look at a chart of these supplier numbers, we can see seven generations of this technology, although the suppliers’ internal generation branding is different.
All suppliers except Solidigm broke the 200-plus-layer level in a sixth generation of their products in the 2022-2023 period. Now Kioxia-SanDisk are introducing a second 218-layer class product, one with the faster interface detailed above.
We also see that China’s YMTC (dark blue column) has had the highest layer count in each generation in our chart since the third. It is possible, if not probable, that YMTC will head towards the 350-layer or greater level in its next technology leap, repeating the feat in our seventh-generation category.
It’s also apparent that Solidigm, although producing 128 TB-class QLC SSDs, the highest capacity available, is using fifth-generation 192-layer product. We expect Solidigm to announce a fresh generation of 3D NAND that moves closer to or even into the 300-plus-layer area. Solidigm’s owner, SK hynix, is already at the 321-layer point.
Bootnote
Toggle mode NAND uses a double data rate interface for faster data transfers, and a multiplexer (MUX) manages the data lanes.
VAST Data has added block access to its existing file and object protocol support along with Kafka event broking to provide real-time data streaming to AI, ML, and analytic workloads.
VAST Data’s storage arrays, with their scale-out, disaggregated shared everything (DASE) architecture, support parallel access to file and object data, and have a software stack featuring a DataCatalog, DataBase, DataSpace, and DataEngine. The system already supports real-time notification of data change events to external Kafka clusters. Now it has its own Kafka event broker, added to the DataEngine, to receive, store, and distribute such events.
In a statement Aaron Chaisson, VP Product and Solutions Marketing at VAST, said: “With today’s announcement, we’re eliminating the data silos that once hindered AI and analytics initiatives, affording customers faster, more accurate decisions and unlocking data-driven growth.”
By providing block-level data access, VAST Data says it can now support classic structured data applications such as relational databases, SQL or NoSQL, ERP, and CRM systems along with virtualization (VMware, Hyper-V, KVM) and containerized workloads. The whole set of legacy structured data workloads is able to run with a VAST Data storage array, giving customers a chance to consolidate their block, file, object, tabular, and streaming storage onto a single storage system. The hope is that channel partners will pitch the migration of such block access-dependent workloads off existing block arrays – Dell PowerMax, Hitachi Vantara VSP One, and IBM DS8000, for example.
VAST is also supporting Boot from SAN, and says “enterprises can streamline server deployment and management by eliminating reliance on local disks.” It claims this approach “enhances disaster recovery, improves redundancy, and enables rapid provisioning of new virtual or bare-metal servers while ensuring consistent performance across IT environments.”
The event broking addition allows, VAST says, “AI agents to instantly act on incoming data for real-time intelligence and automation.”
The company says customers can have all data accessible in its single system, addressing all workloads within one unified architecture. It has “unified transactional, analytical, AI, and real-time streaming workloads” via the event broker. Customers can “stream event logs to systems for processing, publishing and processing telemetry data in real time, giving event-driven updates to users, and streaming data to models for real-time training or inference.”
VAST says Kafka implementations are widely used for data movement but “create isolated event data silos that hinder seamless analytics.” They involve infrastructure sprawl, data replication, and slow batch ETL processes “that delay real-time insights.” Its new Event Broker can activate computation when new data points enter VAST’s DataBase. It should enable AI agents and applications to respond instantly to events and help automate decision-making. The Event Broker delivers, VAST claims, “a 10x+ performance advantage over Kafka on like-for-like hardware, with unlimited linear scaling, capable of processing over 500 million messages per second across VAST’s largest cluster deployments today.”
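If the Event Broker speaks the standard Kafka wire protocol, as its positioning as a Kafka-compatible broker suggests, ordinary Kafka clients should be able to consume from it. A minimal sketch using the open source kafka-python client; the broker address and topic name are hypothetical placeholders, not VAST-documented endpoints:

```python
# Hypothetical event-driven consumer against a Kafka-compatible broker.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "table-change-events",                 # hypothetical topic name
    bootstrap_servers="vast-broker:9092",  # hypothetical broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

for event in consumer:
    record = event.value
    # An AI agent or downstream job would react here, e.g. trigger inference
    # or refresh a feature store as soon as new data lands.
    print(record)
```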
VAST Co-founder Jeff Denworth stated: “By merging event streaming, analytics, and AI into a single platform, VAST is removing decades of data pipeline inefficiencies and event streaming complexity, empowering organizations to detect fraud in milliseconds, correlate intelligence signals globally, act on data-driven insights instantly, and deliver AI-enabled customer experiences. This is the future of real-time intelligence, built for the AI era.”
All these data access methods (block, file, object, tabular, and streaming) can use VAST’s snapshot, replication, multi-tenancy, QoS, encryption, and role-based access control services. It claims that in the AWS cloud, customers would need 21 separate services to do what VAST does.
Competing systems that offer unified block, file, and object data access include Red Hat’s Ceph and StorOne. Both Quantum’s Myriad and HPE’s Alletra MP X10000 are based on key-value stores that have files or object access protocols supported and can be extended to add block or other protocols.
VAST’s support for block data will bring it into direct competition with Infinidat high-end SAN storage for the first time.
NetApp’s ONTAP arrays offer unified file and block access. However, NetApp found some of its all-flash customers preferred buying block-only ASA (SAN) arrays instead of classic ONTAP AFF arrays. They wanted to de-consolidate rather than consolidate, indicating that not all customers want a single, do-it-all unified array.
VAST has promised unified data access for some time, so we can envisage that many of its customers will look positively at moving block-based application data stores to their VAST systems.
Read more about the background to block data access support in a VAST blog. VAST’s Event Broker will be available in March.
Comment Storage is always in flux, from underlying media to new ways to access on-drive storage and specialized large language model databases. Workloads need more data. They need it faster. They need it for less money. And these are the three frustrations that have spurred the 18 developments that are transforming the storage landscape and making it possible to store more data, and access it faster and more cheaply.
We’ll start at the media level. Solidigm signalled the dawn of the very high capacity SSD movement with its P5336 61.44TB drive in July 2023, which used QLC NAND. This was followed by Samsung’s BM1743 last July, also using QLC flash with a 61.44TB capacity. Solidigm then upped the stakes, as did Phison, with both launching 128 TB-class SSDs (122.88TB effective) in November. Solidigm used the PCIe gen 4 bus while Phison chose the faster PCIe gen 5. And, of course, we have Pure Storage with its proprietary flash drives offering 150TB of capacity with 300TB on its roadmap.
High-capacity NAND could also get closer to processors, with Western Digital’s soon-to-be-spun-off Sandisk talking about a High Bandwidth Flash (HBF) concept, with NAND dies stacked in layers above an interposer unit connecting them to a GPU, or other processor, in the same way as High Bandwidth Memory.
PLC (penta level cell) NAND with its 5 bits/cell capacity is not being productized for now. 3D NAND layer count increases, past the 300-layer level, provide sufficient capacity increase headroom with QLC (4bits/cell) technology.
Processors could escape memory capacity limits by using 3D DRAM, if it gets commercialized. The research and development projects in this area are multiplying with, for example, Neo Semiconductor. The Compute Express Link (CXL) area is fast developing as well, with processors getting the ability to access shared external pools of memory, witness UnifabriX and Panmnesia. We await widespread use of the PCIe gen 5 and 6 buses and CXL-capable system software.
There is a risk that CXL external memory pooling could fail to cross the chasm into mainstream adoption, like composable systems, as the actual benefits may turn out to be not that great.
The legacy hard disk drive market is undergoing a transition as well, to heat-assisted magnetic recording (HAMR) and raw per-platter capacities of 3.2TB and beyond. This needs new formulations of magnetic recording media to stably retain very small bit areas at room temperature, with laser heating of the bit area needed to lower its resistance to magnetic polarity change. Seagate’s transition to HAMR is well underway, after many years of development, while Western Digital has just signalled its migration to HAMR in a year’s time. We understand the third HDD manufacturer, Toshiba, will follow suit.
We are looking at disk drives heading into 40 TB-plus capacities in the next few years, and retaining a 6x cheaper TCO advantage over SSDs out to 2030, if Western Digital is right in its assumptions.
New forms of laser-written storage on glass or ceramic platters are being developed, with the hope of providing faster archival data access than tape plus having an even longer life. We have Cerabyte with its ceramic platters, picking up an investment from Pure Storage, Optera with fluorescence-based storage and Folio Photonics. Tape still has capacity advances in the LTO roadmap but there is just one drive manufacturer, IBM, and, overall, tape is a complacent legacy technology that could be disrupted if one of these laser-written technologies succeeds.
SSD data access at the file level has been accelerated with Nvidia’s GPUDirect Storage protocol, in which data is transferred by RDMA direct from a drive into a GPU server’s memory with no storage array controller or x86 server CPU involvement. Now the same technique is being used in GPUDirect for objects, and promises to make the data held in SSD-based object storage systems available to GPUs for LLM (Large Language Model) training and inference. Suppliers like Cloudian, Scality, and MinIO are pressing ahead with fast object data access, as is VAST Data.
This will probably encourage some object storage systems to migrate to SSD storage and away from HDDs.
A new scale-out storage array architecture has been pioneered by VAST Data: Disaggregated Shared Everything (DASE), with separate controllers and storage nodes linked across an NVMe fabric and all controllers able to see all the drives. HPE has its own take on DASE with its Alletra MP X10000. Quantum’s Myriad OS is comparable array software, and NetApp has an internal ONTAP Data Platform for AI development.
Such systems scale out much more than clusters and can provide high levels of performance. With parallel NFS, for example, they can reach parallel file system performance speeds.
Another storage development is the spread of key-value storage technology to underlie traditional storage protocols such as block, file, and object. HPE (X10000 software) and VAST Data are active in this area. The resulting systems can, in theory, have any data access protocol layered on top of a KV engine and provide faster protocol data access than having one or more access protocols implemented on top of, for example, object storage. Ceph comes to mind here.
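To make the layering idea concrete, here is a toy illustration (not any vendor’s implementation) of file and object protocols sharing one key-value engine; every name in it is invented for the example:

```python
# Toy sketch: two access protocols layered over one key-value engine.
class KVEngine:
    """A trivial in-memory key-value store standing in for a real KV engine."""
    def __init__(self):
        self._store = {}

    def put(self, key: str, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: str) -> bytes:
        return self._store[key]


class FileLayer:
    """Exposes file-style paths on top of the shared KV engine."""
    def __init__(self, kv: KVEngine):
        self.kv = kv

    def write(self, path: str, data: bytes) -> None:
        self.kv.put(f"file:{path}", data)

    def read(self, path: str) -> bytes:
        return self.kv.get(f"file:{path}")


class ObjectLayer:
    """Exposes S3-style bucket/key semantics on the same engine."""
    def __init__(self, kv: KVEngine):
        self.kv = kv

    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        self.kv.put(f"obj:{bucket}/{key}", data)

    def get_object(self, bucket: str, key: str) -> bytes:
        return self.kv.get(f"obj:{bucket}/{key}")


kv = KVEngine()
files, objects = FileLayer(kv), ObjectLayer(kv)
files.write("/logs/app.log", b"hello")
objects.put_object("backups", "app.log", b"hello")
print(files.read("/logs/app.log"), objects.get_object("backups", "app.log"))
```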
Public cloud block storage is getting accelerated by using ephemeral instances, with Silk and Volumez providing software to achieve this. Silk is the most mature and focussed solely on accelerating databases, such as Oracle, in the cloud. Volumez has lately implemented a strong focus on the generative AI use case. A third cloud block storage startup, Lucidity, aims to dynamically optimize cloud block storage and save costs.
GPU server access to stored data is being accelerated by moving the data from shared external storage to a GPU server’s local drives – so-called tier zero storage. Data from these drives can be loaded into the GPU server’s memory faster than from external storage arrays. This concept is being heavily promoted by Hammerspace and a Microsoft Azure AI supercomputer uses it as well, with checkpointing data going to these drives.
Hammerspace is an example of another storage industry development, data orchestration, in which data (files or objects) is made available to data center or remote sites or edge devices from within a global namespace, with distributed metadata and some kind of caching playing a role to make distant data access seem local. Arcitecta is another supplier of data orchestration software. We expect data management suppliers, such as Komprise, and cloud file services players derived from the old sync ’n share collaboration software technology, like CTERA, Nasuni, and Panzura, to enter this field as well.
We are seeing the rise of vector database storage, holding multi-dimensional hashes – vectors – of unstructured data items, such as word fragments and words, parts of images, videos, and audio recordings. Such vectors are used by LLMs in their semantic search activities, and dedicated vector database suppliers such as Pinecone and Zilliz have started up. They say they offer best-of-breed facilities whereas multi-data type database suppliers, such as SingleStore, are compromised because they can’t focus everything on optimising for vector access.
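The mechanics are straightforward to illustrate: items are embedded as vectors and retrieved by similarity rather than exact match. A minimal sketch with toy three-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and production systems use approximate nearest-neighbour indexes rather than this brute-force scan):

```python
# Toy semantic retrieval by cosine similarity over made-up embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

corpus = {                                   # item -> toy embedding vector
    "invoice scan": np.array([0.9, 0.1, 0.0]),
    "cat photo":    np.array([0.0, 0.8, 0.6]),
    "receipt":      np.array([0.8, 0.2, 0.1]),
}
query = np.array([0.85, 0.15, 0.05])         # toy embedding of a search query

best = max(corpus, key=lambda item: cosine_similarity(query, corpus[item]))
print(best)                                  # the semantically closest item
```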
AI data pipeline technology is being developed to find, select, filter, and feed data to LLMs for training and inference. Many suppliers are developing this capability, including Komprise, Hammerspace, and the vector database companies, both dedicated and multi-type.
AI, as we can see, is transforming the storage industry. It is affecting backup suppliers as they see their vast troves of backup data being a great resource for an organization’s Retrieval-Augmented Generation (RAG)-influenced LLMs, aimed at making generally trained LLMs applicable to proprietary data sources and less likely to produce inaccurate responses (hallucinate).
Backup of SaaS application data is being developed and promoted by Commvault, HYCU, Rubrik, Salesforce, Veeam, and many others, and will, we think, grow and grow and grow.
Two other developments are ongoing, but happening at a slower rate. The first is disaggregated or Web3 storage, in which an on-premises data center’s spare storage capacity is made available to a public cloud storage company, which sells it at a substantially cheaper price than mainstream public cloud storage such as AWS S3 or Azure Blob. Suppliers such as Storj and Cubbit are active here.
Lastly, the data protection industry may have started showing signs of consolidation, with Cohesity buying Veritas. We stress the “may.”
The storage industry is multi-faceted and fast developing, because the amount of data to be stored is rising and rising, as are its costs, and the speed of access is a constant obstacle to getting processor work done. It is these three pressures that cause the constant frustration which drives businesses to improve and re-invent storage.
Newly public NAND and SSD maker Kioxia reported sales of ¥450 billion ($3 billion), down 31 percent year-on-year, for its Q3 ended December 31 2024.
These are the first published quarterly numbers since Kioxia’s IPO and Tokyo stock exchange listing in December last year. It also recorded a ¥76.1 billion ($510.7 million) IFRS profit, a turnaround since its year-ago ¥85.9 billion ($594 million) loss. These results were near the middle of Kioxia’s guidance range.
EVP and executive officer Tomoharu Watanabe said: “Overall we continue to see growth in our traditional markets such as smartphones and PCs, and the rapid proliferation of AI, which requires significant amounts of storage, presents an exciting driver for further growth.”
Cash and cash equivalents: ¥174.3 billion ($1.17 billion)
Kioxia splits its revenue sources three ways: SSD and Storage, Smart Devices and Others.
“Others” means retail products such as SD cards and USB sticks plus revenue from chip sales to Western Digital. “Smart Devices” refers to memory for smartphones, tablets, TVs, other consumer devices and the automotive NAND market. “SSD and Storage” refers to NAND and SSD sales to the PC, data center and enterprise markets. A Kioxia chart shows the recent segment revenue history:
There was a 53.5 percent year-over-year rise in Smart Device sales but a sequential 23 percent decline due to phone makers using up inventory rather than buying new NAND.
SSD and Storage revenues have been growing since last year, roughly doubling due to AI server demand and traditional server replacement. Datacenter and enterprise SSD sales accounted for approximately half of this segment’s revenues.
Kioxia estimates its market share for datacenter and enterprise SSDs (PCIe) in calendar 2023 was about 10 percent. It is still calculating its calendar 2024 share.
CFO Hideki Hanazawa said customer qualifications of Kioxia’s gen 8 BiCS flash across the three segments were on schedule and he expects a recovery in demand in the second half of calendar 2025.
Kioxia has a joint NAND fabrication partnership with Western Digital and WD’s SSD revenues for its latest quarter were $1.88 billion, up 12 percent year-over-year. Kioxia believes that, with the spin-off of WD’s flash business, SanDisk, “there will be no impact on the business operations or growth of our company and our joint ventures together.”
The outlook is for growth in smartphones, PCs, datacenters and enterprises as well as traditional server SSDs. It will expand BiCS gen 8 production to cater for AI application demand and a general recovery in the NAND sales pipeline in the second half of 2025. Kioxia expects a NAND bit growth rate in the low 10 percent range in calendar 2025 with fourth quarter revenue expected to be ¥330 billion ± ¥15 billion, a 2.5 percent increase on the year-ago Q4 at the mid-point.
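Those guidance figures also imply the year-ago quarter’s revenue, as a quick check using only the numbers stated above shows:

```python
# Implied year-ago Q4 revenue from the guided midpoint and growth rate.
q4_guidance_midpoint = 330     # billion yen, +/- 15 billion
yoy_increase = 0.025           # 2.5 percent increase at the midpoint

implied_prior_q4 = q4_guidance_midpoint / (1 + yoy_increase)
print(round(implied_prior_q4, 1))   # ~322.0 billion yen a year ago
```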
Profits in Kioxia’s Q4 will likely be down due to pressure on prices, given the excess inventory currently floating around the industry.
Full FY 2024 revenues should be ¥1.69 trillion ± ¥30 billion ($11.3 billion), a 57 percent year-over-year increase at the midpoint, making it Kioxia’s “highest annual revenue ever.”
Looking ahead, Kioxia says it will focus on SSDs that support PCIe Gen 5. Its BiCS Gen 9 NAND, with 300-plus layers, will be announced later this month.
Bootnote
Unlike US businesses, Japanese company fiscal years are named from the starting calendar year, not the finishing calendar year. Kioxia’s fiscal 2024 year runs from April 2024 to the end of March 2025.
Micron claims its latest business client SSD, the PCIe gen 5 4600 – aimed at AI PCs, gamers and professional users – is twice as fast at reading as its 3500 predecessor.
These gumstick format (M.2) drives are for OEMs and business system resellers, in contrast to Micron’s Crucial consumer brand with its T700/705 and P510 PCIe gen 5 M.2 drives for enthusiasts and gamers. The T700 and T705 are built with Micron’s gen 8 232-layer TLC NAND while the P510 uses the newer gen 9 276-layer product, still in TLC format.
Back in business land, the 3500 is a 232-layer TLC product using the slower PCIe gen 4 bus, half the speed of PCIe gen 5, with up to 7 GBps sequential read and write bandwidth. Micron is updating it with the 4600 model, using denser 276-layer TLC NAND and the gen 5 PCIe bus, like the P510, to deliver a drive providing up to 14.5 GBps read and 12 GBps write bandwidth. The available capacities are 512 GB, 1 TB, 2 TB, and 4 TB. It delivers 2,100,000 IOPS for both random reads and writes, and has low latencies: 50μs for reads and 12μs for writes.
Prasad Alluri, VP and GM for Client Storage at Micron, claimed: “With the 4600 NVMe SSD, users can load large language models in less than one second, enabling PC experiences in data-intensive applications, especially for AI. As AI inference runs locally on the PC, the transition to Gen5 SSDs addresses the increased need for higher performance and energy efficiency.”
Performance in various application workloads, measured with the SPECwpc benchmark, averages more than 1.5x that of the 3500 product:
The power draw numbers are:
Sleep: <3.5mW
Active Idle: <150mW
Active Read: <8,500mW
There is a whole raft of security features: AES-256 encryption, TCG Opal and Pyrite, signed firmware and boot, Security Protocol and Data Model (SPDM), Data Object Exchange (DOE) and Device Identifier Composition Engine (DICE), all helping provide improved protection for user data.
Samsung and SK hynix make competing business-class M.2 format PCIe gen 5 drives. The Samsung PM9E1 offers up to 14.5 GBps sequential read and 13 GBps sequential write, making it slightly faster. The Platinum P51 from SK hynix, also aimed at the AI PC market, has similar performance: 14 GBps when reading and 12 GBps when writing.
Micron may be developing a new SOCAMM (System On Chip Advanced Memory Module) memory format for AI PCs with SK hynix and NVIDIA, according to a report in Korea’s SE Daily. This NVIDIA-driven initiative will use LPDDR5X DRAM instead of SODIMM modules with DDR4 or DDR5 memory. The report says SOCAMM will be a detachable, large-fingernail-sized product, allowing for upgrades, and will have 694 ports whereas an average PC DRAM module has around 260. That means its data IO bandwidth will be much greater, supporting AI models running on the AI PC.
The Micron 4600 product brief can be accessed here.
Log data-focused AI security startup DeepTempo has hired its first sales VP some 16 months after being founded.
Chris Bowen
DeepTempo was set up by Evan Powell in November 2023 and emerged from stealth in November last year. It has hired Chris Bowen, who was SVP of sales at Hammerspace until the data orchestrator hired WEKA’s Jeff Gianetti as its first official CRO in January.
Powell has a long track record of working at storage industry startups, including MayaData (bought by DataCore), StackStorm (bought by Brocade), and Nexenta (bought by DDN).
“Our Tempo software sees attacks that other solutions miss while offering significant cost savings versus aging rules and ML-based software,” said Powell in a canned statement.
DeepTempo is developing Log Language Models (LLGMs), a type of large language model that inspects log data and recognizes attack incidents, and, instead of sending raw log data, forwards detected incidents to security information and event management (SIEM) resources. Its Tempo module runs as a native Snowflake data warehouse app and can detect attack indicators in the Snowflake environment.
Evan Powell
In essence, a DeepTempo LLGM should be able to run in any data lake or in front of any log data stream, recognize deviations from normal log data patterns that could indicate malware activities, and send incident alerts via a connector to a SIEM app. The DeepTempo app runs on-premises and is capable of running on a single CPU or GPU. We’re told it can scale horizontally in any Kubernetes-based workload management system. Tempo is currently available in the Snowflake NativeApp marketplace.
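The general pattern described, scoring incoming log lines against a learned notion of normal and forwarding only suspected incidents to a SIEM, can be sketched schematically. This toy is not DeepTempo’s model; the keyword heuristic merely stands in for a trained LLGM, and the SIEM connector is a stub:

```python
# Schematic toy of incident-only forwarding; not DeepTempo's implementation.
from typing import Iterable

def score(log_line: str) -> float:
    """Stand-in for a trained log model; here, a crude keyword heuristic."""
    suspicious = ("failed login", "privilege escalation", "unknown binary")
    return 1.0 if any(s in log_line.lower() for s in suspicious) else 0.0

def send_to_siem(incident: dict) -> None:
    print("SIEM alert:", incident)            # stub for a real SIEM connector

def forward_incidents(log_stream: Iterable[str], threshold: float = 0.5) -> None:
    for line in log_stream:
        s = score(line)
        if s >= threshold:                    # only incidents leave the pipeline
            send_to_siem({"incident": line, "score": s})

forward_incidents([
    "user alice logged in",
    "Failed login for root from 203.0.113.9",
])
```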
DeepTempo has completed a BNY Ascent Program engagement, whereby it worked with Bank of New York (BNY) engineers, executive teams, and clients on a proof-of-concept (PoC) validation. The PoC terms are based on input from BNY clients and the program provides access to a group of VCs, perhaps resulting in a BNY investment. DeepTempo’s funding history is unknown. However, a BNY Ascent Program requirement is that an A-round of funding has been completed.
By hiring Bowen, DeepTempo is signaling that it has product to sell and will probably aim its sales pitches at Snowflake’s channel. If this exercise is successful, Powell may well be looking at another of his startups being acquired.
NetApp has alliances with many data protection suppliers such as Cohesity-Veritas, Commvault, Rubrik, and Veeam. Yet it also has its own BlueXP Backup and Recovery offering. Is it competing with these businesses? How do the in-house NetApp and external partner data protection offerings relate to each other?
We talked to Gagan Gulati, NetApp’s SVP and General Manager for Data Services, to find out the basic data protection situation.
Blocks & Files: I wonder how NetApp positions BlueXP Backup and Recovery alongside data protection products from its partners.
Gagan Gulati
Gagan Gulati: With Cohesity we’ve worked in the past, even today with Veritas. We are actually integrating actively with Veeam right now on a couple of things; they want us to work on backup and Kubernetes. There is just so much we do with our partners, and our customers want us to do that.
Generally speaking, there are a set of use cases that these partners do very well and there are use cases that they don’t do very well. I’ll give you a few examples. If I’m a customer and I want to take a backup and put it on S3 Glacier or on a tape, it can take eight hours, 16 hours, or more. I want to do this heterogeneously for all my storage, for my Office 365 use case, for my Salesforce use case. We want to help our partner do the best job, but we have a lot of use cases like accidental deletion of data or the dev test environment … where the RPO and RTO needs are something that a partner can’t meet. So I have a highly critical application. I have a highly critical workload and I want RPOs and RTOs that are in minutes, not hours or days.
At NetApp, we have built products over the last decade that help you do that. For example, SnapCenter or SnapManager, which is one of the crown jewels of the company, which helps our database owners and application owners take snapshots and backups within minutes and then recover within minutes, while the partners’ products, because they’re designed for a heterogeneous environment, could take hours and sometimes days.
The BlueXP backup and recovery product as it stands today came into being from something very similar, which is customers demanding two or three things. One, an RPO and RTO that they can’t get from our partners, essentially from their main players. So it’s not that we are trying to go and replace a Commvault, for example, at a customer account. We can’t; they’re a heterogeneous player. We want to work with them.
But our customers are also pretty clear, whether they’re an EDA customer or M&E customer who use our backup and recovery service today, that for a particular use case or these use cases that I’m running, when I’m making a 3-2-1 or a 3-2-1-1 backup, and I want to recover in a number of minutes, not hours, we want to use software that is designed for NetApp.
We want to use backup and recovery software designed for NetApp that gives them the best RPO and RTO. So that’s number one. That’s a huge use case.
Blocks & Files: And the second thing?
Gagan Gulati: Number two, what happens today is that, generally speaking, when you deploy any partner product, you need media servers, you need scanners to go in and scan. It’s an outside-in process. You’re cataloguing that way.
What happens when you use NetApp backup and recovery product? The TCO gets really low because of our storage efficiency, because of the way we incrementally backup. If you have volume, like 10, 20 petabytes of data for a workload that you want to back up for example, and then you want to recover quickly, you have to use something designed for that on NetApp storage. That’s where our [BlueXP] products shine.
By no means do we want to go and say … we are a heterogeneous backup player. We will never be. But we do focus on getting our customers the best of the ability for the specific use cases that we want to work on. So that’s the current state; that’s basically where NetApp [BlueXP] backup is [placed].
Blocks & Files: You are outlining a way in which customers can get the best of both worlds, I think.
Gagan Gulati: Correct. I think the only point I’ll make is that, from what we have seen within customers, we cater to different personas. Commvault and Rubrik etc. have go-to-market motions that typically push to the chief compliance officer and also to the head of IT, the CIO.
In our case, we start with the application owners and the database owners. Then, of course, it goes up to the storage admin and the VP of IT and the CIO because of the needs and the requirements that we fulfilled just being different. Or us being able to fulfill a requirement because, hey, ‘we can store your data in an extremely efficient way with the highest RPO and RTO. And we don’t require any local infrastructure’.
What happens if you are an Oracle owner or you are an SAP owner, you’re a SQL owner or you are running a company’s payment app and you have SLAs to maintain for RPO and RTO and you want it to be done in application-consistent ways? Then, unfortunately, a partner may not be able to help as much as you may think because their data movers are generally slow.
They copy file by file whereas we are making incremental snapshots and we lock them. So there is just a certain set of advantages that we have in those use cases.
[With] incremental, you can make incremental snaps every minute if you want to, every second if you want to and not have to worry about eight hours later, ‘oh my God, I lost my data for eight hours. What should I do now?’ So from an application owner and a database owner point of view, or even from a typical EDA workload, those are the capabilities that when customers need them, they know they have to come to us. Those personas come work with us directly.
Comment
Gulati is claiming that when a backup/data protection scenario involves fast RPO/RTO numbers and applies within a homogeneous NetApp environment, then BlueXP Backup and Recovery is the way to go. But when, on the other hand, the data protection scenario involves heterogeneous systems and the RPO/RTO requirements are more relaxed, then a third-party data protection supplier such as Cohesity-Veritas, Commvault, Rubrik, or Veeam is a better choice.
Microsoft is offering more predictable billing for disk-based file storage on its Azure public cloud service, with a provisioning model similar to that of its SSD-based file storage, as an alternative to its pay-as-you-go system.
Azure has two file storage tiers: Premium, using fast SSD storage, and Standard, using hard disk drives (HDDs). Premium is billed with a Provisioned v1 model based on the capacity when a fileshare is created. A user selects the required capacity and Azure allocates IOPS and throughput (bandwidth) based on it. To get more IOPS and/or throughput, you must provision more capacity.
A pay-as-you-go model was the standard way to bill HDD-based file storage on capacity used, throughput, and data transfer cost. It has three access tiers: transaction-optimized for transaction-heavy workloads, hot access for balanced capacity and transaction needs, and cool access for capacity-centric workloads. Azure monitored this scheme at the storage account level, not the fileshare level, making fileshare costs difficult to ascertain.
Vybava Ramadoss
This model makes the actual costs incurred highly unpredictable. Azure blogger Vybava Ramadoss, Azure Storage Principal Product Lead, says: “Usage-based pricing can be incredibly challenging to understand and use because it’s very difficult or impossible to accurately predict the usage on a file share.”
Microsoft has now made a Provisioned v2 pricing model available for the Standard (HDD-based) file storage tier. Unlike the Provisioned v1 model, users can provision capacity, IOPS, and throughput separately for a fileshare. Azure will, however, “recommend IOPS and throughput provisioning to you based on the amount of provisioned storage you select.”
Provisioned v2 fileshares can span 32 GiB to 256 TiB in size, with up to 50,000 IOPS and 5 GiB/sec throughput, and users can dynamically scale up or down their application’s performance as needed, without downtime.
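A monthly bill under this model is simply the sum of the three independently provisioned dials multiplied by their unit rates. The sketch below uses made-up rates purely to show the shape of the calculation; these are not Microsoft’s published prices:

```python
# Illustrative Provisioned v2 bill composition with hypothetical unit rates.
provisioned_gib   = 2048      # capacity chosen for the share
provisioned_iops  = 10_000    # IOPS chosen independently of capacity
provisioned_mibps = 512       # throughput chosen independently as well

rate_per_gib   = 0.016        # hypothetical $/GiB-month
rate_per_iops  = 0.0004       # hypothetical $/IOPS-month
rate_per_mibps = 0.04         # hypothetical $/(MiB/sec)-month

monthly_cost = (provisioned_gib * rate_per_gib
                + provisioned_iops * rate_per_iops
                + provisioned_mibps * rate_per_mibps)
print(f"${monthly_cost:.2f}/month")   # $57.25 with these made-up rates
```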
Azure has increased Provisioned v2 file share characteristics compared to the pay-as-you-go scheme:
Azure monitors fileshare usage with this pricing model under five metrics:
Transactions by Max IOPS, which provides the maximum IOPS used over the indicated time granularity.
Bandwidth by Max MiB/sec, which provides the maximum throughput in MiB/sec used over the indicated time granularity.
File Share Provisioned IOPS, which tracks the provisioned IOPS of the share on an hourly basis.
File Share Provisioned Bandwidth MiB/s, which tracks the provisioned throughput of the share on an hourly basis.
Burst Credits for IOPS, which helps you track your IOPS usage against bursting.
The Provisioned v2 pricing model is now available in 24 Azure regions in North and South America, Europe, and the Asia-Pacific area.
In comparison, Azure NetApp Files has Standard, Premium, and Ultra storage tiers that are based on 1 TiB increments. These are priced differently for single encryption, double encryption, and cool tier access, and are basically much more expensive on a capacity basis.
Azure Native Qumulo instances come in hot or cold tiers with prices starting at $3,700/month for hot and $2,500/month for cold tiers. Usage-based pricing is applied using capacity, throughput, and IOPS metrics. A table compares basic capacity/month prices, normalized to per-GiB numbers, for Azure Files, NetApp, and Qumulo file services in Azure:
This table only provides basic capacity prices. A full pricing comparison for the various file storage services will need to take into account throughput and IOPS charges plus whatever ancillary charges might apply, such as snapshot costs.
In contrast, Dell APEX File Storage for Azure is priced on annual licensed capacity in TB: $642.81 for 12 months and $1,578.8 for 36 months. Dell says the cost of running the product is a combination of a software plan charge plus Azure infrastructure costs for the virtual machines on which the Dell APEX File Storage for Azure software runs.
Currently, there is only one software plan, the Cluster Deployment Plan, and it is free. There are no details available on pricing by VM instance.
StorPool is developing a Disaster Recovery Engine (DRE) for Linux-based virtual machines based on site-to-site replication between its storage arrays.
A Linux KVM (kernel-based virtual machine) system has, like VMware, a hypervisor running virtual machines. The VMs are loaded from a storage system and, as they operate, make changes to their stored data. When StorPool provides the external storage array, these changes in the primary system can be replicated to a distant, secondary StorPool system. Should the primary system fail for any reason, the VMs can be restarted at the secondary site and use the replicated data.
StorPool says this type of VM-based data replication for disaster recovery is common in the VMware environment, blogging: “There is no KVM-equivalent for their disaster recovery, VMware Live Site Recovery (VLSR), formerly known as VMware Site Recovery Manager (SRM).” There is no such cross-site intelligence in the KVM area, meaning that any DR arrangement has to be provided by a third party or self-created and managed. The first alternative can be costly, and the second is difficult and needs ongoing maintenance.
The StorPool blog says: “With StorPool Disaster Recovery engine – a built-in component of the StorPool Storage solution – organizations can configure policies for data replication to remote sites and automate failover (and failback) between sites.”
StorPool comes as a Fully Managed Service/Storage-as-a-Service (STaaS) offering and DRE is integrated with StorPool’s existing licensing. StorPool clusters and systems at multiple sites can be managed from a single console, through cloud orchestration tools such as CloudStack, OpenNebula, OpenStack, or Proxmox. The company says many of its service provider customers build their own management tools using StorPool’s RESTful APIs.
StorPool DR configuration alternatives
The replication can be active-passive and 1-to-1, with an active primary site replicating data and VM recovery points to a passive secondary site. Failover is automated but not automatic; a person has to initiate the switchover.
There is a 1-to-1, bi-directional, active-active alternative between two datacenters, with each site replicating to the other. Should one site fail, the remaining single site starts the remote site’s replicated VMs and runs both workloads.
A less expensive many-to-1 DR option is to have several active primary sites replicating to a single secondary site, probably in a different geographic region. The thinking is that a disaster would likely strike one site only, and that two or more sites being hit simultaneously is extremely unlikely.
Imtiaz Khan
A further alternative is a multi-site, active-active system in which each site replicates some of its VMs to one target site and others to a different site, forming a multi-site mesh. The blog says this allows load-balancing across multiple sites. “You can think of it as a combination of multiple pairs of bi-directional sites configured into a larger solution.” StorPool suggests: “This model can be ideal for customers running and supporting multi-region clouds.”
RapidCompute is a web hosting business in Karachi, Pakistan. CTO Imtiaz Khan stated: “StorPool’s Disaster Recovery (DR) Engine has become a pivotal component of RapidCompute’s offerings, enabling us to integrate advanced DRaaS capabilities into our KVM-based OpenStack cloud platform.” It can “consistently meet stringent RPO and RTO objectives.”
StorPool’s Disaster Recovery Engine can protect environments with tens to thousands of VMs. It is now in public beta, with general availability scheduled for Q2 2025.
Aerospike unveiled Database 8, a major upgrade of its flagship multi-model distributed database. V8 adds distributed ACID transactions to support large-scale online transaction processing (OLTP) applications. Aerospike says v8 is the first real-time distributed database to guarantee strict serializability of ACID transactions with efficiency, claiming it costs a fraction of other systems.
…
Data mover Airbyte is offering customers predictable pricing based on capacity – rather than data volumes – to accommodate their need for artificial intelligence (AI), data lakes, and real-time analytics. A pilot rollout with customers over the past few months has met with overwhelmingly positive responses, plus the company gathered feedback from more than 500 organizations that cited issues with unpredictability and cost spikes with traditional volume pricing models. The new pricing applies to Airbyte Teams and Enterprise products with pricing determined by the number of connections/data sources, frequency of data refreshes (daily, hourly, real-time), and the pipeline scheduling requirements.
For Airbyte Cloud, there are no changes because pay-as-you-go and credit-based pricing can work well for specific customers, especially smaller organizations with fewer data sources and more predictable data needs that benefit from not having to build and maintain that infrastructure themselves. More details here.
…
Ataccama, an enterprise data quality, data management, and data governance player, has launched Ataccama Lineage, a new module within Ataccama ONE, its flagship unified data trust platform. It says Lineage provides enterprise-wide visibility into data flows, offering organizations a view of their data’s journey from source to consumption. It helps teams trace data origins, resolve issues, and ensure compliance. It’s integrated with Ataccama’s data quality, observability, governance, and master data management capabilities, and enables organizations to make faster, more informed decisions, such as ensuring audit readiness and meeting regulatory compliance requirements.
…
Cobalt Iron says its Compass enterprise SaaS backup platform has been recognized as a TOP 5 Enterprise VMware Backup Solution for 2025-26 by DCIG. Download the 2025-26 DCIG TOP 5 Enterprise VMware Backup report here.
…
Commvault announced that the Commvault Cloud Platform can be deployed from the AWS, Azure, GCP, and VMware marketplaces utilizing CIS-hardened images. These are software images that are pre-configured to align with the Center for Internet Security (CIS) Benchmarks, meeting security benchmarks out of the box.
…
Data streamer Confluent and data lakehouser Databricks have a partnership and have added new bi-directional integrations between Confluent’s Tableflow, Delta Lake, and Databricks Unity Catalog to provide customers with real-time data for AI-driven decision-making. Tableflow with Delta Lake makes operational data available immediately to Delta Lake’s ecosystem. Confluent and Databricks customers will be able to bring any engine or AI tool, such as Apache Spark, Trino, Polars, DuckDB, and Daft, to their data in Unity Catalog. Operational data from Confluent becomes a first-class citizen in Databricks, and Databricks data is accessible by any processor in the enterprise.
…
Databricks has launched SAP Databricks, a strategic product and go-to-market partnership with SAP that natively integrates the Databricks Data Intelligence Platform within the newly launched SAP Business Data Cloud. The partnership combines business data in SAP with the Databricks platform for data warehousing, data engineering, and AI all governed by Databricks Unity Catalog. SAP Databricks allows customers to combine their SAP data with the rest of their enterprise data. Through bi-directional sharing of data via Delta Sharing between their SAP Databricks environment and their native Databricks (non-SAP) environment, they can unify all their data without complicated data engineering.
SAP Databricks is sold by SAP as part of SAP Business Data Cloud, and will be available in a staged rollout on AWS, Azure and Google Cloud.
…
As part of its big Infinia object storage for AI launch, and following its $300 million Blackstone investment, DDN is hiring execs. Moiz Kohari is DDN’s VP of Enterprise AI and Augmented Insights. Doug Cook is VP of Global System Integrators. Santosh Erram is VP of Strategic Partnerships and Business Development, working with NVIDIA, hyperscalers, and AI.
…
Accelerated Compute Fabric (ACF) switch chip developer Enfabrica has announced the opening of Enfabrica India, a Hyderabad-based office and R&D center, to grow the company’s global footprint, attract engineering and development talent, and scale its silicon and software product development operations. The AI market in India is projected to reach $8 billion by 2025, growing at a compound annual growth rate of over 40 percent from 2020 to 2025. Enfabrica is building its team amongst Hyderabad’s vibrant academic community of talented engineers and developers.
…
Hemanth Vedagarbha.
AI-focused data warehouser Firebolt has hired Hemanth Vedagarbha as its first-ever president. He will oversee all go-to-market (GTM) and customer-facing functions – including Sales, Customer Success, Marketing, Business Development, Field Engineering, Technical Support, Revenue Operations and Partnerships & Ecosystems. He brings over 20 years of “successful, high-growth enterprise SaaS experience at industry-leading companies such as Oracle and most recently Confluent.”
…
Data mover Fivetran has appointed Simon Quinton as GM for EMEA, Suresh Seshadri as CFO and Anand Mehta as Chief People Officer (CPO).
…
Forrester has issued a Forrester Wave: Translytical Data Platforms, Q4 2024 report, with Oracle at the top of the translytical tree, followed by MongoDB, Google and InterSystems. SingleStore is the leading strong performer. Get a copy of the report here.
…
HarperDB says it collapses fragmented systems like MongoDB, Redis, Kafka, and application servers into one high-performance technology platform, removing layers of resource-consuming logic, serialization, and network processes between each technology in the stack (data, application, cache, and messaging). HarperDB users get a low-latency system with “limitless” horizontal scale. They could achieve 7x faster page loads and nearly 30x faster LCP (largest contentful paint) times, the company claims. Common backend processes take 100+ milliseconds on traditional systems, but take only 0.2-1 millisecond with HarperDB, it claims.
…
Hitachi Vantara is allying with BMC Software to combine its VSP One and Hitachi Content Platform (HCP) storage offerings with BMC’s AMI mainframe SW to help mainframe users reduce costs, optimize operations, and secure mainframe data. The AMI products involved are AMI Cloud, AMI Security, AMI Ops, and AMI devX.
…
Log data streaming data lake supplier Hydrolix has achieved the Amazon CloudFront Ready designation, part of the Amazon Web Services (AWS) Service Ready Program. Hydrolix integrates with AWS CloudFront, WAF and Elemental services. More edge services integrations are coming in the first half of 2025.
…
Hydrolix announced a new connector for Apache Spark to enable Databricks users to store massive amounts of time series data over long periods of time at full fidelity in the Hydrolix data lake. Using Hydrolix, Databricks users can now unleash the analytical power of Databricks against all of their data and model across longer time periods, such as year-over-year and multiyear data sets, to gain better insights.
…
Lenovo research finds that, in 2025, AI budgets are expected to nearly triple compared to the previous year, comprising nearly 20 percent of total IT budgets. 63 percent of organizations globally prefer using on-premises and/or hybrid infrastructure deployments for AI workloads. Data science, along with IT services and infrastructure, will be the top AI areas of investment over the next 12 months. 42 percent of organizations are expected to focus on implementing GenAI use cases, a significant increase from 11 percent in 2024. Fill out a form to download an eBook about this here.
…
Multi-cloud storage manager startup Lucidity announced a $21 million Series A investment led by WestBridge Capital with participation from existing investor Alpha Wave. Its cloud management SW automates block storage expansion and shrinking of storage volumes based on real-time data demands, helping the world’s largest enterprises cut costs by up to 70 percent. Its NoOps, autonomous, application-agnostic layer seamlessly integrates with existing applications and environments – without requiring any code to be changed. Since its founding in 2021, Lucidity says it has achieved 400 percent year-over-year growth, although as a private outfit, it does not provide base figures.
Nitin Bhadauria, cofounder at Lucidity, said: “Lucidity delivers the only platform for ITOps and DevOps organisations to automatically manage and optimize their block storage in real-time across all three major cloud providers while significantly reducing costs.”
…
Open-source, in-memory graph database supplier Memgraph has released v3.0 SW, which it says will enable firms to make their data GenAI-ready and create applications, such as chatbots or agents, that are more performant. It integrates vector search to combine the creative power of LLMs with the precision of knowledge graphs. GenAI applications powered by 3.0’s standout feature, Retrieval-Augmented Generation in graph (or GraphRAG), enhance reasoning, reduce hallucinations, and work securely within an enterprise’s unique context and data. It has unique dynamic algorithms, tailored for real-time data analysis and high-throughput use cases, which enable fast, continuous responses to incoming data without requiring LLM retraining. Users can quickly create knowledge graphs that enhance LLMs while preventing the accidental exposure of proprietary information, safeguarding an organization’s IP.
…
Cloud file services supplier Nasuni says in 2024 it grew its revenue by 26 percent and stayed profitable and cash flow positive. It now manages over 500 petabytes of total capacity. More than half of its customer base also expanded their deployments, with over 600 expansions recorded throughout the year. Nasuni customers include Mattel, Autodesk, Tetra Tech, Dow, Dyson, Boston Scientific, and the State of Arizona. It increased its global workforce to just under 600, with many employees based out of its Boston headquarters and in the United Kingdom, Ireland, India, and other locations.
…
Nutanix released findings for its 7th annual Enterprise Cloud Index, saying 80 percent of organizations have already implemented a GenAI strategy. It added that 98 percent of respondents face challenges when it comes to scaling GenAI workloads from development to production. The #1 challenge associated with this is integration with existing IT infrastructure. Have a look at the report here.
…
Back in October 2024 Overland Tandberg said it was exiting the tape archive data protection business to focus on its RDX removable disk drive offerings. Tape storage hardware and services supplier MagStor has just announced availability of Transition and Support Services specifically for Overland Tandberg customers and resellers. MagStor sells tape hardware that is compatible with Overland Tandberg tape hardware products. Overland customers and resellers can contact MagStor for pricing on solutions to replace their Overland branded products. MagStor MSRP pricing is always lower than the former Overland Tandberg products as well. More information here.
…
OWC (Other World Computing), which supplies high-performance storage, docks, and memory cards for video and audio production, photography, and business, and ARCHIWARE, a provider of data management SW, have a strategic partnership to deliver shared storage, cloning, backup, and archiving for collaborative workflows. The ARCHIWARE P5 platform will now be natively integrated with OWC’s Jellyfish Shared Storage for Video Production to enhance collaboration capabilities, ensure data protection, and future-proof asset management.
…
OWC announced GA of Dock Ejector 2.0, for efficiently and safely ejecting all connected devices, including SoftRAID and AppleRAID volumes. This updated version works with all docks, including non-OWC docks and hubs, expanding compatibility and drive protection to all Mac and PC users. By ensuring all data has been written before any disk is unmounted, you can safely eject your dock without worrying about losing or fragmenting files.
…
PNY Technologies announced a new office in Saudi Arabia to strengthen PNY’s local presence to support businesses in their AI and digital transformation projects.
…
Snowflake unveiled Cortex Agents, a fully managed service that simplifies integration, retrieval, and processing of structured and unstructured data to enable Snowflake customers to build high-quality AI agents at scale. Cortex Agents, now available in public preview, orchestrates across structured and unstructured data sources, whether they be Snowflake tables or PDF files stored in object storage, and breaks down complex queries, retrieves relevant data, and generates precise answers, using Cortex Search, Cortex Analyst, and LLMs. Cortex Analyst is now generally available with Anthropic Claude as a key LLM powering agentic text-to-SQL for high-quality structured data retrieval. Cortex Search has achieved state-of-the-art unstructured data retrieval accuracy, beating OpenAI embedding models by at least 12 percent across a diverse set of benchmarks, including NDCG@10.
…
Virtualized datacenter supplier VergeIO had a great final 2024 quarter, booking 100 percent more new annual contract value (ACV) than the prior quarter and 12 percent more than VergeIO’s previous record set in Q1 2024. New customers won nearly doubled, setting a new record. A global hyperscaler is funding a POC showing how VergeOS can virtualize NVIDIA’s latest GPU to be used remotely on another host for AI workloads. It anticipates winning additional new enterprise accounts with the launch of VergeOS on a global data center provider’s bare-metal offerings.
Florida startup Lonestar Data Holdings wants to base disaster recovery (DR) services on the Moon and is testing a small server hardware setup on a forthcoming Intuitive Machines IM-2 commercial lunar landing mission.
Chris Stott
The first Intuitive Machines lunar landing attempt with its IM-1 spacecraft was partially successful as the vehicle came to rest on its side, restricting its capabilities. But it did succeed in transferring data to and from the lunar surface. The IM-2 mission has an Athena landing vehicle and this will contain a Freedom IT unit, which Lonestar describes as “the first datacenter off planet and the first to the Moon.” As the “datacenter” actually consists of a Microchip RISC-V processor running Ubuntu Linux with a Phison SSD, this might be considered overstated, but it’s a start. The IM-2 vehicle is set for launch atop a SpaceX rocket in a multi-day launch window opening on February 26.
Florida-based Lonestar was founded by Chris Stott, chairman and CEO, in 2018 to provide data services such as DR and Resilience-as-a-Service from the Moon. The thinking is that the Moon is immune to natural disasters that affect the Earth and so provides a safer data haven. Stott says on LinkedIn that he has long been involved in the space, satellite, and telecommunications area. Lonestar’s website, no stranger to enthusiasm, talks about launching a series of increasingly capable multi-petabyte data storage spacecraft to orbit the Moon.
Lonestar image of IM-2 Athena lander
The Freedom IT unit has a 3D-printed casing designed by BIG, an architecture and design group led by Danish architect Bjarke Ingels. The exterior is said to “reflect the silhouettes of NASA astronauts Charlie Duke (Apollo Moonwalker) and Nicole Stott (Space Station Space Walker).” It is somewhat unexpected that a small computer system in a lunar landing vehicle would have resources devoted to a casing that, once enclosed in the Athena vehicle and loaded into the SpaceX launch rocket, will never be seen again. Presumably Lonestar wants to capture people’s imagination with the idea.
Freedom unit inside the Athena vehicle
It says the Freedom payload hosts a number of storage and edge processing customers, without providing details, and the capacity on this mission “is already sold out.”
Phison has previously had an 8 TB M.2 2280 SSD gain NASA Technology Readiness Level 6 (TRL-6) certification. It is partnering with SpaceBilt to send the world’s first 100-plus TB data storage and edge compute system, the Large in Space Server (LiSS), to the International Space Station (ISS) later this year.
Comment
The concept of a lunar datacenter providing global DR services to organizations on Earth has merit as a way of avoiding terrestrial disasters, but there are several “buts.”
The datacenter’s physical infrastructure has to be transported to the Moon and installed and maintained there. However much Earth-to-Moon cargo carrying fees have declined, they – and the on-Moon construction and installation expenses – are likely still much more costly than building their equivalent on Earth.
Even if the Moon offers a safer environment than Earth, it has its own difficulties, such as very low temperatures and no atmospheric defense against solar radiation and meteorites. By placing DR sites in remote underground coal mines, or in distributed sites across the Earth’s surface, the terrestrial risks can be mitigated, possibly rendering lunar DR unnecessary.
Western Digital revealed its HAMR disk technology migration plans in an Investor Day session and already has HAMR drives undergoing qualification by customers.
The Investor Day session was held to reassure investors ahead of the spin-off of the Sandisk NAND and SSD business unit expected by the end of this month. The spin-off is due to the WD stock price not reflecting the actual value of Sandisk, with the hope that an independent Sandisk will have a multi-billion dollar valuation, returning gains to current WD shareholders who will be given 0.33333 shares of Sandisk stock for each WD share they own.
The Sandisk-less Western Digital will be a hard disk drive manufacturer led by CEO Irving Tan. He and his team told investors that there will be 3x growth in generated data to 394ZB from 2023 to 2028. Over half, 59 percent, will be in enterprise data centers and the public cloud, 15 percent in edge devices (in-store controllers, substations, branch offices, etc.) and 26 percent in endpoint devices such as PCs and laptops.
Core and Edge data centers will hold 3,043 EB in 2028, with nearline HDDs accounting for the bulk of it, as a chart indicates.
This will be because, WD says, enterprise SSDs will continue to cost 6x more per TB than nearline disk drives, and the acquisition cost is the majority of total lifetime ownership costs for SSDs and HDDs:
It says SSDs have a 3.6x greater TCO than disk drives taking acquisition, power draw and other costs into account.
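The way a 6x acquisition-price gap can shrink to a 3.6x TCO gap is simple arithmetic once operating costs are added in. The split below uses hypothetical numbers chosen only to reproduce WD’s ratios; the actual cost components are not disclosed:

```python
# Hypothetical per-TB cost split (arbitrary units) showing how a 6x purchase
# premium can dilute to a ~3.6x TCO premium once operating costs are counted.
hdd_acquisition, hdd_operating = 100, 80          # acquisition is the larger share
ssd_acquisition, ssd_operating = 6 * 100, 48      # 6x purchase price, lower power

hdd_tco = hdd_acquisition + hdd_operating         # 180
ssd_tco = ssd_acquisition + ssd_operating         # 648

print(ssd_tco / hdd_tco)                          # 3.6
```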
WD will ensure it plays a strong role in supplying HDDs by producing higher-capacity drives with a migration to HAMR technology. It currently ships 26TB conventional and 32TB shingled drives using ePMR recording technology with microwaves helping legacy perpendicular magnetic recording (PMR) create smaller bit areas.
But it needs 11 platters to achieve this, whereas Seagate has 32TB, 10-platter shingled HAMR drives nearing general availability, with 36TB in development and 40TB drives slated for later this year. This gives Seagate an areal density and cost-of-goods advantage.
Consequently WD has revealed its HDD roadmap out to 2030 and beyond, with 28TB conventional and 36TB shingled drives using ePMR tech later this year. HAMR drives will appear in 2026, with 36TB conventional and 44TB shingled capacities. WD said it is planning to enter HAMR mass production in late 2026/2027.
It is promising 80TB conventional and 100TB shingled drives in the 2030+ timeframe with greater than 100TB drives after that, built with HDMR (Heated Dot Magnetic Recording).
It says the majority of its capital expenditure and budget is allocated to HAMR drive development, and it has begun hyperscaler testing of HAMR drives. Seagate found such testing took many years. We’ll see if WD can avoid that testing trap. Wedbush analyst Matt Bryson commented: “Getting HAMR right on the first try seems ambitious.”
You can check out WD’s Investor Day presentation here.
Bootnote
HDMR is based on ultra small, non-interacting, and thermally stable (at room temperature) magnetic dots. They are heated by laser, as with HAMR, to lower their resistance to magnetic polarity changes (coercivity) and so enable bit value writing.
Bit-patterned media. Intevac image.
HDMR can be viewed as a combination of HAMR and bit-patterned media (BPM), with one grain used per bit. With BPM there is an array of lithographically defined magnetic islands, as opposed to PMR and HAMR, which have a recording medium composed of a dense collection of random grains about 10nm in diameter. A recording bit is a magnetic domain, an area hundreds of nanometers in size containing many such grains, each of which has the same magnetic polarity.