
Storage news ticker – May 16


Open source ETL connector biz Airbyte announced progress in the first quarter with revenues up 25 percent, industry recognition, new product features, and hackathon results. Michel Tricot, co-founder and CEO of Airbyte, said: “This year is off to a great start on every level – customer acceptance and revenue growth, industry recognition, and product development as we continue to evolve our platform to offer the very best technology to our users. There is much more to do and we’re working hard on the next wave of technology that includes the ability to move data across on-premises, cloud, and multiple regions which is all managed under one control plane.” With over 900 contributors and a community of more than 230,000 members, Airbyte says it supports the largest data engineering community and is the industry’s only open data movement platform.

The UK’s Archive360 says it has released the first modern archive platform that provides governed data for AI and analytics. It’s a secure, scalable data platform that ingests data from all enterprise applications, modern communications, and legacy ERP into a data-agnostic, compliant active archive that feeds AI and analytics. It is deployed as a cloud-native, class-based architecture. It provides each customer with a dedicated SaaS environment to enable them to completely segregate data and retain administrative access, entitlements, and the ability to integrate into their security protocols. 

Support for enterprise databases including SAP, Oracle, and SQL Server enables streamlined ingestion and governance of structured data alongside unstructured content to provide a unified view across the organization’s data landscape. There are built-in connectors to leading analytics and AI platforms such as Snowflake, Power BI, and OpenAI. Find out more here.

Box CEO Aaron Levie posted an X tweet: “Box announced … new capabilities to support AI Agents that can do Deep Research, Search, and enhanced Data Extraction on your enterprise content, securely, in Box. And all with a focus on openness and interoperability. Imagine being able to have AI Agents that can comb through any amount of your unstructured data – contracts, research documents, marketing assets, film scripts, financial documents, invoices, and more – to produce insights or automate work. Box AI Agents will enable enterprises to automate a due diligence process on hundreds or thousands of documents in an M&A transaction, correlate customer trends amongst customer surveys and product research data, or analyze life sciences and medical research documents to generate reports on new drug discovery and development.”

“AI Agents from Box could work across an enterprise’s entire AI stack, like Salesforce Agentforce, Google Agentspace, ServiceNow AI Agent Fabric, IBM watsonx, Microsoft Copilot, or eventually ChatGPT, Grok, Perplexity, Claude, and any other product that leverages MCP or the A2A protocol. So instead of moving your data around between each platform, you can just work where you want and have the agents coordinate together in the background to get the data you need. This is the future of software in an era of AI. These new Box AI Agent capabilities will be rolling out in the coming weeks and months to select design partner customers, and then expand to be generally available from there.”

Datalaker Databricks intends to acquire serverless Postgres business Neon to strengthen Databricks’ ability to serve agentic and AI-native app developers. Recent internal telemetry showed that over 80 percent of the databases provisioned on Neon were created automatically by AI agents rather than by humans. Ali Ghodsi, Co-Founder and CEO at Databricks, said: “By bringing Neon into Databricks, we’re giving developers a serverless Postgres that can keep up with agentic speed, pay-as-you-go economics and the openness of the Postgres community.” 

Databricks says the integration of Neon’s serverless Postgres architecture with its Data Intelligence Platform will help developers and enterprise teams build and deploy AI agent systems. This approach will prevent performance bottlenecks from thousands of concurrent agents and simplify infrastructure while reducing costs. Databricks will continue developing Neon’s database and developer experience. Neon’s team is expected to join Databricks after the transaction closes.

Neon was founded in 2021 and has raised $130 million in seed and VC funding with Databricks investing in its 2023 B-round. The purchase price is thought to be $1 billion. Databricks itself, which has now acquired 13 other businesses, has raised more than $19 billion; there was a $10 billion J-round last year and $5 billion debt financing earlier this year.

Databricks is launching Data Intelligence for Marketing, a unified data and AI foundation already used by global brands like PetSmart, HP, and Skechers to run hyper-personalized campaigns. The platform gives marketers self-serve access to real-time insights (no engineers required) and sits beneath the rest of the martech stack to unlock the value of tools like Adobe, Salesforce and Braze. Early results include a 324 percent jump in CTRs and 28 percent boost in return on ad spend at Skechers, and over four billion personalized emails powered annually for PetSmart. Databricks aims to become the underlying data layer for modern marketing and bring AI closer to the people who actually run campaigns.

Data integration supplier Fivetran released its 2025 AI & Data Readiness Research Report. It says AI plans look good on paper but fail in practice, AI delays are draining revenue, and that while centralization lays the foundation, it can’t solve the pipeline problem on its own – data readiness is what unlocks AI’s full potential. Get the report here.

Monica Ohara

Fivetran announced Monica Ohara as its new Chief Marketing Officer. Ohara brings a wealth of expertise, having served as VP of Global Marketing at Shopify and, at Lyft, driven rider and driver hyper-growth through the company’s IPO.

In-memory computing supplier GridGain released its GridGain Platform 9.1 software with enhanced support for real-time hybrid analytical and transactional processing. The software combines enhanced column-based processing (OLAP) and row-based processing (OLTP) in a unified architecture. It says v9.1 is able to execute analytical workloads simultaneously with ultra-low latency transactional processing, as well as feature extraction and the generation of vector embeddings from transactions, to optimize AI and retrieval-augmented generation (RAG) applications in real time. v9.1 has scalable full ACID compliance plus configurable strict consistency. The platform serves as a feature or vector store, enabling real-time feature extraction or vector embedding generation from streaming or transactional data. It can act as a predictions cache, serving pre-computed predictions or running predictive models on demand, and integrates with open source tools and libraries, such as LangChain and Langflow, as well as commonly used language models. Read more in a blog.
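The predictions-cache pattern GridGain describes – serve a pre-computed prediction if one is cached, otherwise run the model on demand and store the result – can be sketched generically. This is an illustration of the pattern only, not GridGain's API; `run_model()` is a hypothetical stand-in for a real predictive model:

```python
# Generic predictions-cache sketch: serve a pre-computed prediction when one
# exists, otherwise run the model on demand and cache the result.

def run_model(features):
    # Hypothetical stand-in for an expensive predictive model call.
    return sum(features) / len(features)

class PredictionsCache:
    def __init__(self):
        self._cache = {}

    def predict(self, key, features):
        # Fast path: serve the cached, pre-computed prediction.
        if key in self._cache:
            return self._cache[key]
        # Slow path: compute on demand and store for next time.
        result = run_model(features)
        self._cache[key] = result
        return result

cache = PredictionsCache()
cache.predict("cust-42", [0.2, 0.4, 0.6])  # computed on demand
cache.predict("cust-42", [0.2, 0.4, 0.6])  # served from the cache
```

In a product like GridGain the cache would live in the in-memory data grid rather than a local dict, so concurrent transactional workloads can read the same predictions.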

A company called iManage provides cloud-enabled document and email management for professionals in legal, accounting, and financial services. It has announced the early access availability of HYCU R-Cloud for iManage Cloud, an enterprise-grade backup and recovery solution purpose-built for iManage Cloud customers. It allows customers to maintain secure, off-site backups of their iManage Cloud data in customer-owned and managed storage, supporting internal policies and regional requirements for data handling and disaster recovery, and is available on the HYCU marketplace. Interested organizations should contact their iManage rep to learn more or apply for participation in the Early Access Program.

We have been alerted to a 39-slide IBM Storage Ceph Object presentation describing the software and discussing new features in v8. Download it here.

IBM Storage Scale now has a native REST API for remote, secure administration. You can manage the Storage Scale cluster through a new daemon that runs on each node. This feature replaces the administrative operations previously done with mm-commands and eliminates several limitations of the mm-command design, including the dependency on SSH, the requirement for privileged users, and the need to issue commands locally on the node where IBM Storage Scale runs. The native REST API includes role-based access control (RBAC), allowing the security administrator to grant granular permissions to a user. Find out more here.
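The shift from local mm-commands to a remote HTTP interface can be sketched as building an authorized request against the cluster daemon. The endpoint path and token below are hypothetical placeholders for illustration, not IBM's documented Storage Scale API:

```python
# Sketch of remote cluster administration over HTTPS instead of SSH-based
# mm-commands. The path "/api/v1/filesystems" and the token are hypothetical
# placeholders, not IBM's documented endpoints.
import urllib.request

def build_cluster_request(host, path, token):
    # Build (but do not send) an authorized GET; a real client would pass
    # this to urllib.request.urlopen() with TLS verification enabled.
    req = urllib.request.Request(f"https://{host}{path}")
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = build_cluster_request("scale-node-1", "/api/v1/filesystems", "TOKEN")
print(req.full_url)                      # the daemon endpoint on one node
print(req.get_header("Authorization"))   # bearer token replaces SSH access
```

The point of the design is visible even in the sketch: authentication travels with each request (enabling RBAC checks per call), and no privileged local shell session is needed.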

Data integration and management supplier Informatica announced expanded partnerships with Salesforce, Nvidia, Oracle, Microsoft, AWS, and Databricks at Informatica World: 

  • Microsoft: New strategic agreement continuing innovation and customer adoption on the Microsoft Azure cloud platform, and deeper product integrations. 
  • AWS: Informatica achieved AWS GenAI Competency, along with new product capabilities including AI agents with Amazon Bedrock, SQL ELT for Amazon Redshift, and a new connector for Amazon SageMaker Lakehouse.
  • Oracle: Informatica’s MDM SaaS solution will be available on Oracle Cloud Infrastructure, enabling customers to use Informatica MDM natively in the OCI environment.
  • Databricks: Expanded partnership to help customers migrate to its AI-powered cloud data management platform to provide a complete data foundation for future analytics and AI workloads.
  • Salesforce: Salesforce will integrate its Agentforce Platform with Informatica’s Intelligent Data Management Cloud, including its CLAIRE MDM agent. 
  • Nvidia: Integration of Informatica’s Intelligent Data Management Cloud platform with Nvidia AI Enterprise. Nvidia AI Enterprise will deliver a seamless pathway for building production-grade AI agents leveraging Nvidia’s extensive set of optimized, industry-specific inferencing models. 

Informatica is building on its AI capabilities (CLAIRE GPT, CLAIRE Copilot), with its latest agentic offerings, CLAIRE Agents and AI Agent Engineering.

Kingston has announced its FURY Renegade G5, a PCIe Gen 5 x4 NVMe M.2 2280 SSD with speeds up to 14.8/14.0 GBps sequential read/write bandwidth and up to 2.2 million random read/write IOPS, aimed at gamers and users needing high performance. It uses TLC 3D NAND, a Silicon Motion SM2508 controller built on 6nm lithography, and a low-power DDR4 DRAM cache. The drive is available in 1,024 GB, 2,048 GB, and 4,096 GB capacities, backed by a limited five-year warranty and free technical support. The endurance rating is 1 PB written per TB of capacity.
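That endurance figure converts to a modest drive-writes-per-day number over the warranty term. A quick back-of-envelope check (our arithmetic, not a Kingston-published DWPD rating):

```python
# Back-of-envelope endurance check for the 4,096 GB model: 1 PB written per
# TB of capacity, spread over the five-year warranty, expressed as drive
# writes per day (DWPD). Our arithmetic, not a Kingston-published figure.

capacity_tb = 4.096                 # 4,096 GB model
tbw = capacity_tb * 1000            # 1 PB (1,000 TB) written per TB of capacity
warranty_days = 5 * 365             # limited five-year warranty

dwpd = tbw / (capacity_tb * warranty_days)
print(f"{dwpd:.2f} drive writes per day")  # ~0.55 DWPD
```

Note that capacity cancels out of the formula, so every capacity point of the drive works out to the same roughly 0.55 DWPD – typical territory for a high-performance consumer SSD.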

Micron announced that its LPDDR5X memory and UFS 4.0 storage are used in Motorola’s latest flip phone, the Razr 60 Ultra, which features Moto AI “delivering unparalleled performance and efficiency.” Micron’s LPDDR5X memory, based on its 1-beta process, delivers speeds up to 9.6 Gbps, 10 percent faster than the previous generation. LPDDR5X offers 25 percent greater power savings to fuel AI-intensive applications and extend battery life – a key feature for the Motorola Razr 60 Ultra with over 36 hours of power. Micron UFS 4.0 storage provides plenty of storage for flagship smartphones to store the large data sets for AI applications directly on the device, instead of the cloud, enhancing privacy and security. 

… 

Data protector NAKIVO achieved a 14 percent year-on-year increase in overall revenue, grew its Managed Service Provider (MSP) partner network by 31 percent, and expanded its global customer base by 10 percent in its Q1 2025 period. NAKIVO’s Managed Service Provider Programme empowers MSPs, cloud providers, and hosting companies to offer services such as Backup as a Service (BaaS), Replication as a Service (RaaS), and Disaster Recovery as a Service (DRaaS). NAKIVO now supports over 16,000 active customers in 185 countries. The customer base grew 13 percent in the Asia-Pacific region, 10 percent in EMEA, and 8 percent in the Americas. New NAKIVO Backup & Replication deployment grew by 8 percent in Q1 2025 vs Q1 2024.

Dedicated Ootbi Veeam backup target appliance builder Object First announced a 167 percent year-over-year increase in bookings for Q1 2025. Its frenetic growth is slowing as it registered 294 percent in Q4 2024, 347 percent in Q3 2024, 600 percent in Q2 2024 and 822 percent in Q1 2024. In 2025’s Q1 there was year-over-year bookings growth of 92 percent in the U.S. and Canada, and 835 percent in EMEA, plus growth of 67 percent in transacting partners and 185 percent in transacting customers.

Europe’s OVHcloud was named the 2025 Global Service Provider of the Year at Nutanix’s annual .NEXT conference in Washington, D.C. It was recognized for its all-in-one and scalable Nutanix on OVHcloud hyper-converged infrastructure (HCI), which helps companies launch and scale Nutanix Cloud Platform (NCP) software licenses on dedicated, Nutanix-qualified OVHcloud Hosted Private Cloud infrastructure.

Objective Analysis consultant Jim Handy has written about Phison’s aiDAPTIV+SSD hardware and software. He says Phison’s design uses specialized SSDs to reduce the amount of HBM DRAM required in an LLM training system. If a GPU is limited by its memory size, it can provide greatly improved performance if data from a terabyte SSD is effectively used to swap data into and out of the GPU’s HBM. Handy notes: “This is the basic principle behind any conventional processor’s virtual memory system as well as all of its caches, whether the blazing fast SRAM caches in the processor chip itself, or much slower SSD caches in front of an HDD. In all of these implementations, a slower large memory or storage device holds a large quantity of data that is automatically swapped into and out of a smaller, faster memory or storage device to achieve nearly the same performance as the system would get if the faster device were as large as the slower device.” Read his article here.
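The tiering principle Handy describes can be sketched as a small two-tier store: a limited fast tier (standing in for HBM) in front of a large slow tier (standing in for the SSD), with least-recently-used eviction. This is a generic illustration of the swapping idea, not Phison's aiDAPTIV+ implementation:

```python
from collections import OrderedDict

# Two-tier store: a small "fast" tier (think HBM) in front of a large "slow"
# tier (think SSD). Reads pull data into the fast tier, evicting the
# least-recently-used entry when the fast tier is full.
class TieredStore:
    def __init__(self, fast_capacity, slow_tier):
        self.fast = OrderedDict()
        self.fast_capacity = fast_capacity
        self.slow = slow_tier            # big backing store holding everything
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)   # mark as most recently used
            self.hits += 1
            return self.fast[key]
        self.misses += 1
        value = self.slow[key]           # slow fetch from the large tier
        self.fast[key] = value
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)  # evict the least recently used
        return value

store = TieredStore(fast_capacity=2, slow_tier={k: k * 10 for k in range(100)})
for k in [1, 2, 1, 3, 1]:        # repeated reads of 1 mostly hit the fast tier
    store.read(k)
print(store.hits, store.misses)  # 2 3
```

With a working set that mostly fits the fast tier, nearly all reads are hits – which is Handy's point: the system behaves almost as if the fast device were as large as the slow one.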

Private cloud supplier Platform9 has written an open letter to VMware customers offering itself as a target for switchers. It includes this text: “Broadcom assured everyone that they could continue to rely on their existing perpetual licenses. Broadcom’s message a year ago assured us all that ‘nothing about the transition to subscription pricing affects our customers’ ability to use their existing perpetual licenses.’ This past week, that promise was broken. Many of you have reported receiving cease-and-desist orders from Broadcom regarding your use of perpetual VMware licenses.” Read the letter here.

Storage array supplier StorONE announced the successful deployment of its ONE Enterprise Storage Platform at NCS Credit, the leading provider of notice, mechanic’s lien, UCC filing, and secured collection services in the US and Canada. StorONE says it is delivering performance, cost efficiency, and scalability to NCS, helping the company secure its receivables and reduce its risk. Its technology allowed NCS to efficiently manage workloads across 100 VMs, databases, and additional applications, with precise workload segregation and performance control.

Western Digital’s board has authorized a new $2 billion share repurchase program effective immediately.

Western Digital has moved its 26 TB Ultrastar DC H590 11-platter, 7,200 rpm, ePMR HDD technology, announced in October last year, into its Red Pro NAS and Purple Pro surveillance disk product lines. The Red Pro disk is designed for 24/7 multi-user environments, offers transfer speeds of up to 272 MBps, is rated for up to 550 TB/year workloads, and has a 2.5 million-hour MTBF. The Purple Pro 26 TB drive is for 24×7 video recording, carries the same workload rating, and has AllFrameAI technology supporting up to 64 single-stream HD cameras or 32 AI streams for deep learning analytics. The 26 TB Red Pro and Purple Pro drives cost $569.99 (MSRP).

Kioxia teases high-speed SSD aimed at AI workloads

Kioxia has announced a prototype CM9 SSD using its latest 3D NAND with 3.4 million random read IOPS through its PCIe Gen 5 interface, faster than equivalent drives from competitors.

The CM9 is a dual-port drive that will come in 1 drive write per day (DWPD) read-intensive and 3 DWPD mixed-use variants in U.2 (2.5-inch) and E3.S formats. The capacities will be up to 30.72 TB in the E3.S case and 61.44 TB in the U.2 format with Kioxia using its BiCS8 218-layer NAND in TLC (3 bits/cell) form. It should deliver 3.4 million/800,000 random read/write IOPS with sequential read and write bandwidths of 14.8 GBps and 11 GBps respectively.

Neville Ichhaporia, Kioxia

Kioxia America SSD business unit VP and GM Neville Ichhaporia stated: “As AI models grow in complexity and scale, the need for storage solutions that can sustain high throughput and low latency, and allow for better thermal efficiency, becomes critical. With the addition of the CM9 Series, KIOXIA is enabling more efficient scaling of AI operations while helping to reduce power consumption and total cost of ownership across data center environments.”

The drive is built with CBA (CMOS directly Bonded to Array) technology, where CMOS refers to the chip’s logic layer. The CMOS and NAND array wafer components are fabbed separately and subsequently bonded together using copper pads. Kioxia says this technology can reduce chip area and improve performance.

Kioxia says the CM9 will provide performance improvements of up to approximately 65 percent in random write, 55 percent in random read, and 95 percent in sequential write compared to the previous generation. We understand this means the CM7 products as there is no CM8 in the lineup; we’ve asked Kioxia to confirm and the company said: “There was a CM8 Series planned a long time ago, and the SSD model nomenclature is tied to a specific generation of BiCS FLASH 3D memory, which we skipped intentionally, resulting with the CM8 Series SSD getting skipped. We skipped a flash generation to catapult directly to a far superior BiCS FLASH generation 8 technology utilizing CBA architecture (CMOS directly Bonded to Array – enabling for better power, performance, cost, etc.), which is used in the CM9 Series.” 

The company says the CM9 also has performance-per-watt gains of 55 percent better sequential read and 75 percent better sequential write efficiency than the CM7.

The dual-port CM7 used BiCS5 generation 112-layer NAND and a PCIe Gen 5 interface. It provided 2,700,000/310,000 random read/write IOPS with up to 14 GBps sequential read and 6.75 GBps sequential write bandwidth.

Kioxia also has slower CD8 and CD8P single-port datacenter drives built with BiCS5 generation 112-layer flash.

Competing drives in the same U.2/E3.S format segment don’t match the CM9’s 3.4 million random read IOPS number. Micron’s 9550 Pro reaches 3.3 million. The FADU Echo achieves 3.2 million. The PS1010 from SK hynix has a 3.1 million number as does Solidigm’s D7-PS1010 and PS1030. Phison’s Pascari is rated at 3 million.

It’s possible that Kioxia could build a U.2 format SSD using a QLC (4 bits/cell) version of its BiCS8 NAND and achieve a 124 TB capacity level. It has built 2 Tbit chips this way and is shipping them to Pure Storage for use in its DirectFlash Modules.

Kioxia’s NAND fab joint venture partner Sandisk just announced its WD_BLACK SN8100 drive built with the same BiCS8 flash and also using the PCIe Gen 5 interface. It’s a slower drive, outputting 2.3 million random read IOPS, with much lower capacities of 2, 4, and 8 TB. The partners are aiming at different markets.

The CM9 Series SSDs are sampling to select customers and will be showcased at Dell Technologies World, May 19-22, in Las Vegas.

Backblaze Q1 2025 drive stats show slight rise in failure rates

Backblaze published its disk drive annual failure rates (AFRs) for Q1 2025, noting that the quarterly failure rate went up from 1.35 to 1.42 percent. 

The cloud storage company was tracking 317,833 drives across models that had more than 100 drives in operation as of March 31, and that collectively logged over 10,000 drive-days during the quarter. The supplier/model AFRs were:

We sorted this table by ascending AFR and then graphed it on that basis:

Seagate has three of the top four AFR spots.

Backblaze noted that the four higher-end outlier AFRs, which contributed to the rise in overall AFR from the previous quarter, were:

  • Seagate 10 TB (model ST10000NM0086). Q4 2024: 5.72 percent. Q1 2025: 4.72 percent.
  • HGST 12 TB (model HUH721212ALN604). Q4 2024: 5.15 percent. Q1 2025: 4.97 percent.
  • Seagate 14 TB (model ST14000NM0138). Q4 2024: 5.95 percent. Q1 2025: 6.82 percent.
  • Seagate 12 TB (model ST12000NM0007). Q4 2024: 8.72 percent. Q1 2025: 9.47 percent.

Four drive models had zero failures in the quarter: the 4 TB HGST (model HMS5C4040ALE640), the Seagate 8 TB (model ST8000NM000A), the Seagate 12 TB (model ST12000NM000J), and the Seagate 14 TB (model ST14000NM000J).

Overall, Backblaze’s “4 TB drives showed wonderfully low failure rates, with yet another quarter of zero failures from (HGST) model HMS5C4040ALE640 and 0.34 percent AFR from model (HGST) HMS5C4040BLE640.” 

It also tracks lifetime AFRs, and the only drive with a lifetime AFR above 2.85 percent is Seagate’s 14 TB (ST14000NM0138) at 5.97 percent across 1,322 drives. But a second 14 TB Seagate model, the ST14000NM001G, with 33,817 drives in operation, has a much lower 1.42 percent AFR – problem sorted.

Backblaze has ported its drive AFR stats to Snowflake. Previously it was running SQL queries against CSV data imported into a MySQL instance running on a laptop. The team notes: “The migration to Snowflake saved us a ton of time and manual data cleanup … Gone are the days of us bugging folks for exports that take hours to process! We can run lightweight queries against a cached, structured table.”

The company says that “the complete dataset used to create the tables and charts in this report is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose,” with the following provisos: “1) cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data itself to anyone; it is free.”
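Backblaze's AFR arithmetic is simple to reproduce against that dataset: annualize the drive-days, then divide the failure count by the result. The counts below are made-up illustrative numbers chosen to land near the quarter's 1.42 percent, not figures pulled from the dataset:

```python
# Backblaze computes annualized failure rate (AFR) from drive-days:
# AFR = failures / (drive_days / 365) * 100.
# The counts below are illustrative, not taken from the Q1 2025 dataset.

def afr(failures, drive_days):
    return failures / (drive_days / 365) * 100

# e.g. 120 failures across 3,085,000 drive-days in a quarter:
print(f"{afr(120, 3_085_000):.2f}%")  # 1.42%
```

Using drive-days rather than a simple drive count is what lets drives added or retired mid-quarter contribute their actual exposure time to the rate.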

Western Digital lends RapidFlex tech to Ingrasys for Ethernet SSD box

Western Digital is supplying NVMe PCIe-to-Ethernet RapidFlex bridge technology to Ingrasys, which will manufacture a fast, Ethernet-accessed box of SSDs for edge location use, cloud providers, and hyperscalers.

Taiwan-headquartered Ingrasys is a Foxconn subsidiary designing and manufacturing servers, storage systems, AI accelerators, and cooling systems for hyperscalers and datacenters. Western Digital is a disk-drive manufacturing business that has split off its NAND/SSD operation as Sandisk. The Ingrasys Top of Rack (TOR) Ethernet Bunch of Flash (EBOF) will, the two say, “provide distributed storage at the network edge for lower latency storage access, reducing the need for separate storage networks and avoiding trips to centralized storage arrays.”

Kurt Chan, Western Digital

Kurt Chan, VP and GM of Western Digital’s Platforms Business, stated: “Together with Ingrasys, we continue to accelerate the shift toward disaggregated infrastructure by co-developing cutting-edge, fabric-attached solutions designed for the data demands of AI and modern workloads. This collaboration brings together two leaders in storage infrastructure modernization to deliver flexible, scalable architectures that unlock new levels of efficiency and performance for our customers.”

Why is Western Digital involved with an SSD-filled storage chassis in the first place? This goes back to 2019, when Western Digital, then making both disks and SSDs, acquired Kazan Networks’ NVMe-oF Ethernet technology. It developed RDMA-enabled RapidFlex controller/network interface cards from this. A RapidFlex C2000 Fabric Bridge, with its A2000 ASIC, functioned as a PCIe adapter, exporting the PCIe bus over Ethernet, with 2 x 100 GbitE ports linked to 16 PCIe Gen 4 lanes. The C2000 could function in both initiator and target mode. The latest RapidFlex C2110 is an SFF-TA-1008-to-SFF-8639 interposer designed to fit the Ingrasys ES2000 and ES2100 EBOF chassis.

Western Digital RapidFlex C2110

In 2023, Western Digital offered the OpenFlex Data24 3200, a 2RU x 24-bay disaggregated Ethernet NVMe-oF Just a Bunch of Flash (JBOF) enclosure integrating dual-port NVMe SSDs with its RapidFlex fabric bridge, supporting both RoCE and NVMe/TCP. The Data24 3200 chassis could connect directly to up to six server hosts, eliminating the need for a switch. A year later, Western Digital showed that it could deliver read and write I/O across an Nvidia GPUDirect link faster than NetApp’s ONTAP or BeeGFS arrays.

The OpenFlex system was envisaged as a way to sell Western Digital’s own SSDs packaged in the JBOF. Since then, the NAND+SSD operation has been split off, becoming Sandisk, and Western Digital is a disk drive-only manufacturing business, with disk drives providing some 95 percent of its latest quarterly revenues. By any measure, this RapidFlex/OpenFlex operation is now a peripheral business. It’s interesting that the Sandisk operation did not get the RapidFlex bridge technology, which is well suited for NVMe JBOF access. Perhaps Western Digital has NVMe-accessed disk drives in its future.

Western Digital claims that its RapidFlex device is the “only NVMe-oF bridge device that is based on extensive levels of hardware acceleration and removes firmware from the performance path. The I/O read and write payload flows through the adapter with minimal latency and direct Ethernet connectivity.” For Ingrasys, “this facilitates seamless, high-performance integration of NVMe SSDs into disaggregated architectures, allowing for efficient scaling of storage resources independently from compute.”

Data24 chassis

Ingrasys president Benjamin Ting said: “By combining our expertise in scalable system integration with Western Digital’s leadership in storage technologies, we’re building a foundation for future-ready, fabric-attached solutions that will meet the evolving demands of AI and disaggregated infrastructure. This partnership is just the beginning of what we believe will be a lasting journey of co-innovation.”

The Ingrasys TOR EBOF is targeted for 2027 availability. That’s quite a time to wait for revenue to come in.

HPE aims at VMware refugees with Morpheus upgrades

HPE is developing its Morpheus portfolio to lure VMware customers, adding efficiency, zero data loss, and ransomware resilience guarantees to its flagship Alletra MP B10000 scale-out block access array, and launching new entry-level StoreOnce backup appliances.

The company acquired multi-cloud management platform supplier Morpheus Data, which supplied the software used by HPE’s GreenLake subscription offerings, in August last year. It combined Morpheus features with its in-house KVM-based virtualization offering to create VM Essentials, looking to appeal to VMware customers dissatisfied with Broadcom’s stewardship. VM Essentials can run standalone or on HPE’s own systems, and manages both HPE VMs and traditional VMware VMs. Now HPE is announcing Morpheus Enterprise Software and integrating VM Essentials into its HPE Private Cloud Business Edition. Both software offerings include the HVM hypervisor from HPE and are licensed per socket to reduce TCO.

Fidelma Russo, HPE

Fidelma Russo, HPE EVP, CTO, and GM of its Hybrid Cloud business, stated: “Enterprises are at a pivotal moment in IT modernization where they must address escalating management complexity and increasing virtualization costs to free investments for core growth areas. We are the leader in disaggregated infrastructure and our private cloud combines that leadership with new software for unified virtualization and cloud management. HPE is giving customers the choice, simplicity and cost efficiencies to outpace the competition and reinvest in innovation.”

Morpheus Enterprise enables a customer’s IT department to become an internal IT services supplier. It has an interface that can be accessed via a GUI, API, Infrastructure-as-Code, or ITSM plug-ins, and can manage both HPE-native KVM and Kubernetes runtimes alongside other applications and public cloud infrastructure.

The product is hypervisor, hardware, and cloud-agnostic, and integrates with surrounding toolsets like ServiceNow, DNS, backup providers, and task orchestration tools to manage application dependencies end-to-end. HPE claims it accelerates provisioning by up to 150x, cuts cloud costs by up to 30 percent, and reduces risk through granular, role-based access controls.

A combination of VM Essentials and Aruba Networking CX 10000 is claimed to lower TCO by up to 48 percent, increase performance by up to 10x, and provide microsegmentation, DPU (data processing unit) acceleration, and enhanced security.

Morpheus VM Essentials customers can upgrade to Morpheus Enterprise. Morpheus VM Essentials and Morpheus Enterprise software can run on specified Dell PowerEdge servers and NetApp AFF arrays. VM Essentials also provides simple, granular storage management for the Alletra Storage MP B10000. Commvault will be the first VM Essentials ecosystem partner to support image-based VM backup and recovery with an upcoming release in May.

The integration of Morpheus Essentials into Private Cloud Business Edition delivers:

  • Cost efficiency: a socket-based pricing model coupled with independent compute and storage scaling significantly lowers total cost of ownership (TCO) compared with core-based licensing and fixed hardware approaches, reducing VM license costs by up to 90 percent.
  • Unified management across HPE’s KVM-based hypervisor and VMware-based virtualization environments, so businesses can land workloads on the right platform.
  • Operational simplicity: AI and automation streamline setup and lifecycle operations, eliminate routine tasks, and accelerate VM provisioning.
  • Lower TCO: up to 2.5x lower in datacenters and departmental deployments, while SimpliVity-powered solutions deliver up to 45 percent lower cost at the edge.
  • Unified management across edge, core, and cloud environments.
  • Availability in hyperconverged (HCI) or disaggregated (dHCI) form with external storage.
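The socket-versus-core licensing arithmetic behind those savings claims is easy to sanity-check. With hypothetical list prices (ours, not HPE's or VMware's), a dual-socket server with 32 cores per socket looks like this:

```python
# Hypothetical comparison of per-core vs per-socket VM licensing for one
# dual-socket, 32-cores-per-socket server. The prices are illustrative
# placeholders, not HPE or VMware list prices.

sockets = 2
cores_per_socket = 32
price_per_core = 100      # hypothetical per-core subscription price
price_per_socket = 800    # hypothetical per-socket subscription price

core_based = sockets * cores_per_socket * price_per_core   # 6,400
socket_based = sockets * price_per_socket                  # 1,600
saving = 1 - socket_based / core_based
print(f"{saving:.0%} lower license cost")
```

The actual saving depends entirely on the real prices and core counts; the sketch only shows why high-core-count servers structurally favor socket-based licensing, since per-core costs scale with core count while per-socket costs do not.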

HPE Advisory and Professional Services now offers Virtualization Modernization services with cost analytics, migration tooling, orchestration blueprints and DevOps pipeline integration.

Alletra guarantees

The Alletra MP (multi-protocol) B10000 is a disaggregated, block-access, controller-and-storage-node all-flash array system with a 100 percent data availability feature. It shares the top spot in the Alletra range with the object storage MP X10000 system; these 10000 offerings sit above the Alletra 9000, 6000, and 5000 systems.

HPE Alletra MP B10000
Alletra MP B10000

The MP B10000 cyber resilience guarantee says customers will have access to an HPE services expert within 30 minutes of reporting an outage resulting from a ransomware incident. It also assures them that all immutable snapshots created on the B10000 remain accessible for the specified retention period. Compensation will be offered if these commitments cannot be kept.

The energy efficiency guarantee says the B10000 power usage will not exceed an agreed maximum target each month. If the energy usage limit is exceeded, you will receive a credit voucher to offset the additional energy costs. HPE says that unlike competitive energy efficiency SLAs, the B10000 guarantee is applicable whether you purchase your B10000 system through a traditional upfront payment or the HPE GreenLake Flex pay-per-use consumption model. (To qualify for this guarantee, an active HPE Tech Care Service or HPE Complete Care Service contract is required.)

The B10000 is a highly available system. The zero data loss and downtime guarantee specifies that if your application loses access to data during a failover, HPE will provide credit that can be redeemed upon making a future investment in B10000. 

These new guarantees join existing ones available to B10000 customers, including 100 percent data availability, StoreMore data efficiency for at least 4:1 cost savings, and a free, non-disruptive controller refresh for 30 percent lower TCO.

StoreOnce

HPE is introducing StoreOnce 3720 and 3760 appliances intended for small and medium businesses (SMBs) and remote office/branch office (ROBO) locations. StoreOnce appliances can achieve a claimed 20:1 dedupe ratio and employ multi-factor authentication, encryption, and immutability to help combat ransomware. They compete with similar deduping backup target appliances from Dell (PowerProtect), ExaGrid, Quantum (DXi), and the Veritas Flex appliances from Cohesity.
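A dedupe ratio is simply logical data ingested divided by unique data physically stored. As a rough illustration of why repeated backups dedupe so well – using simple fixed-size chunk hashing, not HPE's actual algorithm – consider this sketch:

```python
import hashlib
import os

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Ratio of logical chunks ingested to unique chunks actually stored.
    Fixed-size chunking for simplicity; real appliances use variable,
    content-defined chunking."""
    seen = set()
    total_chunks = 0
    for i in range(0, len(data), chunk_size):
        seen.add(hashlib.sha256(data[i:i + chunk_size]).hexdigest())
        total_chunks += 1
    return total_chunks / len(seen) if seen else 1.0

# 20 backups of the same 1 MiB payload approach a 20:1 ratio,
# since only one physical copy of each chunk is kept.
backup_set = os.urandom(1 << 20) * 20
print(f"{dedupe_ratio(backup_set):.0f}:1")  # → 20:1
```

Real backup streams change slightly between runs, which is why appliances use variable-size, content-defined chunking, but the principle is the same.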

According to HPE, the 3720 and 3760 scale from 18 TB to 216 TB of local usable capacity, and up to 648 TB usable with optional cloud storage, achieving backup speeds of up to 25 TB/hour. No data sheets are available, but we can place them in a table with existing StoreOnce systems to see how they rate:

We think the two new appliances will have capacities and speeds greater than the systems to their left in the table, approaching the systems to their right. Effectively, they replace the systems on their immediate left – the 3620 and the 3660, and possibly the 3640 as well. An HPE spokesperson told us: “The datasheet will be available closer to availability date. The 3720 and 3760 are an improved offering targeting a similar small-to-mid-range customer segment as the 3620, 3640, and 3660 range. We’re still offering the 3660, but haven’t sold the 3620 and 3640 for some time.” 

Availability

The new guarantees for IT outcomes are available starting today as part of the HPE Storage Future-Ready Program. 

HPE Private Cloud Business Edition with Morpheus VM Essentials is available now. New Business Edition systems with HPE SimpliVity will be available in the third quarter of 2025. Morpheus Software integration for the Alletra Storage MP B10000 is available today and is planned for June for HPE Aruba Networking CX 10000. Morpheus Enterprise Software is available now as standalone software.

The StoreOnce 3720 and 3760 will be available early in the third quarter of 2025.

Sandisk launches fastest gumstick SSD yet

Sandisk just launched the SN8100 from its WD_BLACK range, currently the fastest M.2-format (gumstick) drive available.

It is a PCIe Gen 5 drive and succeeds the PCIe Gen 4 WD_BLACK SN850X with twice the read speed, twice the power efficiency, and the same 8 TB maximum capacity. The old drive used the 96-layer BiCS4 3D NAND generation, whereas the new drive is built with 218-layer BiCS8 chips, formatted as TLC (3 bits/cell) like the SN850X. 

Eric Spanneut, Sandisk

Eric Spanneut, Sandisk VP of devices, stated: “The WD_BLACK SN8100 NVMe SSD with PCIe Gen 5.0 delivers peak storage performance for the most discerning users.” 

That means the 2 TB and 4 TB versions deliver over 2.3 million random read IOPS, while sequential read and write bandwidth numbers are 14.9 GBps and 14 GBps respectively. A comparison with other suppliers’ equivalent PCIe Gen 5 M.2 drives shows that Sandisk’s new drive is the fastest in terms of sequential read and write performance:

Spanneut said: “Whether it’s for high-level gaming, professional content creation, or AI applications, high-performance users now have a PCIe Gen 5.0 storage solution that matches speed with power efficiency to help them build the ultimate gaming rig or best-in-class workstation, enabling them to play and create with next-level performance and reliability.”

The SN8100’s average operating power is 7 W and it offers endurance of up to 2,400 terabytes written (TBW).

Sandisk could potentially build an enterprise version of this drive, perhaps with QLC formatting and the hot-swappable E1.S form factor, and reach 16 TB of capacity, which would be impressive.

This SN8100 drive is available for purchase at sandisk.com and select retailers worldwide in 1 TB ($179.99), 2 TB ($279.99), and 4 TB ($549.99) capacities – US MSRP amounts. The heatsink-equipped version will be available this fall in the same capacities for $20 extra. The 8 TB version, with and without a heatsink, is expected to be available later this year.

Pliops bypasses HBM limits for GPU servers

Key-value accelerator card provider Pliops has unveiled the FusIOnX stack as an end-to-end AI inference offering based on its XDP LightningAI card.

Pliops’ XDP LightningAI PCIe card and software augment the high-bandwidth memory (HBM) tier of GPU servers and accelerate vLLM running on Nvidia Dynamo by up to 2.5x. UC Berkeley’s open source vLLM library for LLM inference and serving uses a key-value (KV) cache as short-term memory when batching user responses. Nvidia’s Dynamo framework is open source software that optimizes inference engines such as TensorRT-LLM and vLLM. The XDP LightningAI is a PCIe add-in card that functions as a memory tier for GPU servers. It is powered by ASIC hardware and software, and caches intermediate LLM processing-step values on NVMe/RDMA-accessed SSDs.

Pliops slide

Pliops says GPU servers have limited amounts of HBM. Its technology is intended to deal with the situation where a model’s context window – its set of in-use tokens – grows so large that it overflows the available HBM capacity, and evicted contexts have to be recomputed. The model is memory-limited and its execution time ramps up as the context window size increases.

By storing the already-computed contexts on fast-access SSDs and retrieving them when needed, the model’s overall run time is reduced compared with recomputing the contexts. Users can get more HBM capacity by buying more GPU servers, but the cost is high; bulking out HBM capacity with a sub-HBM storage tier is much less expensive and, we understand, almost as fast. The XDP LightningAI card with FusIOnX software provides, Pliops says, “up to 8x faster end-to-end GPU inference.”
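The eviction-and-spill idea can be sketched as a two-tier cache. This is an illustration of the concept only, with hypothetical names – not Pliops’ implementation, which runs in ASIC hardware against NVMe devices:

```python
from collections import OrderedDict

class TieredKVCache:
    """Two-tier KV cache: a small fast tier (standing in for HBM) with LRU
    eviction into a larger slow tier (standing in for NVMe SSD). A context
    evicted from the fast tier is fetched back rather than recomputed."""
    def __init__(self, hbm_capacity: int):
        self.hbm = OrderedDict()   # fast tier, LRU-ordered
        self.ssd = {}              # slow tier (spill target)
        self.hbm_capacity = hbm_capacity
        self.recomputes = 0        # count of full prefill recomputations

    def put(self, context_id, kv_tensors):
        self.hbm[context_id] = kv_tensors
        self.hbm.move_to_end(context_id)
        while len(self.hbm) > self.hbm_capacity:
            evicted_id, evicted = self.hbm.popitem(last=False)
            self.ssd[evicted_id] = evicted   # spill instead of discard

    def get(self, context_id):
        if context_id in self.hbm:
            self.hbm.move_to_end(context_id)
            return self.hbm[context_id]
        if context_id in self.ssd:           # SSD hit: no recompute needed
            self.put(context_id, self.ssd.pop(context_id))
            return self.hbm[context_id]
        self.recomputes += 1                 # true miss: prefill again
        return None
```

A context evicted from the fast tier comes back from the spill tier on the next request, so the expensive prefill computation is avoided.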

Think of FusIOnX as AI stack glue for AI workloads. Pliops provides several examples:

  • FusIOnX vLLM production stack: Pliops vLLM KV-Cache acceleration, smart routing supporting multiple GPU nodes, and upstream vLLM compatibility.
  • FusIOnX vLLM + Dynamo + SGLang BASIC: Pliops vLLM, Dynamo, KV-Cache acceleration integration, smart routing supporting multiple GPU nodes, and single or multi-node support.
  • FusIOnX KVIO: Key-Value I/O connectivity to GPUs, distributed Key-Value over network for scale – serves any GPU in a server, with support for RAG/Vector-DB applications on CPU servers coming soon.
  • FusIOnX KV Store: XDP AccelKV Key-Value store, XDP RAIDplus Self Healing, distributed Key-Value over network for scale – serves any GPU in a server, with support for RAG/Vector-DB applications on CPU servers coming soon.
Pliops slide

The card can be used to accelerate one or more GPU servers hooked up to a storage array or other stored data resource, or it can be used in a hyperconverged all-in-one mode, installed in a GPU server, providing storage using its 24 SSD slots, and accelerating inference – an LLM in a box, as Pliops describes that configuration. 

Pliops slide

Pliops’ PCIe add-in-card approach is independent of the storage system feeding the GPUs with the model’s bulk data, and independent of the GPU supplier as well. The XDP LightningAI card runs in a 2RU Dell server with 24 SSD slots. Pliops says its technology accelerates the standard vLLM production stack by 2.5x in terms of requests per second:

Pliops slide

XDP LightningAI-based FusIOnX LLM and GenAI is in production now. It provides “inference acceleration via efficient and scalable KVCache storage, and KV-Cache disaggregation (for Prefill/Decode node separation)” and has a “shared, super-fast Key-Value Store, ideal for storing long-term memory for LLM architectures like Google’s Titans.”

There are three more FusIOnX stacks coming. FusIOnX RAG and Vector Databases is in the proof-of-concept stage and should provide index building and retrieval acceleration.

FusIOnX GNN is in development and will store and retrieve node embeddings for large GNN (graph neural network) applications. A FusIOnX DLRM (deep learning recommendation model) is also in development and should provide a “simplified, superfast storage pipeline with access to TBs-to-PBs scale embedding entities.”

Comment

There are various AI workload acceleration products from other suppliers. GridGain’s software enables a cluster of servers to share memory and therefore run apps needing more memory than a single server supports. It provides a distributed memory space atop a cluster or grid of x86 servers with a massively parallel architecture. AI is another workload it can support.

GridGain for AI can support RAG applications, enabling the creation of relevant prompts for language models using enterprise data. It provides storage for both structured and unstructured data, with support for vector search, full-text search, and SQL-based structured data retrieval. And it integrates with open source and publicly available libraries (LangChain, Langflow) and language models. A blog post can tell you more.

Three more alternatives are Hammerspace’s Tier Zero scheme, WEKA’s Augmented Memory Grid, and VAST Data’s VUA (VAST Undivided Attention), and they all support Nvidia’s GPUDirect protocols.

Asigra improves SaaS app data restorability

Canadian backup vendor Asigra has unveiled SaaSAssure 2025, its latest data protection platform for SaaS apps, now featuring granular restore and automatic discovery capabilities.

SaaSAssure was launched in summer 2024 with pre-configured integrations to protect customer data with connectors for Salesforce, Microsoft 365, Exchange, SharePoint, Atlassian’s Jira and Confluence, Intuit’s QuickBooks Online, Box, OneDrive, HubSpot, and others. It is available to both enterprises and MSPs so that they can offer SaaS app customer data protection services. SaaSAssure is built on AWS and offers flexible storage options, including Asigra Cloud Storage and Bring Your Own Storage (BYOS). This new release is available to customers in North America, the UK, and the European Union.

Eric Simmons, Asigra

CEO Eric Simmons stated: “The international availability of SaaSAssure, including the United Kingdom and Europe, expands our support for MSPs and enterprises who need advanced SaaS backup that goes beyond Microsoft 365 or Salesforce. With expanded Exchange and HubSpot granularity, plus Autodiscovery and UI upgrades, customers gain comprehensive data protection in a way that integrates smoothly with other critical SaaS applications.”

The new features in this release include:

  • Exchange Granular Restore for individual mailboxes, folders, emails, contacts, events, and attachments, as well as full backups and mailbox restores.
  • HubSpot Granular Restore for specific CRM categories, object groups (e.g. contacts, companies, custom objects), and individual records with or without associated data, and full backup restoration.
  • HubSpot Custom Object Restore means previously backed-up custom objects are now fully restorable.
  • Autodiscovery for Exchange automatically detects and adds new mailboxes – including shared, licensed, and resource types – into domain-level backups.
  • Autodiscovery for SharePoint automatically includes newly created SharePoint sites in domain-level backups for improved coverage.
  • Domain-level SharePoint backup simplifies multi-site backup management for SharePoint users.
  • Intuitive restore interface with a redesigned UI streamlining the recovery process for IT teams and MSPs.
  • Configurable email alerts for activities like backup failures to improve incident response.
  • Pendo Resource Center integration offers enhanced in-platform user guidance and support.

New SaaS app connectors are coming for ADP, BambooHR, Docusign, Entra ID, Freshdesk, Trello, and Zendesk. You can be notified about new connectors by filling in a form here. SaaSAssure is available for immediate deployment.

Xinnor reports rapid growth as xiRAID sales climb sharply

Software RAID supplier Xinnor saw first quarter sales of its xiRAID product reach 86 percent of the company’s total revenue for all of 2024.

Israel-based Xinnor’s xiRAID provides a local block device to the system, with data distributed across drives for faster access. It has a declustered RAID feature for HDDs, which places spare zones over all drives in the array and restores the data of a failed drive to these zones, making drive rebuilds faster. The software supports NVMe, SAS, and SATA drives, and works with block devices, local or remote, using any transport – PCIe, NVMe-oF or SPDK target, Fibre Channel, or InfiniBand. Xinnor says its recent growth has been driven by a series of strategic partnerships, including a major agreement with Supermicro, and an expanded global reseller channel.
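The benefit of declustering can be shown with a toy placement model (hypothetical, not Xinnor’s actual layout algorithm): because each stripe’s members are spread pseudo-randomly across the whole pool, rebuild reads after a drive failure fan out over many surviving drives rather than bottlenecking on a dedicated spare:

```python
import random

def place_stripes(num_drives, num_stripes, stripe_width, seed=42):
    """Declustered placement: each stripe picks a pseudo-random subset of
    drives, so stripe members (and spare zones) spread over the whole pool."""
    rng = random.Random(seed)
    return [rng.sample(range(num_drives), stripe_width)
            for _ in range(num_stripes)]

def rebuild_reads(layout, failed_drive):
    """After a failure, every stripe touching the failed drive is rebuilt by
    reading its surviving members, so rebuild I/O fans out across the pool."""
    reads = {}
    for stripe in layout:
        if failed_drive in stripe:
            for d in stripe:
                if d != failed_drive:
                    reads[d] = reads.get(d, 0) + 1
    return reads

layout = place_stripes(num_drives=10, num_stripes=100, stripe_width=4)
print(sorted(rebuild_reads(layout, failed_drive=0).items()))
```

With a traditional hot spare, all rebuild writes target one drive; here both reads and restored data spread across the pool, which is why rebuilds complete faster.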

Davide Villa, Xinnor

A statement from chief revenue officer Davide Villa said: “The momentum we’ve built in Q1 is truly exceptional. Our patented xiRAID technology is proving to be a game-changer in the data storage market. The fact that in one quarter we achieved what took us all last year to accomplish demonstrates the accelerating market recognition of our unique value proposition.

“We are extremely proud that several leading institutions around the world selected xiRAID to protect and accelerate access to critical data for innovative AI projects. The channel partner extension and the reseller agreement with Supermicro will enhance our reach, enabling more customers to experience the performance lead of xiRAID.”

New resellers include:

  • APAC: Xenon Systems in Australia, CNDfactory in South Korea, DigitalOcean in China
  • Europe: NEC Deutschland GmbH in Germany, HUB4 in Poland, 2CRSI in France, and BSI in the UK
  • Americas: Advanced HPC, SourceCode, Colfax International in the US

And recent customer wins:

  • A leading financial company deployed xiRAID across all the NVMe servers within its datacenters.
  • Two major universities in Central Europe, active in advanced AI research, implemented xiRAID in high-availability mode in two independent all-NVMe storage clusters, for over 20 PB. 
  • The Massachusetts Institute of Technology (MIT) deployed xiRAID to protect around 400 NVMe drives for a variety of use cases.

We think Xinnor is benefiting from a rise in AI workloads needing RAID-protected NVMe SSDs.

VAST Data adds vector search and deepens Google Cloud ties

VAST Data has added vector search to its database and integrated its software more deeply into Google’s cloud.

The database is part of its software stack layered on top of its DASE (Disaggregated Shared Everything) storage foundation, along with the Data Catalog, DataSpace, unstructured DataStore, and DataEngine (InsightEngine). Generative AI large language models (LLMs) manipulate and process data indirectly, using numeric representations – vector embeddings, or just vectors – that encode multiple dimensions of an item. An intermediate abstraction of words in text documents is the token. Tokens are vectorized, and a document item’s vectors are stored in a multi-dimensional space, with the LLM searching for nearby vectors as it computes steps in generating a response to user requests. This is called semantic search.

A VAST Data blog by Product Marketing Manager Colleen Quinn says: “Vector search is no longer just a lookup tool; it’s becoming the foundation for real-time memory, context retrieval, and reasoning in AI agents.”

Vectors are stored by specialized vector database suppliers – think Pinecone, Weaviate and Zilliz – and are also being added as a data type by existing database suppliers. Quinn says that the VAST Vector Search engine “powers real-time retrieval, transactional integrity, and cross-modal governance in one platform without creating new silos.” 

In the VAST world, there is a single query engine, which can handle SQL and vector and hybrid queries. It queries VAST’s unstructured DataStore and the DataBase, where vectors are now a standard data type. Quinn says: “Vector embeddings are stored directly inside the VAST DataBase, alongside traditional metadata and full unstructured content to enable hybrid queries across modalities, without orchestration layers or external indexes.”

“This native integration enables agentic systems to retrieve memory, reason over metadata, and act – all without ETL pipelines, external indexes, or orchestration layers.”

“The system uses sorted projections, precomputed materializations, and CPU fallback paths to maintain sub-second performance – even at trillion-vector scale. And because all indexes live with the data, every compute node can access them directly, enabling real-time search across all modalities – text, images, audio, and more – without system sprawl or delay.”

“At query time, VAST compares the input vector to all stored vectors in parallel. This process uses compact, columnar data chunks to prune irrelevant blocks early and accelerate retrieval.”
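Stripped of the pruning, projections, and columnar optimizations Quinn describes, the core of any vector search is scoring the query against stored embeddings and keeping the top matches. A minimal, purely illustrative sketch:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query, vectors, k=3):
    """Score the query against every stored vector and return the ids of
    the k most similar -- the brute-force core of a vector search engine.
    (Real engines prune candidate blocks early, as VAST describes.)"""
    ranked = sorted(range(len(vectors)),
                    key=lambda i: cosine(query, vectors[i]),
                    reverse=True)
    return ranked[:k]

# Tiny toy embeddings: the query points almost exactly at vector 0.
corpus = [[1.0, 0.0], [0.95, 0.1], [0.0, 1.0]]
print(nearest([1.0, 0.05], corpus, k=2))  # → [0, 1]
```

Production engines avoid this full scan with indexes and block pruning, but the similarity comparison itself is exactly this operation run in parallel.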

“Future capabilities will expand beyond vector search, enabling new forms of hybrid reasoning, structured querying, and intelligent data pipelines.” Think multi-modal pipelines and intelligent data preparation.

Google Cloud

Building on its April 2024 announcement that it had ported its Data Platform software to Google’s cloud, enabling users to spin up VAST clusters there, VAST has now gone further. It says its Data Platform “is fully integrated into Google Cloud – offering a unified foundation for training, retrieval-augmented generation (RAG), inference, and analytics pipelines that span across cloud, edge, and on-premises environments.”

Renen Hallak, VAST founder and CEO, spoke of a “leap forward,” stating: “By combining the elasticity and reach of Google Cloud with the intelligence and simplicity of the VAST Data Platform, we’re giving developers and researchers the tools they need to move faster, build smarter, and scale without limits.”

The additional VAST facilities now available on GCP include:

  • InsightEngine enabling developers and researchers to run data-centric AI pipelines—such as RAG, preprocessing, and indexing—natively at the data layer.
  • DataSpace with its exabyte-scale global namespace which connects data on-premises, at the edge, and in Google Cloud as well as other hyperscalers for data access and mobility.
  • Unified file (NFS, SMB), object (S3), block, and database access.

VAST says customers can run AI, ML, and analytics initiatives without operational overhead and unify their AI training, RAG pipelines, high-throughput data processing, and unstructured data lakes on its single, high-performance platform.

The base VAST software has already been ported to AWS, with v5.2 available in the AWS Marketplace. We understand v5.3 is the latest version of VAST’s software. 

There is limited VAST availability on the Azure Marketplace, where “VAST’s virtual appliances on Azure allow customers to deploy VAST’s disaggregated storage processing from the cloud of their choice. These containers are free of charge and customers interested in deploying Universal Storage should contact VAST Data to get their capacity under management. This product is available as a Directed Availability release.”

Comment

With its all-in-one storage and AI stack, VAST Data is becoming the software equivalent of an AI system infrastructure mainframe, built from modular storage hardware boxes and NVMe RDMA links to x86 and GPU compute, not forgetting Arm (BlueField). Both compute and storage hardware are commodities for VAST. But the software is far from a commodity. It is VAST’s core proprietary IP, being developed and extended at a high rate, with a promise of being uniformly available across the on-premises environment and the AWS, Azure, and Google clouds. For better or worse, as far as we are aware, no other storage or data infrastructure company is working on such a broad and deep AI stack at the same pace.

DRAM and NAND: Micron and SK Hynix’s paths to production

Analysis: Two companies are highly focused on DRAM and NAND production – Micron and SK hynix. Both are competing intensively in enterprise SSDs and high-bandwidth memory, but came to their dual market focus in involved and indirect ways, via early sprawling business expansion, with mis-steps and inspired moves en route. 

One was blessed by Intel and one was cursed. Micron got into bed with Intel on the ill-fated Optane technology, which crashed and burned, while SK hynix bought troubled Intel’s SSD and NAND fab business and moved fast into the high-capacity SSD market, which took off and is flying high. It also stopped Western Digital merging with Kioxia, then pushed early into the high-bandwidth memory (HBM) business and is now soaring on Nvidia’s GPU memory coat tails.

Micron

Micron was started up in 1978 in Boise, Idaho, by Ward Parkinson, Joe Parkinson, Dennis Wilson, and Doug Pitman as a semiconductor design operation. It started fabbing 64K DRAM chips in 1981 and IPO’d in 1984. A RISC CPU project came and went in the 1991-1992 period. Micron acquired the NetFrame server business in 1997. It entered the PC business but exited it in 2002, and bought into the retail storage media business by buying Lexar in 2006.

Micron entered the flash business in 2005 via a joint venture with Intel. It bought flash chip maker Numonyx in 2010 for $1.27 billion. It then developed its memory business by buying Elpida Memory in 2013, giving it an Apple iPhone and iPad memory supply business, and by buying memory fabbers Rexchip and Inotera Memories by 2016.

However, Micron entered into what was, with hindsight, a major mis-step in 2013 by joining Intel in the Optane 3D XPoint storage-class memory business and manufacturing the phase-change memory technology chips. It was even involved in producing its own branded QuantX 3D XPoint chips – but these went nowhere.

Despite Intel pouring millions of dollars into Optane, the technology failed to take off, with production volumes never growing large enough to lower the per-chip cost and so enable profitable manufacture. Eight years later, in March 2021, Micron cancelled the collaboration and walked away, stopping Optane chip production. Intel saw the writing was on the wall and canned its Optane business in mid-2022.

Ironically, Intel sold its NAND and SSD business to SK hynix in 2021, the same year that Micron up-ended the Optane collaboration. If only Micron had been in a position to buy that business, it would now have a stronger SSD market position.

Sanjay Mehrotra became Micron’s CEO in 2017 and it was he who pushed Optane out of Micron’s door. He also sold off the Lexar business to focus on DRAM and NAND.

A look at Micron’s revenues and profits from 2016 to date shows a pronounced shortage-and-glut, peak-and-trough pattern characteristic of the DRAM and NAND markets:

During the Optane period from 2013 to 2021, Micron diverted production capacity and funding away from DRAM and NAND to Optane and, again with hindsight, we could say it would be a larger company now, revenue-wise, if it had not done that.

SK hynix

SK hynix has a more recent history than Micron. It was founded as Hyundai Electronics Industries Co., Ltd. by Chung Ju-yung in 1983, as part of the Hyundai Group. It produced SRAM product in 1984 and DRAM in 1985. The company built a range of products including PCs, car radios, telephone switchboards, answering machines, cameras, mobile phones and pagers. It sprawled even more than the early Micron in a product sense.

Hyundai Electronics Industries bought disk drive maker Maxtor in 1993 and IPO’d in 1996. It bought LG Semiconductor in 1998. In 2000, in financial difficulties caused by DRAM price drops, it restructured, spinning off subsidiaries. It rebranded its core business as Hynix Semiconductor in 2001 and was then itself spun out of the Hyundai Group. 

More subsidiary divestitures followed in 2002 and 2004. The business then recovered, but not for long, as it defaulted on loans and went through a debt-for-equity swap. Its lenders put it up for sale in 2009, and Hynix partnered with HP to productize memristor technology, but that was a bust. 

Hynix was fined for price-fixing in 2010, adding to its troubles, and was eventually acquired in 2012 by SK Telecom for $3 billion. SK Telecom rebranded it as SK hynix with a focus on DRAM and NAND, and it has prospered since. SK hynix is headquartered in Icheon, South Korea.

The company was part of the Bain consortium which purchased a majority share in the financially troubled Toshiba Memory Systems NAND business in 2017. This business had a NAND fab joint venture with Western Digital and was rebranded as Kioxia. 

SK hynix then bought Intel’s NAND business for $9 billion in 2021 in a multi-year deal that completed earlier this year, incorporating it into its Solidigm division, with NAND fabs in Dalian, China. This gave it an excellent position in the high-capacity SSD market and cemented its twin focus on DRAM and NAND.

A merger between Western Digital and Kioxia was suggested as a way forward for Kioxia in 2023 but was eventually called off after SK hynix apparently blocked it. A combined Kioxia-WD business would have had a larger NAND market share than SK hynix, and a single NAND technology stack, lowering its costs.

SK hynix has two stacks – its own and Solidigm’s – and would have faced being relegated to number three in the market, with an 8.5 percent market share, behind Samsung and a combined Kioxia-WD, both with about 33 percent. 

Currently Samsung has a leading 36.9 percent share and SK hynix + Solidigm is in second place with 22.1 percent. These two are followed in declining order by Kioxia (12.4 percent), Micron (11.7 percent), Western Digital (now Sandisk and 11.6 percent) and others.

Solidigm took an early lead in high-capacity enterprise SSDs in 2024, with a 61.44 TB QLC drive and then a 122 TB drive late in the year. This was well timed for the rapid rise in demand for fast access to masses of data needed for generative AI processing. Micron delivered its own 61.44 TB SSD later in 2024.

SK hynix started mass-producing high-bandwidth memory (HBM) in 2024 and has become the dominant supplier to Nvidia of this type of memory, needed for GPU servers. As of 2025’s first quarter, SK hynix holds 70 percent of the HBM market, with Micron and Samsung sharing the rest in unknown proportions. Micron says its HBM capacity is sold out for 2025, while Samsung’s latest HBM chips are being qualified.

Revenue-wise, SK hynix and Micron were neck and neck during the DRAM/NAND market glut in mid-2023. But since then SK hynix has grown its revenues faster, led by HBM, with a widening gap between the two. 

It seems unlikely that, absent SK hynix mis-steps, Micron will catch up. Its possibilities for catching up in the DRAM market could include getting into the 3D DRAM business early. Any NAND market catch-up seems more likely to come from an acquisition; Sandisk, anyone?

The two companies, Micron and SK hynix, have both been radically affected by Intel; Micron by the loss-making and jinxed Optane technology, and SK hynix by its market share-expanding Solidigm acquisition. Intel’s CEO at that time, Pat Gelsinger, said Intel should never have been in the memory business. Because it was, it helped hang an albatross around Micron’s neck and gave SK hynix a Solidigm shove upwards in the NAND business. Icheon benefited while Boise did not.

Commvault and Deloitte team up on enterprise cyber resilience

Enterprise data protector Commvault is allying with Big Four accountancy firm Deloitte, pitching to customers that are trying to become more resilient against cyber threats.

Commvault has been layering cyber resilience features on top of its core data protection facilities. It recently improved its Cleanroom Recovery capabilities and has a CrowdStrike partnership to detect and respond to cyberattacks. A CrowdStrike alert to the Commvault Cloud can trigger a ThreatScan check for affected data, and restore compromised data to a known good state using backups. Deloitte has a set of Cyber Defense and Resilience services, including forensic specialists who investigate cyber-incidents and help contain and recover from them.
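The alert-to-restore chain is essentially event-driven automation. A schematic sketch follows – every name in it is hypothetical, not a real Commvault or CrowdStrike API:

```python
def handle_alert(alert, scanner, restorer):
    """On a detection alert, scan the affected assets and restore any
    compromised data from the last known-good backup. The scanner and
    restorer callables stand in for a ThreatScan-style check and a
    backup-restore action respectively (illustrative only)."""
    affected = alert["assets"]
    findings = scanner(affected)          # asset -> True if clean
    compromised = [a for a, clean in findings.items() if not clean]
    for asset in compromised:
        restorer(asset)                   # roll back to known-good state
    return compromised

# Toy run: the scanner flags vm-02 as compromised.
alert = {"assets": ["vm-01", "vm-02", "vm-03"]}
scan = lambda assets: {a: a != "vm-02" for a in assets}
restored = []
handle_alert(alert, scan, restored.append)
print(restored)  # → ['vm-02']
```

The value of the integration is that detection (CrowdStrike), verification (ThreatScan), and recovery (backup restore) are chained automatically rather than coordinated by hand during an incident.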

Alan Atkinson, Commvault

Alan Atkinson, chief partner officer at Commvault, stated: “By combining Commvault’s cyber resilience technologies with Deloitte’s deep technical knowledge in cyber detection and response, we are creating a formidable defense for our joint customers against today’s most sophisticated cyber threats.” 

The two aim to integrate Commvault’s cyber resilience services with Deloitte’s cyber defense and response capabilities to help businesses maintain operational continuity before, during, and after a cyber incident. Such services might have mitigated the impact on UK retailers like Marks & Spencer, Co-Op, and Harrods during their recent cyberattacks. Deloitte, coincidentally, is Marks & Spencer’s auditor.

Specifically, before an attack, Commvault and Deloitte will assist organizations in understanding and defining their minimum viability – the critical set of applications, assets, processes, and people required to operate their business following an attack or outage. Once defined, Commvault’s Cleanroom Recovery can assist enterprises in assessing their minimum viability state and testing their recovery plans in advance.

Then, during an attack, the two say Deloitte’s cyber risk services combined with Commvault’s AI-enabled anomaly detection capabilities help joint clients identify and mitigate potential threats before they escalate.

After an attack, during the recovery phase, Deloitte’s incident response capabilities combined with the Commvault Cloud platform, which includes resilience offerings like Cloud Rewind, Clumio Backtrack, and Cleanroom Recovery, help customers “quickly recover, minimize downtime, and operate in a state of continuous business.” 

David Nowak, Deloitte

David Nowak, Principal, Deloitte & Touche LLP, said: “Together, we are offering a strategic and broad solution that not only helps our clients fortify their defenses but also helps with recovering from outages and cyberattacks.” 

Commvault competitors Cohesity, Rubrik, and Veeam partner with the main IT services firms, such as Deloitte, EY, KPMG, and PwC, on a tactical basis but don’t have strategic alliances with them.

Find out more about Commvault and Deloitte at a Commvault microsite.

Bootnote

Commvault’s Azure infrastructure was breached by a suspected nation-state actor at the end of April, but its customer backup data was not accessed. A Commvault Command Center flaw, CVE-2025-34028, is being fixed.