
N-able posts Q1 loss as revenue growth slows

Data protection vendor N-able reported a loss and lower growth in its first 2025 quarter.

John Pagliuca

The company supplies data protection and security software to more than 25,000 managed service providers (MSPs), which in turn deliver services to small and mid-market businesses. It also sells to distributors, SIs, and VARs. Revenues in the initial 2025 quarter were $118.2 million, up 3.9 percent year-on-year and above its guidance range, with a GAAP loss of $7.2 million. Subscription revenues grew 4.8 percent to $116.8 million and its gross margin was 76.6 percent.

N-able president and CEO John Pagliuca stated: “Our earnings reflect continued progress advancing cyber-resiliency for businesses worldwide. The launch of new security capabilities, strong addition of channel partners in our Partner Program, and our largest new bookings deal ever showcase that N-able is innovating and growing. We look forward to building on this progress throughout the year.”

The loss was due to increased cost of revenue, operating expenses, and acquisition costs.

Pagliuca said in the earnings call that N-able had signed “our largest new bookings deal ever.”

A revenue history chart shows N-able’s growth rate has been declining:

N-able’s net retention rate (NRR) is 101 percent, which indicates some customer revenue growth and low customer churn. It was 103 percent in 2024, 110 percent in 2023, and 108 percent in 2022. The higher the NRR, the higher a company’s growth rate. A 100 percent NRR indicates any revenue lost to customer churn is offset by expansion revenue from upsells and cross-sells to existing customers. An NRR below 100 percent suggests a business is keeping most of its customers but not growing its revenue from them as much as it could. N-able’s NRR is not in this category, but it is not much above it either.
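As a rough illustration of the arithmetic behind this metric (the figures below are invented, not N-able’s cohort data), NRR compares a customer cohort’s recurring revenue at the end of a period – after expansion, downgrades, and churn – with its revenue at the start:

```python
def net_retention_rate(start_arr, expansion, contraction, churn):
    """Net retention rate (percent) for an existing-customer cohort.

    start_arr:   the cohort's recurring revenue at the period start
    expansion:   revenue added via upsells/cross-sells to those customers
    contraction: revenue lost to downgrades
    churn:       revenue lost from customers who left entirely
    """
    return 100 * (start_arr + expansion - contraction - churn) / start_arr

# Invented cohort: $100M starting ARR, $8M expansion, $3M downgrades, $4M churn
print(net_retention_rate(100.0, 8.0, 3.0, 4.0))  # → 101.0
```

At 101 percent, expansion only just outruns churn and downgrades – which is the picture N-able’s numbers paint.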

The issue is that N-able’s quarterly revenue growth rate has been slowing, with seven declining quarters since a high point in 2023’s second quarter, as a second chart indicates:

Delivering cyber-resiliency services through MSPs is akin to a franchise model. Its revenues should increase as existing MSP partners bring on new clients and grow revenue from existing clients, and also from recruiting new MSP partners. As the chart above shows, N-able’s franchisees are growing its revenues at a declining rate.

The cybersecurity-focused acquisition of Adlumin last November was intended to scale N-able’s security portfolio, giving MSP partners new services to sell to their clients. The Adlumin deal also brings in distributors, VARs, and SIs, giving N-able access to more channel partners and cross-selling scope.

Pagliuca expects the NRR percentage to grow, saying: “The improvements we’re looking to drive will be driven mostly through the cross-sell opportunity that we have within the customer base.”

N-able’s second quarter outlook is $126 million ± $500,000, a 5.5 percent year-on-year increase at the midpoint. Its full 2025 outlook is $494.5 million ± $2.5 million, a 6 percent year-on-year increase at the midpoint, which suggests that it sees its revenue growth rate staying above 5 percent in the second, third, and fourth quarters.

Analyst Jason Ader said: “While the company is going through multiple simultaneous transitions this year (channel expansion, investments in security products, and cross-sell/platformization motions), which are pressuring profitability and creating some execution risk, we like management’s aggressive posture and believe it holds the promise of a larger TAM, faster growth, and greater operating leverage in the future.”

N-able has instituted a $75 million share buyback program.

Storage news ticker – May 8

Data protector Arctera has updated its InfoScale cyber resilience product, saying it features:

  • Real-time, application-aware resilience: Arctera InfoScale spans both data and applications, enabling real-time recovery and proactive resilience management to minimize downtime.
  • Cyber-ready operational defense: With built-in immutable snapshots and zero-trust principles, InfoScale ensures tamper-proof data recovery, protecting against ransomware attacks and emerging threats like AI-related downtime.
  • Proactive recovery: By integrating continuous system monitoring and automated, application-aware response actions, Arctera InfoScale empowers IT teams to shift from reactive to proactive disaster recovery, all while maintaining business continuity.

ADP (Assured Data Protection) is partnering with Nutanix to deliver “a first-of-its-kind backup and DR solution to customers,” Nutanix Disaster Recovery as-a-Service. It requires no investment in facilities or hardware, operationalizes Nutanix disaster recovery solutions, and provides protection to customers in over 70 countries worldwide. The service can be operational within hours, giving Nutanix customers a robust backup and DR service delivered by ADP, a global backup and disaster recovery managed service provider.

ADP has set up an Innovation Team, a strategic initiative aimed at expanding the company’s DR, backup, and cyber resiliency services with the addition of new technologies that complement current data protection services.

Cloud storage supplier Backblaze reported revenues of $34.6 million in the first 2025 quarter, up 15 percent year-over-year, with a loss of $9.3 million compared to an $11.1 million loss a year ago. B2 cloud storage revenues were $18 million, up 23 percent, while Computer Backup revenues were $16.6 million, up 8 percent year-over-year but down from the prior quarter’s $16.7 million. Analyst Jason Ader said Backblaze closed its largest deal in the quarter, a multi-year, multimillion-dollar contract with an application customer. It’s predicting Q2 revenues of $35.2 million to $35.6 million.

Backblaze said: “A false and misleading short-and-distort report recently raised claims about our financial statements. An independent review confirmed there was no wrongdoing and no issues with our financial statements. For further information, please listen to our earnings call listed below and see our blog entitled ‘Setting the Record Straight’ here.”

Databricks has appointed two EMEA execs: Nico Gaviola as VP, Emerging Enterprise and Digital Natives, and Daniel Holz as VP CEMEA. Gaviola brings over a decade of leadership experience from Google Cloud, where he was Director of Data and AI, South EMEA. At Databricks, he will help emerging enterprises and digital native businesses such as Flo Health, Kraken, and Skyscanner seamlessly adopt the Data Intelligence Platform. Holz joins from Oracle, where he was SVP of North East Europe, responsible for leading the cloud technology division. He has also held leadership positions at Google Cloud and SAP.

Gartner has produced a Market Guide for Hybrid Cloud Storage. Its recommendations are:

  • Take advantage of hybrid cloud storage capabilities by identifying workloads, datatypes and use cases that will benefit from integration with the public cloud.
  • Build a business case for hybrid cloud storage beyond just the price per terabyte by valuing the end-to-end hybrid workflow and standardization enabled by the solutions.
  • Prioritize hybrid cloud storage solutions that enable cloud-native data access capability to best support applications within the public cloud.
  • Choose a hybrid cloud storage provider by its ability to deliver additional services, such as metadata insights, cyberstorage, global access, life cycle management, multi-cloud support, performance acceleration, and data analytics and mobility.
  • Build a comprehensive hybrid cloud data services catalog to define and maintain global hybrid cloud storage services and to ensure standardization and end-user transparency.

You can download a copy courtesy of Nasuni here.

Hazelcast, which supplies combined distributed compute, in-memory data storage, stream processing and integration for enterprise AI applications, is working with IBM to bring its data caching, data integration, and distributed computing capabilities to LinuxONE and Linux on the Z mainframe. Learn more here.

3D DRAM developer NEO Semiconductor has produced industry-first 1T1C and 3T0C-based 3D X-DRAM cells whose designs combine the performance of DRAM with the manufacturability of NAND and density up to 512 Gb – a 10x improvement over conventional DRAM. They use IGZO channel technology, and manufacturing will use a modified 3D NAND process with minimal changes, enabling full scalability and rapid integration into existing DRAM manufacturing lines. TCAD (Technology Computer-Aided Design) simulations of the 1T1C and 3T0C cells confirm fast 10-nanosecond read/write speeds and retention times of up to 450 seconds, dramatically reducing refresh power. NEO says it employs unique array architectures for hybrid bonding to significantly enhance memory bandwidth while reducing power consumption. Proof-of-concept test chips are expected in 2026.

NEO 1T1C 3D DRAM cell

NEO Semiconductor’s technology platform now includes three 3D X-DRAM variants:

  • 1T1C (one transistor, one capacitor) – The core design for high-density DRAM, fully compatible with mainstream DRAM and HBM roadmaps.
  • 3T0C (three transistor, zero capacitor) – Optimized for current-sensing operations, ideal for AI and in-memory computing.
  • 1T0C (one transistor, zero capacitor) – A floating-body cell structure suitable for high-density DRAM, in-memory computing, hybrid memory and logic architectures.

Graph database and analytics player Neo4j announced Aura Graph Analytics, a serverless offering that delivers the power of graph analytics to users of all skill levels, which Neo4j says unlocks deeper intelligence and achieves 2x greater insight precision and quality over traditional analytics. Neo4j Aura Graph Analytics is generally available now on a pay-as-you-use basis and works with databases such as Oracle and Microsoft SQL Server, with cloud data warehouse and data lake platforms such as Databricks, Snowflake, Google BigQuery, and Microsoft OneLake, and on any cloud. It removes the need for custom queries, ETL pipelines, or specialized graph expertise.

Neo4j Graph Analytics for Snowflake, a native integration, will be generally available in Q3. Visit the website and blog for more details.

NetApp has recruited two US sales execs. Jim Gannon joins as VP of Strategic Sales, bringing experience from Sysdig, Pure Storage, and VMware, with a track record of scaling high-performing global teams. Darrin Hands returns to NetApp as VP of Corporate, Midmarket and SMB Sales, after leading commercial sales at Pure Storage and previously spending four years at NetApp.

Other World Computing (OWC) has launched the My OWC iOS app, “an all-in-one mobile companion that makes setup, troubleshooting, and staying updated effortless. From real-time firmware update alerts to instant access to support to tailored how-to guides, the app turns every OWC device into a smarter, more connected experience.” The My OWC app is available now as a free download from the Apple App Store here.

Panmnesia showcased its high fan-out CXL 3.x Switch offering at CXL DevCon 2025. This is designed for next-generation AI infrastructure and high-performance computing (HPC) systems – including retrieval-augmented generation (RAG), large language models (LLMs), and scientific simulations. The demo featured a CXL 3.x Composable Server consisting of multiple CXL-enabled server nodes interconnected via Panmnesia’s CXL 3.x Switch. Each node featured disaggregated CPU, GPU, and memory resources powered by Panmnesia’s CXL IP. This composable architecture enables dynamic system configuration based on workload demands.

Panmnesia has launched a $30 million project focused on developing next-generation AI infrastructure products. It’s going to develop chiplet-based modular AI accelerators that integrate next-generation memory functions, including in-memory processing. The new AI accelerators can be used to accelerate the execution of large-scale AI services such as RAG and recommendation systems. The new products will optimize overall cost, enhance resource utilization, and reduce power consumption in AI infrastructure, while delivering high performance. 

Teradata has announced a new data integration with ServiceNow’s Workflow Data Fabric intended to fuel AI agents, autonomous workflows, and analytics at scale. The integration ensures joint customers can use AI agents to access enterprise-wide data in real time. Teradata will enable access to data through a Zero Copy connector within the ServiceNow Workflow Data Network, and Teradata’s hybrid, multi-cloud analytics and data platform for Trusted AI is now part of that network – an ecosystem of more than a hundred enterprise data partners.

Western Digital has appointed Kris Sennesael as CFO. He most recently served as CFO at Skyworks Solutions.

PeerGFS adds simultaneous multi-protocol file access

The latest release of PeerGFS can access the same file through SMB and NFS protocols at the same time.

PeerGFS (GFS stands for Global File Service) provides real-time, active-active replication of file volumes between datacenters, the public cloud, and edge locations. It has a multi-master approach based on the idea that a distributed organization’s files should be treated as dynamic entities without a fixed location, with the source of truth constantly updated and distributed as needed. The Peer software already supports both SMB and NFS file protocols when used to access separate files. Now it can provide SMB and NFS access to a file volume at the same time across multiple storage systems and geographic distances.

Jimmy Tam, PeerGFS

PeerGFS CEO Jimmy Tam stated: “Multi-protocol support breaks down barriers that have long forced IT teams to create redundant copies of data for different applications or environments. Now, whether you’re ingesting data via SMB at the edge or analyzing it with AI engines using NFS-based storage in the core or cloud, PeerGFS ensures a single, synchronized and accessible dataset.”

This latest v6.20 version of PeerGFS provides support for Amazon FSxN, Dell PowerScale, NetApp ONTAP, and Nutanix Files, and adds Linux file server support for kernel 5.9 and above to its existing Windows server and multi-supplier file storage array support. There is also improved data management and storage optimization for edge locations plus PostgreSQL support.

The release adds signature checking to its Malicious Event Detection (MED) feature, as well as the ability to update parts of the MED configuration while jobs are running.

Peer says the simultaneous multi-protocol support can be “particularly impactful for AI workflows, where data is often ingested at the edge using SMB and processed centrally using Linux-based tools that typically require NFS.”

PeerGFS graphic

Another example use case is a medical organization that ingests patient MRI scans at local hospitals on SMB-based storage and automatically synchronizes that data to centralized NFS-based storage for analysis with AI. There is now no need for redundant volumes or manual data transfers. Peer claims this can help reduce wait times for diagnosis. Download the PeerGFS datasheet here.

Sandisk slides into loss after split from Western Digital

In its first reported results since being split off from Western Digital, Sandisk reported revenue and profit declines for its NAND and SSD business.

David Goeckeler, SanDisk

Revenues in the quarter ended March 28 were $1.7 billion, down 0.6 percent year-on-year and 10 percent sequentially. Sandisk said its revenues were above the guidance range. After a $1.83 billion goodwill impairment charge, there was a GAAP loss of $103 million, contrasting with the year-ago $27 million profit.

CEO David Goeckeler stated: “I’m pleased with our team’s execution in the first quarter as a standalone company. Sandisk’s innovation was reinforced, with a strong early ramp of BiCS 8, our latest technology engineered to deliver industry-leading performance, power efficiency, and density. We have taken actions to reduce supply to match demand and commenced price increases this quarter. Our investment, supply management, and pricing strategies will remain focused on maximizing returns.” We’re told bit shipments were down by low single digits.

Cloud (datacenter) segment revenues were down 21 percent quarter-on-quarter to $197 million. Client (PC, notebook) segment revenues of $927 million were down 10 percent quarter-on-quarter while consumer (retail) revenues declined 5 percent quarter-on-quarter to $571 million. 

Sandisk said it expanded its hyperscaler market share in the cloud segment, with 12 percent of its bit shipments going that way compared to 8 percent a year ago.

Client revenues declined despite expected demand drivers such as the Windows 10 end-of-life replacement cycle, a post-COVID refresh, and PCs requiring more storage.

Financial summary

  • Gross margin: 22.7 percent vs year-ago 27.4 percent
  • Operating cash flow: $26 million vs year-ago $-12 million
  • Free cash flow: $220 million vs year-ago $87 million
  • Cash & cash equivalents: $1.5 billion vs $377 million a year ago
  • Diluted EPS: $-0.30 vs year-ago $0.57

CFO Luis Visoso discussed the impairment charge for Sandisk’s intangible goodwill asset, saying: “This quarter, we evaluated our goodwill for potential impairment following a quantitative test in accordance with accounting standards and the engagement of valuation specialists. We concluded that the goodwill balance was impaired and recorded a non-cash impairment charge of $1.83 billion. As a result, our quarter-end goodwill balance was reduced to $5 billion.”

A mini-NAND glut is affecting prices. Goeckeler said in the earnings call: “ASPs were down high-single digits, reflecting continued oversupply in the market. This was higher than our mid-single-digit decline expectation that we shared at our Analyst Day. To address this, we are extending our fab underutilization actions until supply and demand are balanced and we see a sustainable recovery in pricing.”

Sandisk actually raised its prices after the end of the third quarter.

Goeckeler said its BiCS 8 218-layer 3D NAND technology is producing 2 Tbit QLC chips and these are “in qualification with top cloud service providers for use in 128 terabyte and 256 terabyte capacity SSDs.” He mentioned PCIe Gen 5 and 6 connections for the QLC drives and thinks the 256 TB product could come over the next year, meaning 2026.

There is also a new SSD controller coming. “We have a new architecture coming out in the next couple of quarters that we call Stargate, new ASIC, clean sheet design and then, with BiCS 8 QLC … we just think that’s going to be a dynamite project,” Goeckeler said.

BiCS 8 “TLC products are being qualified by customers for high performance mobile and compute applications.” In the automotive market, “we shipped the industry’s first UFS 4.1 samples for autonomous driving, where performance, reliability and power efficiency are critical.”

Market adoption will be slow. “Development is underway with key partners, and early samples are being used in the next generation EV platforms in autonomous robotics. We expect to complete qualification in the coming months, paving the way for broader adoption in the next generation automotive compute platforms, including advanced driver assistance systems and robotics, beginning in the second half of calendar year 2026.”

Concluding, Goeckeler said: “We estimate that the NAND industry is poised for robust long term growth with demand expected to approach $100 billion by the end of the decade. We expect growth to be driven by the exponential expansion of data, fueled in part by the deployment of artificial intelligence in cloud and edge applications as well as refresh cycles in PCs and mobile devices. In data center, we continue to see strong capital investments in the emergence of new AI-driven workloads, which are fueling use cases for enterprise SSDs and expanding NAND’s addressable market.”

But what about US tariffs implemented under the Trump administration? Visoso said: “Our key assumption in our guidance is that the current tariffs remain unchanged throughout the quarter. At present, there are no tariffs on our products except for shipments from China to the US, which have tariffs of 27.5 percent. For perspective, approximately 20 percent of our products shipped to the United States and over 95 percent of that revenue is sourced from countries other than China.”

Goeckeler said R&D and fab costs for higher-layer counts were substantial. Each layer count advance enables higher bit output, which tends to drive down prices but doesn’t drive costs down as well. Goeckeler’s concern is “making sure we have the right profitability to drive all the capital investment required to support this demand.” This implies that NAND/SSD pricing relative to disk drives is not going to change that much. In the mid-term, Sandisk sees “an undersupplied market through the end of next year.”

Sandisk expects demand to strengthen throughout the year. Next quarter’s outlook is $1.8 billion ± $5 million in revenues, a 2.2 percent increase year-on-year at the midpoint.

IBM has a THINK, boards the agentic enterprise AI train

IBM is giving its customers a tidal wave of watsonx AI news at its THINK 2025 conference, saying that AI agents are shifting from AI that chats with you to systems that work for you.

It wants to supply the AI system building components, saying AI agents “must be able to work seamlessly across the vast web of applications, data and systems that underpin today’s complex enterprise technology stacks. Which means that orchestration, integration and automation are necessary to move agents from novelty into operation.”

Chairman and CEO Arvind Krishna stated: “The era of AI experimentation is over. Today’s competitive advantage comes from purpose-built AI integration that drives measurable business outcomes. IBM is equipping enterprises with hybrid technologies that cut through complexity and accelerate production-ready AI implementations.”

There are a large number of announcements to do with IBM’s watsonx AI and data offerings, also covering AI agent workflows, code assistance, feeding unstructured data to RAG apps, expanded GPU and accelerator coverage, Db2 developments, and AI-infused partnering. Let’s dive in to see what it’s introducing:

watsonx 

IBM is evolving watsonx.data to help organizations use unstructured data – found in contracts, spreadsheets, presentations, and the like – to make AI more accurate and effective. A watsonx.data development will bring together an open data lakehouse with data fabric capabilities, like data lineage tracking and governance, to help customers unify, govern, and access data across silos, formats, and clouds.

IBM says its testing shows that enterprises connecting their AI apps and agents with their unstructured data using watsonx.data can get up to 40 percent more accurate AI than conventional RAG.

There is watsonx.data integration, a single-interface tool for orchestrating data across formats and pipelines, and watsonx.data intelligence, which uses AI-powered technology to extract insights from unstructured data.

IBM is adding watsonx as an API provider within Meta’s Llama Stack, which it says is “enhancing enterprises’ ability to deploy generative AI at scale and with openness at the core.”

Both watsonx.data integration and watsonx.data intelligence will be available as standalone products, with select capabilities also available through watsonx.data. More info here.

IBM is introducing an upcoming watsonx Code Assistant for i, purpose-built to accelerate the production of IBM i applications. It is expected to accelerate RPG modernization tasks with AI-powered capabilities made available directly in the integrated development environment (IDE).

Big Blue says it is capable of providing context-aware RPG code explanations. Later enhancements are expected to include code generation, unit test case creation, and transformation functionalities. IBM watsonx Code Assistant for i is being built on an IBM Granite code model that is fine-tuned for RPG and IBM i, and is currently in private preview.

watsonx Orchestrate

Customers will be able to build AI agents in watsonx Orchestrate that work with 80+ business applications from providers like Adobe, AWS, Microsoft, Oracle, Salesforce Agentforce, SAP, ServiceNow, and Workday.

The watsonx Orchestrate portfolio includes:

  • Build-your-own-agent in under five minutes, with tooling that makes it easier to integrate, customize and deploy agents built on any framework – from no-code to pro-code tools for any kind of user.
  • Pre-built domain agents specialized in areas like HR, sales and procurement – with utility agents for simpler actions like web research and calculations.
  • IBM watsonx HR agents, available today, helping automate workflows for employee support, such as time off management, profile updates, leave and benefits – integrating with popular HR systems and human capital management applications.
  • IBM watsonx Procurement agents designed to streamline procurement workflows such as procure to pay, supplier assessment, and vendor management processes, integrating with tools like Sirion and Dun & Bradstreet.
  • IBM watsonx Sales agents built to automate sales processes, help identify new prospects, support outreach to qualified leads, and optimize research and enablement – connecting to technologies from Salesforce, Seismic, and Dun & Bradstreet.
  • Agent orchestration to handle the multi-agent, multi-tool coordination needed to tackle complex projects like planning workflows and routing tasks to the right AI tools across vendors.
  • Agent observability for performance monitoring, guardrails, model optimization, and governance across the entire agent lifecycle.
  • watsonx is now integrated as an API provider within Meta’s Llama Stack, enhancing enterprises’ ability to deploy generative AI at scale and with openness at the core.
  • IBM recently announced its intent to acquire DataStax, enabling the harnessing of unstructured data for generative AI. With DataStax, clients can also access additional vector search capabilities.

More info here.

A new Agent Catalog in watsonx Orchestrate will simplify access to 150+ agents and pre-built tools from both IBM and its partners, which includes Box, MasterCard, Oracle, Salesforce, ServiceNow, Symplistic.ai, 11x and others. The catalog will include a sales agent for discovering and importing prospects that works with and is available in Salesforce’s Agentforce and a conversational HR agent that can be embedded in Slack.

webMethods Hybrid Integration

IBM is introducing webMethods Hybrid Integration to replace what it calls rigid workflows with intelligent, agent-driven automation. It should help users manage the sprawl of integrations across apps, APIs, B2B partners, events, gateways, and file transfers in hybrid cloud environments.

The product provides development, deployment, management and monitoring of diverse integration patterns across on-premises and multi-cloud environments through a hybrid control plane. IBM says it bridges the gap between existing investments and next-gen integration technology, whether accessing data inside mainframes and simplifying B2B data exchange or leveraging AI agents with IBM watsonx.

It uses agentic AI to enhance integration use cases, laying the foundations for model context protocol (MCP) and agent control protocol (ACP).

An included Integration Agent leverages IBM watsonx-powered AI and automatically generates integrations across language-based SDKs, APIs and events.

CAS and GPUs, Red Hat and Db2

A new IBM content-aware storage (CAS) capability provides ongoing contextual processing of unstructured data to make extracted information easily available to RAG applications for faster time to inference. It is available as a service on IBM Fusion, with support for IBM Storage Scale coming in the third quarter. Fusion is IBM’s containerized derivative of Spectrum Scale plus Spectrum Protect data protection.

Big AI-infused Blue has also expanded its GPU, accelerator and storage collaborations with AMD, CoreWeave, Intel, and Nvidia to provide choices for compute-intensive workloads and AI-enhanced data.

The AMD Instinct MI300X GPU is now generally available on IBM Cloud, with integration planned across the watsonx platform and Red Hat AI platforms. IBM Fusion HCI adds support for AMD Instinct MI210 GPUs. 

IBM is also using Nvidia GB200 NVL72 rack-scale systems to develop its Granite family of enterprise-grade foundation models, and recently announced support for Nvidia’s H200 GPUs. It also offers Intel Gaudi 3 AI accelerators.

There is a new IBM API Connect API Agent to help develop AI agents faster. It has been trained on the API Connect platform APIs and has access to a customer’s enterprise resource catalog. It responds to requests given in plain English, using its knowledge of software engineering, API best practices, API Connect capabilities and the customer’s API estate to develop AI agents.

Red Hat AI InstructLab is now generally available as a service on IBM Cloud, helping businesses build and deploy custom GenAI models, while using private data, without needing software installations or GPU management. IBM says customers can build more efficient models tailored to their unique needs while retaining control of their data.

There is a Db2 Intelligence Center, an AI-powered database management platform designed specifically for Db2 database administrators and IT professionals managing databases. More info here.

IBM has launched Db2 and Db2 Warehouse SaaS on Azure using the Bring Your Own Cloud (BYOC) model, building upon the existing Db2 and Db2 Warehouse on Cloud. This will be generally available on 17 June 2025. More data here.

Db2 12.1.2 has new AI and cloud-related features, such as built-in support for vectorized data, enabling semantic search. It supports CRUD (create, read, update, and delete) operations on Apache Iceberg data lake tables, and its remote storage capability now includes support for Azure Blob Storage. GA of Db2 version 12.1.2 is set for 5 June 2025.
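Db2’s actual vector SQL syntax isn’t shown in the announcement, but the semantic-search idea the feature enables can be sketched in plain Python: store an embedding vector per row and rank rows by cosine similarity to a query vector. The toy three-dimensional vectors and helper names below are hypothetical, purely for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "table" of rows with pre-computed embedding vectors (invented values)
rows = {
    "invoice policy":   [0.9, 0.1, 0.0],
    "refund procedure": [0.6, 0.5, 0.2],
    "holiday schedule": [0.0, 0.2, 0.9],
}

def semantic_search(query_vec, table, top_k=2):
    """Return the top_k row names whose vectors best match the query vector."""
    ranked = sorted(table.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

print(semantic_search([0.85, 0.2, 0.05], rows))
```

In a real deployment the vectors would come from an embedding model and the ranking would run inside the database, but the distance math is the same.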

There is an IBM Granite 4.0 Tiny Preview for the open source community, a preliminary version of the smallest model in the upcoming Granite 4.0 family of language models. Granite 4.0 Tiny Preview is extremely compact and compute efficient: at FP8 precision, several concurrent sessions performing long context (128K) tasks can be run on consumer grade hardware, including GPUs commonly available for under $350. More here.

IBM has announced the establishment of a new Microsoft Practice within IBM Consulting. This builds upon a multi-year partnership aimed at delivering stronger and more measurable business outcomes for clients navigating complex AI, cloud, and security transformations.

Partnering 

There is a new planned integration between Amazon Q index and IBM watsonx Orchestrate. The Amazon Q index enables customers to create a central repository of data based on multiple third-party applications. It helps to fuel AI decision-making and serves as a foundation for retrieving content across various enterprise data sources. Through integration with Amazon Q index, customers will have the ability to enable any IBM watsonx agent to use domain-specific data from applications such as Adobe, Salesforce, Slack, and Zendesk for more personalized experiences. A preview of this new integration is on display at IBM Think, with expected availability in the second half of this year.

IBM is working with Oracle to bring watsonx to Oracle Cloud Infrastructure (OCI), using OCI’s native AI services.

Salesforce and IBM are providing customers access to their business data in Z mainframes and Db2 databases so it can be used for AI use cases on the Salesforce Agentforce platform.

IBM will introduce new IBM agents, built with watsonx Orchestrate and IBM Granite models, that work with Salesforce technologies. The integration will enable AI agents to complete back-to-front office connections using data from IBM Z mainframes via Salesforce Zero Copy, an integration between IBM watsonx.data and Salesforce Data Cloud, the hyperscale data engine natively integrated within the Salesforce Platform. This makes high-speed data transfer possible without having to move or copy data. It is expected to be available in June.

Box, which provides an Intelligent Content Management (ICM) platform, and IBM are partnering to help customers have faster adoption of enterprise-level AI for content-driven workflows by using IBM watsonx and Box AI. Box is using IBM watsonx.governance internally for life-cycle management of AI models to monitor, govern and provide guardrails for various regulations. More here.

Comment

IBM is going full-tilt into the GenAI world, encouraging its customers to adopt LLM technology, RAG, and AI agents to make their operations more efficient. Data in customers’ IBM data stores will be fed up the stack through AI pipelines to LLMs and agents, all orchestrated through its watsonx facilities. With wall-to-wall AI offerings, it wants to leave no gaps through which aggressive competitors could penetrate its customer base.

Nutanix marries cloud-native infra with Pure Storage and agentic AI

Nutanix is becoming cloud-native and hypervisor-independent, supporting external storage and embracing generative AI as it aims to provide a generalized software platform on which you can run anything, anywhere.

At its .NEXT 2025 conference, Nutanix is announcing cloud-native AOS, general availability of its Dell PowerFlex support, integration with Pure Storage FlashArray and FlashStack offerings, and a Nutanix Enterprise AI initiative with Nvidia and agentic embedding.

Lee Caswell, Nutanix
Lee Caswell

Lee Caswell, Nutanix SVP for product and solutions marketing, said: “The key themes fall into three categories. The first is about moving to a modern infrastructure, [then] the idea that you can build apps and run anywhere [and] supporting agentic workloads going forward.”

AOS is Nutanix’s Acropolis Operating System for hyperconverged infrastructure (HCI). There is a built-in hypervisor – AHV – to support guest operating systems and applications, and a virtual SAN aggregating the server node’s SSD and/or disk storage into a single pool of block storage. Flow provides virtual networking and security. AHV is actually separate from AOS and can be replaced by another hypervisor, such as VMware’s vSphere. The whole Nutanix software stack is called Nutanix Cloud Infrastructure (NCI).

The AOS virtual SAN can be replaced by an external storage system for users who need to grow their storage capacity separately from their server compute, with the first such storage product being Dell’s PowerFlex. This was first mentioned by Nutanix in June last year as part of efforts to provide an alternative destination following Broadcom’s changes to VMware’s business terms.

Dell has two HCI systems of its own. The VxRail offering supports VMware vSphere and vSAN, while PowerFlex, which can theoretically grow to thousands of nodes, supports other hypervisors such as KVM (the open source Kernel-based Virtual Machine). Nutanix’s AOS is based on KVM.

AOS PowerFlex integration has been developed by Dell and Nutanix and is now generally available so that Nutanix AOS customers can use Dell PowerFlex external storage. A cluster of Nutanix compute nodes can scale to 32 nodes and hook up to a PowerFlex cluster that can scale to 128 nodes. Snapshot protection and replication of data in the PowerFlex is initiated by Nutanix’s Prism management software.

Pure Storage

The same kind of arrangement has been extended to Pure Storage’s FlashArray system with an NVMe/TCP interconnection (which PowerFlex also supports). Here the Pure FlashArray can scale to ten storage arrays and 20 controllers. Nutanix AOS with Pure Storage will be supported by server hardware partners that currently support FlashArray, including Cisco, Dell, HPE, Lenovo, and Supermicro. 

Caswell said: “Pure has … 13,500 customers. And so this is a really interesting opportunity to go and see how we can scale into new storage buyers. I think it’s a real evidence of the maturity of the Nutanix vision that HCI and external storage will coexist for years to come.”

FlashArray provides both compression and deduplication, while PowerFlex is compression-only. With 150 TB DirectFlash Modules (Pure’s proprietary solid state drives) and 300 TB ones coming, FlashArray has better density than other systems using off-the-shelf SSDs, which max out at the 128 TB level. It supports asynchronous replication now, with a roadmap toward ActiveDR and ActiveCluster (metro-stretch cluster), enabling near-zero RPO/RTO recovery for Nutanix deployments.

Nutanix compute cluster

Nutanix is working with Cisco “to make sure that Nutanix is integrated and supported with the FlashStack offering,” Caswell says. FlashStack is a joint Cisco-Pure converged infrastructure (CI) offering combining Cisco’s UCS servers and Nexus networking switches with Pure’s storage, for sale by channel partners.

Caswell said: “Cisco is validating the Nutanix solution in an offer that’s called FlashStack with Nutanix,” and will run Nutanix AOS software on the servers with Pure’s FlashArray storage.

Jeremy Foster, SVP and General Manager at Cisco Compute, said: “With nearly a decade of joint innovation with Pure Storage, and an expanded partnership and co-development roadmap with Nutanix, we’re offering a proven platform backed by Cisco validated designs, a world-class joint support model, and deep integration with Cisco Intersight – providing unified visibility across both Pure Storage and Nutanix clusters for a more complete view of the operating environment.” 

Cloud-native AOS

Nutanix’s Cloud Native AOS product extends Nutanix storage and data services to hyperscaler Kubernetes services and cloud-native bare-metal environments, without requiring a hypervisor. The containerized AOS concept means AOS is cloud-native and will be able to run in the public cloud – AWS, Azure, GCP – and also on-premises on any bare-metal Linux server. The Nutanix Kubernetes Platform (NKP) can run Kubernetes-orchestrated container workloads through the acquired D2iQ Kubernetes Platform (DKP) software, and Nutanix says it can run containerized apps at the edge, in the datacenter, and in the public cloud:

Nutanix data services

There is a set of Nutanix data services for Kubernetes (NDK) that includes app-centric snapshots, replication, and disaster recovery across availability zones in the public cloud, for example. Customers can build and deploy cloud-native apps, with app and data migration across sites, including repatriation – the ability to move applications back to on-prem containerized environments.

Caswell said: “We can push further into the cloud by running directly as a containerized AOS directly on a Kubernetes runtime. In this case we’re doing an EA (early access) of EKS.” This is Amazon’s Elastic Kubernetes Service.

“You’ve got the opportunity to run without our hypervisor” as “most clouds, including Amazon, have an underlying hypervisor they’re running their Kubernetes runtime on … They use their virtualization for managing the infrastructure. Now we have our Kubernetes and our distributed storage services that give you enterprise value in the public cloud.”

He thinks that edge IT infrastructure is evolving. “It’s our view that over five years, say, the edge is actually going to go and be like smaller instances of bare metal. You may have container-only versions doing AI inferencing, for example, and so the idea [is] to run cloud-native AOS with our data services at the edge,” with a hypervisor optionally present.

So customers at the edge could run containerized apps in virtual machines (VMs) inside a hypervisor environment or run the containers directly in a Linux bare metal server, or even run VMs inside containers.

The cloud-native AOS idea was initiated in Nutanix’s Project Beacon. Caswell said: “We announced Project Beacon … three years ago with a concept that you could run any app anywhere. And this is the first product out of that Project Beacon vision … This basically now is the underpinnings of how we’ll provide all of our PaaS-level services. We’ll be able to run independent of whether there’s a hypervisor or just a Kubernetes runtime available.”

A cluster of containerized AOS instances is called a Nutanix Cloud Cluster (NC2). NC2 can run both on-premises and in the public cloud, where it runs the Nutanix HCI software stack on bare-metal server instances. Both the AWS and Azure clouds have been supported, and Nutanix is now adding support for Google Cloud’s bare-metal instances.

Nutanix is also supporting AWS’s new i7 instances, which use Intel Sapphire Rapids CPUs with AMX extensions, for AI inferencing and RAG workloads.

Nutanix Enterprise AI (NAI)

Nutanix has looked at the surging interest in generative AI and developed its own enterprise AI vision, NAI, which views customers as needing a whole AI system rather than just component parts, helps them start AI operations, and keeps them secure. This builds on its GPT-in-a-Box concept. The NAI core includes a centralized LLM repository that Nutanix says “creates secure endpoints that make connecting generative AI applications and agents simple and private.”

The Nutanix Cloud Platform offering builds on Nvidia’s AI Data Platform reference design and integrates Nutanix Unified Storage and Nutanix Database Service offerings for unstructured and structured data for AI. Caswell said Nutanix will provide reasoning, embedding, reranking, and guardrail models, and run and manage agentic AI, all on top of its Nutanix Kubernetes Platform. 

Nutanix Enterprise AI

Nutanix does not see its software being used for AI large language model (LLM) training because it’s a small market. Caswell said: “Basically imagine these LLMs are being developed in the public cloud. Most customers have figured that out … There’s only a hundred companies that have the resources to go build LLMs, in my opinion, because of the GPUs that are required.”

Caswell said that what customers are really thinking about is how to make these models more effective. Nutanix is helping here because “we’re now supporting the new Nvidia models. These are called NeMo and NIM … we actually qualify LLMs.”

“What does it mean to certify or qualify an LLM? It’s actually back to what we’ve wrestled with for years. It’s memory management. You need to make sure that the model running doesn’t overrun the memory limitations of an individual GPU. So we certify these LLMs … we certify them today from Nvidia and from Hugging Face. These certified LLMs are private and public, including Llama 2, Llama 3, and others.”

“Those are now certified in the Nvidia case. They’re performance-optimized for the Nvidia GPUs. We support the range of GPUs underneath. That certification now is extending to make sure that we can add model support.”
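The memory check Caswell describes can be approximated with back-of-the-envelope arithmetic: model weights need roughly parameter count × bytes per parameter, plus working memory for the KV cache and activations. Here is a minimal sketch of that idea; the 20 percent overhead factor is our illustrative assumption, not a Nutanix or Nvidia formula:

```python
def fits_on_gpu(params_billions: float, bytes_per_param: float,
                gpu_memory_gb: float, overhead_factor: float = 1.2) -> bool:
    """Rough check that model weights, plus an assumed 20 percent working-memory
    overhead for KV cache and activations, fit in a single GPU's memory."""
    weights_gb = params_billions * bytes_per_param  # 1B params * 1 byte = 1 GB
    return weights_gb * overhead_factor <= gpu_memory_gb

# An 8B-parameter model at FP16 (2 bytes/param) needs ~16 GB for weights alone,
# so it overflows a 16 GB GPU once overhead is counted, but fits in 24 GB.
print(fits_on_gpu(8, 2, 16))  # False
print(fits_on_gpu(8, 2, 24))  # True
```

Quantizing to FP8 (1 byte per parameter) halves the weight footprint, which is why smaller-precision models can be qualified against cheaper GPUs.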

One supported model type is the guardrail model. “Once you have your initial result, you now feed it into a Guardrail model that basically matches this against any ethical concerns you have, any things you want to weed out effectively … that would be inflammatory.” Another is the reranking model, which orders a list of items by customer relevance. Caswell said embedding models add audio, images, and video to text items.

He said NAI will support an agentic workflow cycle with plan, use, and critique phases. Nutanix is working with Nvidia “to basically bring in all of these models into a production grade workflow … What enterprise AI does is it gives you access to these models and a workflow. We have integrated our endpoints with each one of these models. And you can now build out a production system more quickly because of the joint collaboration across the two companies,” Nutanix and Nvidia.

Caswell said Nutanix’s run anything, anywhere concept means customers can use a Nutanix Cloud Platform system to run GenAI models, cloud-native apps, virtualized apps, databases, and desktops wherever it’s appropriate: AI factories, datacenters, major and minor public clouds, and edge locations. Nutanix is like an overarching environment covering all these bases and providing a distributed platform for the applications. It has become a common data platform that can run across bare-metal, virtualized, and containerized infrastructure.

Availability

Cloud-native AOS supports Amazon EKS (Elastic Kubernetes Service) at early access level now with general availability in the summer. It will support on-premises bare-metal servers in 2026 with early access starting at the end of this year. The Pure Storage FlashArray integration is at early access availability for now, with GA expected later this year. Nutanix integration with FlashStack is expected to be available for early access this summer and generally available at the end of the year. NAI with agentic model support is now generally available. Nutanix blogs provide more information.

OpenSearch 3.0 targets fast AI search with MCP and GPU-powered vectors

OpenSearch 3.0 accelerates vector database performance and has Model Context Protocol (MCP) support for AI agent interactions.

This version of OpenSearch is claimed to deliver a 9.5x performance improvement over OpenSearch 2.17.1, which itself “was 1.6x faster than its closest industry competitor” – which was Elasticsearch 8.15.4. If both vendor claims are accurate, OpenSearch 3.0 would theoretically be 15.2 times faster than Elasticsearch 8.15.4. But the current 9.0.0 Elasticsearch version was released on April 15 this year, and, with Better Binary Quantization (BBQ), is claimed to be up to 5x faster than OpenSearch with its FAISS (Facebook AI Similarity Search) library. We would assume that, absent formal benchmark tests, Elasticsearch 9.0 and OpenSearch 3.0 have approximately the same performance.

Carl Meadows, OpenSearch
Carl Meadows

Carl Meadows, Governing Board Chair at the OpenSearch Software Foundation and Director of Product Management at AWS, stated: “The enterprise search market is skyrocketing in tandem with the acceleration of AI, and it is projected to reach $8.9 billion by 2030. OpenSearch 3.0 is a powerful step forward in our mission to support the community with an open, scalable platform built for the future of search and analytics, and it reflects our commitment to open collaboration and innovation that drives real-world impact.” 

For context, Elasticsearch is an open source, distributed analytics engine that appeared in 2010 based on Apache Lucene search software. It is the world’s most-used vector database, according to Elastic. In January 2021, Elastic changed its Apache 2.0 source license to a dual license structure based on the restrictive SSPL (Server Side Public License) and an Elastic License to discourage major cloud service providers from using its software without contributing to the community or buying support. Consequently, AWS forked Elasticsearch (7.10.2) and developed its own OpenSearch software based on Elasticsearch, along with OpenSearch Dashboards as a fork of the Kibana open source data visualization and exploration software.

Kibana is part of the ELK (Elasticsearch, Logstash, Kibana) stack, a set of Elastic software for collecting, processing, and visualizing data. It was developed at Elastic by Rashid Khan.

OpenSearch, supported by the OpenSearch Software Foundation, has an Apache v2.0 license and its code-contributing community includes Logz.io, Red Hat, SAP, and Uber. v3.0 features include:

  • GPU-based acceleration, leveraging Nvidia cuVS for indexing workflows, for its Vector Engine with this “experimental feature” speeding index builds by up to 9.3x and accelerating data-intensive workload performance.
  • Native MCP support to enable AI agents to integrate with OpenSearch.
  • Derived Source capability, which reduces storage requirements by up to one-third by removing redundant vector data sources and using primary data to recreate source documents as needed for reindexing or source callback.
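MCP is an open, JSON-RPC 2.0-based protocol for connecting AI agents to tools and data sources. As a hedged illustration of what an agent-to-OpenSearch interaction could look like, this is the general shape of an MCP “tools/call” request; the tool name `search_index` and its arguments are hypothetical, not taken from OpenSearch documentation:

```python
import json

# General shape of an MCP "tools/call" request (JSON-RPC 2.0). The tool name
# "search_index" and its arguments are hypothetical illustrations; the actual
# tool catalogue is defined by the OpenSearch MCP server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_index",
        "arguments": {"index": "logs", "query": "status:500"},
    },
}
print(json.dumps(request, indent=2))
```

The point of native support is that agents can discover and invoke such tools directly against the cluster rather than going through a bespoke plugin layer.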

AI inferencing performance will depend heavily on vector search times and all the AI search and vector database suppliers and open source coders are in a race to provide the fastest vector search times they can.

OpenSearch data management additions include gRPC support, pull-based ingestion, reader and writer separation, index type detection to speed up log analysis, and Apache Calcite integration. Calcite is an open source framework for building databases and data management systems. Google Remote Procedure Call (gRPC) is an open source, cross-platform, high-performance remote procedure call framework, developed to connect microservices in Google’s data centers.

OpenSearch 3.0 uses Apache Lucene 10.0, has updated its minimum supported runtime to Java 21, added support for the Java Platform Module System, and is now available.

Bootnote

Elasticsearch and OpenSearch competitors include Algolia, Meilisearch, OpenObserve, Apache Solr, Typesense, and many others.

Twist Bioscience launches Atlas to bring DNA storage to market

Twist Bioscience has spun off its DNA storage business as Atlas Data Storage, a startup charged with commercializing the technology and led by Varun Mehta, co-founder and CEO of HPE-acquired Nimble Storage.

Varun Mehta

The DNA fragmentation and data writing capabilities developed by Twist enable DNA-based archival data storage. This encodes binary data (base-2 numbering scheme) into a four-element coding scheme using the four DNA nucleic acid bases: adenine (A), guanine (G), cytosine (C), and thymine (T). For example, 00 = A, 01 = C, 10 = G, and 11 = T. The transformed data is encoded into short DNA fragments and packed inside a container, such as a glass bead, for 1,000-year preservation and subsequent reading in a DNA sequencing operation.
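The 2-bits-per-base mapping above can be sketched in a few lines. This is a minimal illustration of the encoding principle only; real DNA storage systems layer error correction on top and avoid problematic sequences such as long single-base runs:

```python
# Illustrative 2-bits-per-base mapping: 00=A, 01=C, 10=G, 11=T.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map each byte to four bases, most significant bit pair first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse the mapping: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

# The byte 0x1B is 00 01 10 11 in bit pairs, i.e. the strand "ACGT".
print(encode(b"\x1b"))  # ACGT
```

Each byte becomes four bases, so density is the draw: the physical volume per bit is orders of magnitude below any electronic medium, at the cost of slow synthesis (writing) and sequencing (reading).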

Atlas Data Storage has closed a $155 million seed financing round and licensed DNA storage assets from Twist, using them in its aim to develop end-to-end DNA storage. The core technology combines new “semiconductor chips and enzyme engineering, ushering in a new era of high-throughput and massively parallel chemistry performed on a chip.” It claims its “datacenter products will equip hyperscaler, enterprise, and government customers to meet the data storage demands of the AI era with low cost, ultra-high density, secure, scalable storage. DNA data storage will be designed to enable greener datacenters with permanent storage that requires no ongoing migration or rewriting, reduced power demands, and minimized carbon impact and e-waste.”

Bill Banyai

Mehta stated: “The opportunity to create an entirely new storage medium does not arise often. At Atlas Data Storage, we are pioneering the use of DNA for high-capacity storage. DNA enables highly scalable, ultra-dense, secure, permanent data storage, and the potential to reshape storage is tremendous. Atlas has the right team and technology to realize this promise.

“By operating as an independent company, Atlas is able to focus solely on bringing DNA data storage from technology to commercialization and invest in all the functions needed to enter the commercial market. With technology licensed from Twist and an initial close, Atlas is well positioned to drive toward commercialization.”

The seed financing investors are ARCH Venture Partners, Deerfield Management, Bezos Expeditions, Tao Capital Partners, Rsquared VC, Earth Foundry, In-Q-Tel (IQT), and other undisclosed investors.

George Kadifa

Atlas’s exec chairman is George Kadifa, co-founder and managing director of Sumeru Equity Partners, who has a background that includes roles at Hewlett-Packard, IBM, Oracle, and Silver Lake. Emily Leproust, CEO and co-founder of Twist, is also on the board, as Twist will retain a minority ownership of Atlas. Bill Banyai, Twist co-founder and general manager of DNA data storage, becomes Atlas’s CTO.

Twist has developed a proprietary semiconductor-based synthetic DNA manufacturing process featuring a high-throughput silicon platform that allows it to miniaturize the chemistry necessary for DNA synthesis. It has assigned and licensed its DNA data storage technology to Atlas in exchange for a minority ownership interest upon close, an upfront cash payment, and a secured promissory note. It may participate in the upside of DNA data storage through future technology and commercial milestone payments, and a revenue share through royalties on future sales of Atlas’s products and services. Twist and Atlas will operate independently with separate management teams.

Leproust said: “There are many applications of synthetic DNA with the potential to have an incredible impact on the world. With this spin-out, both Twist and Atlas are able to move with full force in growing those applications.

Emily Leproust

“With this transaction and a pure-play DNA data storage focused business, Atlas receives the investment needed to accelerate technology development and drive toward early access customer engagement while Twist continues to share in the upside opportunity for DNA data storage. This transaction also allows Twist to focus on continued revenue growth and our objective of achieving adjusted EBITDA breakeven and profitability. With the launch of Atlas, we now expect to achieve adjusted EBITDA breakeven by the end of fiscal 2026.”

Kadifa said: “New technologies such as artificial intelligence are further accelerating demand for storage. I’m confident that the data storage technology that Atlas is creating will enable storing billions and billions of terabytes at low cost, power, and waste. Atlas is also driving US leadership in key technology domains, which will have an immense long-term economic and national security impact.”

Mehta has written a blog about product-market fit (PMF) failures, including at a company he ran, cloud database firm Sneller. He writes that the key “lessons” are:

  • PMF requires more than a good product. You’ll need a deep understanding of your market (including competitors and potential disruptors) and your customers.
  • Technical founders often jump into building without adequate market research, which can lead to products that don’t meet a true market need.
  • Engaging with a broad and representative customer base — starting even before building a product — is essential to avoid the false positives that can lead you down the wrong track with your product development.
  • To remain competitive in an existing market, you’ll need to be 10 times better than your competition.

Existing DNA storage companies include Biomemory in France and US-based Catalog Technology. Although DNA storage can hold vast amounts of data in a tiny physical space for a very long time, it has long data writing and reading times compared to, for example, Cerabyte’s ceramic-coated, glass tablet-based technology. No DNA storage technology has yet achieved widespread commercial adoption.

NetApp and Intel’s AIPod Mini for departmental inferencing

NetApp has added a lower cost AIPod Mini to its AIPod line of ONTAP AI systems, which provide a compute and storage foundation for departmental and team-level generative AI workload projects.

The AIPod line started with the AIPod with Nvidia, a certified Nvidia BasePOD system, using the GPU maker’s DGX H100 GPU server attached to NetApp’s AFF C-Series capacity flash systems. Support for the faster A-Series flash arrays followed, as did a Lenovo AIPod version with Nvidia OVX, built for GenAI fine-tuning, inferencing, retrieval-augmented generation (RAG), and deploying customized chatbots, copilots, and other GenAI apps. ONTAP gained a directly integrated AI data pipeline, with automated vector embedding creation, last September. In March, NetApp’s AIPod achieved the Nvidia-Certified Storage designation to support Nvidia Enterprise Reference Architectures with high-performance storage. Now we have the Intel-oriented AIPod Mini for departmental GenAI inferencing workloads.

Dallas Olson

NetApp’s Chief Commercial Officer, Dallas Olson, stated: “Our mission is to unlock AI for every team at every level without the traditional barriers of complexity or cost. NetApp AIPod Mini with Intel gives our customers a solution that not only transforms how teams can use AI but also makes it easy to customize, deploy, and maintain. We are turning proprietary enterprise data into powerful business outcomes.”

The company says AIPod Mini “enables businesses to interact directly with their business data through pre-packaged Retrieval-Augmented Generation (RAG) workflows, combining generative AI with proprietary information to streamline the deployment and use of AI for specific applications, such as:

  • Automating aspects of document drafting and research for legal teams, 
  • Implementing personalized shopping experiences and dynamic pricing for retail teams, 
  • Optimizing predictive maintenance and supply chains for manufacturing units.”

AIPod Mini is designed to be simpler to use and, we understand, lower cost than full-scale Nvidia GPU environments. It is designed for departmental or business-unit budgets, and to be scalable with a low entry price. There will be a pre-validated reference design which, with its RAG workflows, will “enable quick setup, seamless integration, and customization without extra overhead.”

Greg Ernst

Intel’s Greg Ernst, Americas Corporate VP and GM, said: “By combining Intel Xeon processors with NetApp’s robust data management and storage capabilities, the NetApp AIPod Mini solution offers business units the chance to deploy AI in tackling their unique challenges. This solution empowers users to harness AI without the burden of oversized infrastructure or unnecessary technical complexity.”

The AIPod Mini combines Intel processors with NetApp’s all-flash ONTAP storage and is built on an Open Platform for Enterprise AI (OPEA) framework.

The CPU is an Intel Xeon 6, with its two core architectures: Performance or P-cores (Granite Rapids), which include Advanced Matrix Extensions (AMX), and Efficient or E-cores (Sierra Forest). The P-cores support GenAI large language model (LLM) workloads.

OPEA was set up in April last year as an Intel open source sandbox project under the LF AI & Data Foundation – LF standing for the Linux Foundation – to create a standardized, multi-provider, and composable framework for developing and deploying AI applications, supporting RAG with modular microservices and architectural blueprints. To an extent, it competes with Nvidia’s GenAI ecosystem and, of course, its reference implementations are optimized for Intel hardware. NetApp and Nutanix are the only storage array system suppliers in the OPEA, so we won’t be seeing OPEA-style AIPod Mini systems from NetApp’s competitors unless they join the organization.

NetApp AIPod Mini with Intel will be available in the summer of 2025 from certain NetApp global channel members. Initial launch partners will include two distributors – Arrow Electronics and TD SYNNEX – and five integration partners: Insight Partners, CDW USA, CDW UK&I, Presidio, and Long View Systems, which will provide dedicated support and service.

We have no specific configuration details. Find out more here.

Huawei unveils full-stack AI data lake platform

Huawei has developed its own data lake software as part of a full-stack approach to providing AI data storage and pipelining for AI training and inference workloads.

This was presented at Huawei’s IDI (Innovative Data Infrastructure) Forum in Munich last month by Dr Peter Zhou, President of its Data Storage Product Line, and other Huawei speakers. The foundation is based on three Huawei storage systems: OceanStor A Series for fast access data, OceanStor Pacific for nearline data, with dynamic tiering between the two, and OceanProtect for backing up data from the Pacific system.

The OceanProtect E8000 can store up to 16 PB of data with a 255 TB/hour throughput. We’re told the backup function protects A-Series data by directly backing it up to OceanProtect systems. Huawei’s AI entropy analysis function is claimed to have a 99.99 percent detection rate for ransomware attacks on backups.
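Huawei does not publish how its entropy analysis works, but the general principle behind entropy-based ransomware detection is that encrypted data has near-maximal byte entropy while typical business data does not. A minimal sketch of the underlying measurement (our illustration, not Huawei’s implementation):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0 to 8). Encrypted or compressed
    data approaches 8; typical documents sit well below that."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Repetitive text scores low; random bytes (a stand-in for ciphertext)
# approach the 8-bit maximum, which is what an anomaly detector flags.
print(shannon_entropy(b"aaaaabbbbb" * 100))        # 1.0
print(shannon_entropy(os.urandom(1 << 16)) > 7.9)  # True
```

A backup stream whose blocks suddenly shift from document-typical entropy to near 8 bits per byte is a strong signal that ransomware has encrypted the source data.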

There are two software layers above this storage array foundation: a data management layer and an AI tool chain layer.

Huawei slide
Yellow items are Huawei’s own products

The data management layer is occupied by Data Management Engine (DME) functions. It provides a single and central management interface to Huawei storage, third-party storage, switches, and hosts using APIs. There are three Huawei DME software products here: DME Omni-Dataverse, DME IQ, and eDataInsight.

The system functions in this layer include Huawei’s data warehouse, a vector database, data catalog, data lineage, version management, and access control.

The Omni-Dataverse is a global file system and data management framework designed to eliminate data silos across geographically dispersed datacenters by providing a single data namespace. This enables the presentation of a unified virtual data repository – a warehouse or data lake – covering multiple geographically dispersed and separate silos on premises or in the public cloud or a hybrid on-prem/cloud setup.

It provides the means for data to be ingested, indexed, processed, curated, and made available for AI training and inference sessions and other data-using applications. Huawei says the system can rapidly index and/or retrieve exabyte-scale datasets, being capable of processing over 100 billion files in seconds using more than 15 search criteria.

Huawei slide

In general, Omni-Dataverse provides data retrieval, lineage, versioning, and access control capabilities. It includes dynamic tiering between the A-Series and Pacific arrays. The vector database aspect is in development and Huawei may partner with a third-party supplier for this functionality. 

As data ages and some of it falls into disuse or expires, the software can verify data lineage and delete obsolete items.

DME IQ is a cloud operations and maintenance platform using big data analytics and AIOps to provide automated fault reporting and real-time problem tracking.

The top AI tool chain layer in Huawei’s AI stack makes the datasets from the data lake available for processing by various hardware engines through pipelines and third-party toolsets such as LangChain. There are Huawei iData and ModelEngine components here, with iData providing data ingestion and enablement, alongside both model and application enablement.

Huawei says ModelEngine provides an end-to-end AI tool chain to deliver and schedule jobs across both dedicated and shared pools of CPUs, NPUs, and GPUs. Huawei supports the GPUDirect protocol for files and is working on support for the GPUDirect Object protocol.

Huawei slide

The DCS (Data Center Solution) is a datacenter virtualization concept integrating computing, storage, networking, and management. Its core virtualization platform is eSphere, which uses Omni-Dataverse to access a unified global namespace and so operate on datasets within it.

The eContainer function is, roughly, the containerization equivalent, integrating with Huawei’s Kubernetes-based Cloud Container Engine (CCE).

The resource management side of Huawei’s data lake stack provides xPU scheduling, multi-tenancy, and an AI Copilot. There is an AI-powered DataMaster component here, integrating the AI Copilot, to enhance its O&M capabilities. The AI Copilot provides intelligent Q&A via natural language queries, automated guidance for troubleshooting and maintenance tasks, and proactive system health checks.

Comment

Huawei has devised the basics of a full AI stack to support AI training and inference workloads. The only other suppliers with a similar storage-hardware-to-unified-data-lake-to-pipeline-and-model concept are Dell, with its AI Factory, and VAST Data. Other suppliers such as DDN, Hammerspace, Pure Storage, and WEKA are building out their AI stacks, as are HPE, IBM, and NetApp.

We think that Dell, VAST, and the others will have an unimpeded run in the US market due to restrictions on Huawei, but elsewhere they might find Huawei a formidable competitor. It will be able to provide upgrade-to-DME AI stack messages to all of its existing customers as well as prospect greenfield sites, and capitalize on any anti-Trump sentiment out there. There is also a cloud service provision aspect to this, but we’ll look at that another day.

Storage news ticker – May 2


Cerabyte is participating in the 2025 OCP EMEA Regional Summit, taking place April 29–30 in Dublin, Ireland. It will put its ceramic-on-glass storage to the test by boiling it in salt water and then baking it, before fully recovering the data on it. See a video here.

Commvault has disclosed a flaw, tracked as CVE-2025-3928, an unspecified vulnerability that authenticated attackers can exploit remotely to plant webshells on target servers. Commvault web servers are user-facing and API components of a backup system used by enterprises to protect and restore critical data. According to a Bleeping Computer report, the flaw is under active exploitation in the wild. CVE-2025-3928 was fixed in versions 11.36.46, 11.32.89, 11.28.141, and 11.20.217 for Windows and Linux platforms.

Dan Beer

Dan Beer has resigned as StorMagic CEO, with Susan Odle replacing him, to join CrashPlan as its CEO. CrashPlan has just announced the unification of its data protection and cyber-resilience products in a combined platform, providing a Microsoft Azure-centric cyber-resilience offering for data protection and governance from a single SaaS platform.

CrashPlan is a data resiliency SaaS platform addressing ransomware recovery, device migration, legal hold, and disaster recovery challenges. It protects data wherever each organization needs it to be stored – endpoints, servers, Microsoft 365, or Google Workspace – while allowing customers to unlock value in their data. CrashPlan is a trusted provider for over 50,000 organizations worldwide, including major enterprises and universities.

CrashPlan originally launched in 2007, and was acquired by Mill Point Capital in 2022. Mill Point Capital is a private equity firm focused on control investments in lower-middle market companies across the Business Services, Industrials and IT Services sectors throughout North America, with over $3 billion of cumulative capital commitments.

Data integration supplier Fivetran is acquiring reverse ETL company Census. It says the deal makes Fivetran the only fully managed platform that enables enterprises to move governed, automated and real-time data across their entire stack – from source systems to data platforms back into the business applications that drive decision-making. Founded in 2018 and headquartered in San Francisco, Census has grown to serve hundreds of customers across industries and currently employs over 200 people. The Census team will join Fivetran as part of the acquisition, and co-founder and CEO Boris Jabes will join to help lead the company’s data activation strategy moving forward.

We understand that both Fivetran and Census are Andreessen Horowitz-backed companies. Fivetran raised $44 million in a Series B round in 2019 and $565 million in a Series D round in 2021, both led by Andreessen Horowitz. Census raised $4.3 million in a seed round in 2020 and a Series A in 2021, with Andreessen Horowitz leading the seed round and participating in the Series A.

GigaIO composes CPUs, GPUs, and other accelerators with its FabreX memory fabric software. It is partnering with d-Matrix to integrate d-Matrix’s Corsair inference platform into GigaIO’s SuperNODE architecture, eliminating the complexity and performance bottlenecks traditionally associated with large-scale AI inference deployment. Highlights include processing capability of 30,000 tokens per second at just 2 milliseconds per token for models like Llama3 70B, up to 10x faster interactive speed compared with GPU-based systems, 3x better performance at a similar total cost of ownership, and 3x greater energy efficiency for more sustainable AI deployments.

Tom Whaley

Hammerspace has appointed Tom Whaley as VP Americas Sales. He joins from WEKA, where he was West Area Sales Director, with an extensive sales leadership history at VAST Data, mParticle, and NetApp. Whaley will report to Hammerspace CRO Jeff Gianetti, who also joined from WEKA in January.

HighPoint has introduced its Rocket 7604A, a 4x M.2 Gen5 x16 NVMe RAID AIC, designed for compact computing environments. It’s a half-length PCIe Gen5 x16 4-port M.2 NVMe RAID AIC with 167mm x 110mm dimensions and FH-HL form factor. The card has up to 32TB of storage capacity via a single PCIe slot, with four independent M.2 ports, delivering throughput of 64 GBps and 12 million IOPS. Learn more here.

CRN reports two Hitachi Vantara Pentaho BA Server vulnerabilities were logged by the U.S. Cybersecurity and Infrastructure Security Agency in March in CISA’s Known Exploited Vulnerabilities Catalog based on evidence of active exploitation.


SaaS backup and data recovery specialist Keepit has been named a champion for backup and disaster recovery in the Canalys Managed BDR Leadership Matrix 2025.

Per Overgaard

Lenovo has promoted Per Overgaard from ISG CTO EMEA to General Manager for ISG EMEA. Overgaard held senior roles at IBM and HP before joining Lenovo in 2015 as part of the System x acquisition from IBM. He will oversee operations in a region that benefits from local manufacturing capabilities – most notably, Lenovo’s factory in Hungary.

MongoDB announced that Mike Berry will become the company’s new CFO effective May 27. Berry, the ex-CFO of NetApp, has 30-plus years of experience in the technology and software industry, serving as CFO for seven different companies prior to joining MongoDB, including McAfee, SolarWinds, and Informatica. In August 2024, Berry announced his plans to retire from NetApp once a successor was named (which happened on March 10, 2025) and then said the opportunity to join a company the caliber of MongoDB was “incredibly compelling.” The hire follows the abrupt resignation of prior MongoDB CFO Serge Tanjga.

Mike Berry

Nvidia says it is bringing runtime cybersecurity to every AI factory with a new DOCA Argus software framework, part of its cybersecurity AI platform, and running on the BlueField DPU/SmartNIC. DOCA Argus operates on every node to detect and respond to attacks on AI workloads, integrating with enterprise security systems to deliver real-time threat insights.


RAID, Inc. has launched the DataEdge Transporter (“DET”) for high-capacity data transfers between edge locations. It uses FIPS 140-2 media with TPM v2, has dual-port 200GbE connectivity and over 350TB of flash storage per 2U rack, and comes with a highly durable roll-away carrier to bring data from one location to the next. The out-of-the-box architecture delivers up to 8GB per second and is fully TAA compliant, making it suitable for the highest levels of data security and government missions.

Redis CEO Rowan Trollope blogs that MongoDB, Elastic, and Redis all adopted SSPL to protect their businesses from cloud providers extracting value without reinvesting. But there was a downside, with Trollope saying: “This achieved our goal—AWS and Google now maintain their own fork—but the change hurt our relationship with the Redis community. SSPL is not truly open source because the Open Source Initiative clarified it lacks the requisites to be an OSI-approved license.” So Redis is adding the OSI-approved AGPL as an additional licensing option, starting with Redis 8, which is now GA.

Redis 8 has support for vector sets, and integrates Redis Stack technologies, including JSON, Time Series, probabilistic data types, the Redis Query Engine, and more into core Redis 8 under AGPL. It provides over 30 performance improvements, with up to 87 percent faster commands and 2x throughput. Trollope says Redis is “improving community engagement, particularly with client ecosystem contributions.”

Cloud-based real-time analytics company StarTree announced Model Context Protocol (MCP) support and vector embedding model hosting based on Amazon Bedrock. These capabilities enable StarTree to power agent-facing applications, real-time Retrieval-Augmented Generation (RAG), and conversational querying. StarTree also announced the general availability of Bring Your Own Kubernetes (BYOK), a new deployment option that gives organizations full control over StarTree’s high-performance analytics infrastructure within their own Kubernetes environments, whether in the cloud, on-premises, or in hybrid architectures.

Jitender Aswani

Trino open source distributed SQL supplier Starburst has appointed Jitender Aswani as SVP Engineering. Aswani will report directly to Starburst CEO and co-founder Justin Borgman, and comes via stints at StarTree, Moveworks, Netflix, Facebook, SAP, Cornerstone Research, and HP.

A VAST Data blog says its unified, AI-infused DASE architecture is great for cyber-resilience with its multi-modal, learning defense system. A second blog says DASE-using AI agents can do the same for compliance.

Weebit Nano and tier-1 semiconductor foundry DB HiTek will show the first demonstration of DB HiTek’s Bipolar-CMOS-DMOS (BCD) silicon integrating Weebit’s Resistive Random-Access Memory (ReRAM) non-volatile memory (NVM) technology at PCIM 2025. PCIM is Europe’s largest power semiconductor exhibition and will be held in Nuremberg, Germany, from May 6-8, 2025.

Western Digital said at an investor day that it would deliver 36TB-44TB HAMR disk drives by 2026. It will introduce HAMR technology around the 40TB capacity point, using its OptiNAND and UltraSMR technologies to get up to that point with 11-platter drives, unlike Seagate, which started transitioning to HAMR with 10-platter, 32TB drives. WD reckons it could reach 100TB capacity with its HAMR drives by 2030.

Xconn Technologies demo’d dynamic memory allocation using CXL switch technology at CXL DevCon 2025. It used its Apollo CXL switch, the industry’s first to support both CXL 2.0 and PCIe 5.0 on a single chip. The switch enables terabyte-scale memory expansion with near-native latency and coherent memory access across CPUs, GPUs, and accelerators, including the 5th Gen AMD EPYC processors. Production samples of XConn Apollo XC50256 are available now.

The South China Morning Post reports that a valuation of 161 billion yuan (US$22.1 billion) and losses incurred last year have been revealed in a filing by a new investor in Yangtze Memory Technologies Co (YMTC), China’s leading flash memory chipmaker.

DDN storing data for Nebius AI’s GPU server farm

DDN’s Infinia and EXAScaler storage systems are being integrated into the Nebius AI Cloud to store data for training, inferencing, and real-time AI applications.

Nebius Group is a Nasdaq-listed company headquartered in Amsterdam and operating worldwide, with 850 AI engineers in R&D hubs across Europe, North America, and Israel. Its core Nebius offering is an AI-centric cloud software, server, and storage platform providing full-stack infrastructure for AI, including large-scale GPU clusters, cloud platforms, and tools and services for developers. The word “Nebius” is a neologism combining “nebula” and the never-ending Möbius strip.

Other group entities include Toloka, a generative AI development partner; TripleTen, an edtech organization re-skilling people for careers in tech; and Avride, a developer of autonomous driving technology for self-driving cars and delivery robots.

Paul Bloch

Paul Bloch, President and Co-Founder at DDN, stated: “To lead in AI, enterprises require infrastructure that delivers breakthrough speed, scalability, and seamless integration. By partnering with Nebius, we’re breaking down traditional barriers to AI adoption and delivering a next-generation cloud platform that transforms how AI is built and scaled worldwide.”

DDN says Nebius’s AI Cloud is getting:

  • AI-Optimized, SLA-Driven Performance – Infinia guarantees extreme reliability and speed, accelerating AI model training and deployment while EXAScaler delivers unmatched data throughput and I/O consistency.
  • Seamless Scaling Across Cloud and On-Prem Environments – Businesses can instantly scale workloads across Nebius’ AI cloud, hybrid setups, and air-gapped systems without bottlenecks using EXAScaler’s industry-leading parallel file system.
  • Ultra-Fast Data Processing for AI at Scale – Infinia and EXAScaler eliminate data latency challenges, ensuring AI pipelines operate at peak efficiency—even for multi-trillion-parameter models.
  • Enterprise-Ready AI Infrastructure – With DDN and Nebius, enterprises can deploy AI workloads faster, more efficiently, and at a lower cost than ever before.

VAST Data has signed up CoreWeave, Lambda, Fluidstack, and X’s Colossus AI-focused GPU server farms for its storage, and has a partnership with Nebius. DDN is also supplying storage to Fluidstack and Colossus, and has now signed up Nebius as well. It and VAST are the two main GPU server farm storage suppliers.

Arkady Volozh

Nebius CEO Arkady Volozh said: “Our mission is to provide cutting-edge AI cloud infrastructure that enables innovation worldwide at speed and scale. With DDN, we’re able to help enterprises do this seamlessly and efficiently.”

Bootnote

Volozh is the cofounder of Russian Google analog Yandex, which launched its search engine in 1997, with Yandex NV later formed as a holding company for it. The outfit provided search, mapping, and other internet services, competing with Google in Russia and elsewhere. It was headquartered in Amsterdam and listed on Nasdaq. That listing was suspended when Russia invaded Ukraine in February 2022. The business restructured in July and August 2024, with Yandex in Russia sold off and the non-Russian operations becoming the Nebius Group. It relisted on Nasdaq at the end of 2024 and raised $700 million at that time.

Nebius Group has a datacenter in Mäntsälä, Finland, GPU Clusters in Paris and Kansas City, Missouri, and a 300MW datacenter being built in Vineland, New Jersey. It has achieved Reference Platform Cloud Partner status in Nvidia’s partner network and offers GB200 NVL72 and HGX B200 compute. Nvidia has invested in the Nebius Group and Volozh still runs it.

Nebius Group revenue in the final 2024 calendar quarter, its Q4, was $37.9 million, up 466 percent year-on-year. Full 2024 revenue was $117.5 million, and its cash and cash equivalents stood at $2.45 billion as of December 31, 2024.

It is guiding its March ARR to be at least $220 million and says its “projected December 2025 ARR of $750 million to $1 billion is well within reach.”