DDN unveils Inferno and xFusionAI with hybrid file-object tech

Analysis: DDN launched its Inferno fast object appliance and xFusionAI hybrid file+object technology at Nvidia’s GTC 2025. We examine both technologies to see what they offer.

From the AI training and inference point of view, DDN has two storage technologies. Its originally HPC-focused EXAScaler line presents a Lustre-based parallel file system running on NVMe SSDs in scale-out storage nodes, with client software executing on the GPU server nodes. The newer Infinia technology provides a base key-value store on which data access protocols are layered, starting with the S3 object protocol; file, block, and other protocols will be added over time.
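To make that layering concrete: because Infinia presents S3 first, any S3-compatible client should be able to talk to it. Here is a minimal sketch in Python using boto3; the endpoint URL, bucket name, and credentials are placeholders of ours, not DDN specifics.

```python
import boto3

# Minimal sketch: an S3-first store is reachable with any S3-compatible client.
# The endpoint, bucket, and credentials below are placeholders, not DDN details.
s3 = boto3.client(
    "s3",
    endpoint_url="https://infinia.example.internal:9000",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write a training shard as an object, then read it back.
s3.put_object(Bucket="training-data", Key="shard-0001.tar", Body=b"...")
shard = s3.get_object(Bucket="training-data", Key="shard-0001.tar")["Body"].read()
```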

Infinia v2.0, released in February, is designed to eliminate AI bottlenecks, accelerate workflows, and scale for complex model training. It delivers real-time data services, multi-tenancy, intelligent automation, and has “a very powerful AI-native architecture,” DDN says.

But Infinia does not yet support file access, and most AI development work to date has relied on fast file access, with slower object storage serving as the protocol for mass unstructured data stores. Loosely speaking, file is AI’s past, shares its present with growing object access, and object will be increasingly important in its future. AI training and inference will generally live in a hybrid file+object world for the foreseeable future.

The Inferno product adds Nvidia’s Spectrum-X switch, with RoCE adaptive routing, to Infinia storage. DDN says testing showed Inferno outperforming AWS S3-based inference stacks by 12x with sub-millisecond latency and 99 percent GPU utilization. DDN states that Inferno “is a high density and low power 1RU appliance with 2 or 4 BlueFields.” These will be Nvidia’s Arm-powered SmartNICs, linking to BlueFields in Nvidia-powered GPU servers.

Inferno uses “high-performance NVMe drives for ultra-low latency … supports seamless expansion,” and “is optimized for AI model training and inference.” It is “fully optimized with DDN’s AI and HPC ecosystem, ensuring streamlined deployment.” 

There is no publicly available Inferno configuration or availability information. If such an appliance used 122 TB QLC SSDs, we could be looking at a ten-bay chassis with 1.2 PB of capacity. A rack containing 30 of them would hold roughly 36 PB, with an NVMe/GPUDirect for Objects-based network fabric composed of BlueField-3s talking across Spectrum-X networking to BlueField-3s in the GPU servers.
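A quick back-of-the-envelope check of that estimate; the drive count per chassis and the chassis count per rack are our assumptions:

```python
# Back-of-the-envelope capacity math for the hypothetical Inferno rack above.
drive_tb = 122           # 122 TB QLC SSD
drives_per_chassis = 10  # assumed ten-bay 1RU chassis
chassis_per_rack = 30    # assumed rack density

chassis_pb = drive_tb * drives_per_chassis / 1000  # 1.22 PB per chassis
rack_pb = chassis_pb * chassis_per_rack            # ~36.6 PB per rack
print(f"{chassis_pb:.2f} PB per chassis, {rack_pb:.1f} PB per rack")
```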

DDN describes xFusionAI technology as different, stating that it is “engineered as an integrated AI storage and data management solution that balances high-speed parallel file storage with cost-efficient object storage for AI workflows … There is a single pool of storage that is logically partitioned between EXAScaler and Infinia, rather than two completely separate systems. xFusionAI can be deployed on a unified hardware infrastructure, with both EXAScaler and Infinia software components running within the same system.”

It “features a single user interface that provides visibility and control over both EXAScaler and Infinia environments, ensuring streamlined management.”

Infinia is not merely a backend object store, says the vendor, adding that it serves as an intelligent data management layer to complement EXAScaler’s high-speed file performance. Data can be moved between EXAScaler and Infinia using automated policies or manual tiering, allowing users to optimize storage costs and performance.
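DDN has not published the policy interface, but age- and access-based tiering is a familiar pattern elsewhere. Here is a purely illustrative sketch of what such policy logic could look like; every name and field is our own invention, not DDN’s API.

```python
from dataclasses import dataclass
import time

# Purely illustrative tiering logic; the policy fields and tier names are
# our own invention, not DDN's published API.
@dataclass
class TierPolicy:
    max_idle_days: int           # demote files not read within this window
    hot_tier: str = "exascaler"  # fast parallel file partition
    cold_tier: str = "infinia"   # cost-efficient object partition

def plan_moves(files, policy, now=None):
    """Return (path, source, destination) tuples for files due for demotion."""
    now = now or time.time()
    cutoff = now - policy.max_idle_days * 86400
    return [
        (f["path"], policy.hot_tier, policy.cold_tier)
        for f in files
        if f["last_access"] < cutoff
    ]

# Example: demote anything idle for more than 30 days.
files = [
    {"path": "/train/shard-0001.tar", "last_access": time.time() - 45 * 86400},
    {"path": "/train/shard-0002.tar", "last_access": time.time()},
]
print(plan_moves(files, TierPolicy(max_idle_days=30)))
```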

In effect, we have high-speed file storage (EXAScaler) being added to Infinia, possibly as a stopgap until Infinia’s native file system support arrives. This means xFusionAI controllers will be more capable than Inferno’s object-only Infinia controllers, as they must manage both file and object environments and “move” data between them. We put “move” in quotes because the data might not physically move at all; it could be remapped so that it is logically transferred from an EXAScaler partition to the Infinia partition, and vice versa. Of course, if the Infinia partition used slower QLC drives while the EXAScaler partition used faster TLC drives, then data would physically move.

It will be interesting to understand the details of this hybrid system as they emerge. One insight is that xFusionAI gives DDN a combined file+object AI training and inference storage system to compete with VAST Data’s hybrid file+object storage, which also offers block access, though block has been less important in the AI world so far. DDN says the “product is coming soon. Pricing details are available upon request and depend on configuration, capacity, and deployment scale.”