DDN is making a splash at SC25, launching a unified CORE AI and HPC data plane covering its EXAScaler Lustre (file) and Infinia (object) storage systems, along with new AI400X3i and AI2200 hardware.
The company says its storage systems support 1 million GPUs across more than 11,000 customers. EXAScaler is a Lustre parallel filesystem storage product using AI400X3-series hardware. It is used in many HPC and supercomputing installations, such as the Nebius AI neocloud, and has topped the IO500 10-node category storage list. Infinia is its freshly developed, AI-focused object storage system, and it uses the AI2200 hardware on premises.

DDN CEO Alex Bouzari claimed: “We are to data what Nvidia is to compute. Together, we are building the intelligent foundation of the AI Factory era.”
The company says it is replacing the patchwork of separate HPC and AI systems with one unified data engine and data plane that “unites DDN’s proven EXAScaler and Infinia technologies into a single, high-performance data fabric that feeds, manages, and optimizes the entire AI lifecycle—from simulation to training, inference, and retrieval-augmented generation (RAG).” Think of CORE as a software abstraction layer sitting above EXAScaler and Infinia on-premises deployments and public cloud instances, managed by DDN Insight software.
Customers can run DDN CORE on-premises or in the cloud, with claimed consistent AI data performance across any environment. The cloud possibilities include Google Cloud Managed Lustre, and Infinia software running in Oracle Cloud Infrastructure (OCI). DDN says CORE also supports the CoreWeave, Nebius, and Scaleway neoclouds.
It claims that CORE provides “up to 15× faster checkpointing and 4× faster model loading, driving >99 percent GPU utilization in production AI environments,” and “Integrated caching and token reuse deliver 25× faster response and 60 percent lower cost per query,” as well as “up to 11× higher performance-per-watt and 40 percent lower power consumption,” without identifying the systems used in the comparisons.
The new AI400X3i SE-2 and SE-4 EXAScaler Lustre parallel filesystem storage arrays come in 2RU enclosures. They use AMD Genoa CPUs in their controllers, which manage data access to NVMe SSDs connected over PCIe Gen 5. The systems are integrated with Nvidia BlueField-3 DPU/NICs and Spectrum-X Ethernet fabrics. DDN says the AI400X3 series is the core AI data platform for Nvidia DGX SuperPODs and Nvidia Cloud Providers.
The A1400X3i offers:
- 140 GB/s sequential read throughput and 110 GB/s write throughput, up to 70 percent more than the previous generation
- 4 million IOPS per node and up to 70 million IOPS in a single rack, compared with the AI400X3's 1.4 million IOPS per node
- 40 percent datacenter savings in power, cooling, and space
The new AI2200 Infinia object storage system is claimed to be "doubling throughput and tokens-per-watt for hyperscale AI factories," with no base comparison system identified.
DDN is launching an AI FASTRACK program to speed deployment of its systems, promising deployment in days and weeks instead of months. It includes the Enterprise AI HyperPOD turnkey config, a cloud.dd.com portal for launching certified DDN environments, and Ignite AI, which converts existing EXAScaler HPC clusters into AI pipelines within weeks.
It also includes the general availability of Google Cloud Managed Lustre, based on EXAScaler, and Infinia on Oracle Cloud Infrastructure. There is no information yet available about Infinia running in the Google Cloud or Lustre in OCI.
There is as yet no availability or datasheet information for DDN's AI400X3i or AI2200 storage systems either. We have asked DDN about CORE's cloud coverage, datasheets, and availability.
Read a blog to find out more about how DDN views its HPC and AI storage system prowess.
Bootnote

DDN is supplying storage for the €544 million ($630.2 million), 1 exaflop, AI and research Alice Recoque supercomputer in France. It is being built by Eviden (Atos) and will use AMD EPYC Venice CPUs, Instinct MI430X GPUs (432 GB of HBM4 and 19.6 TB/s of memory bandwidth per GPU), and FPGAs, with Eviden BXI v3 networking, all integrated into a BullSequana XH3500 platform. Altogether there will be 94 racks in the system, which will be installed in CEA's Very Large Computing Center (TGCC) at Bruyères-le-Châtel, southwest of Paris.
DDN President and co-founder Paul Bloch said: “This deployment reinforces DDN’s leadership in data-intelligence infrastructure for advanced HPC and AI. By delivering extreme performance, efficiency and data insight at massive scale, we help accelerate discovery, strengthen European competitiveness to tackle high-impact challenges.”
Alice Recoque was a French computer scientist who died in 2021, aged 91.