DDN unveils Infinia 2.0 object storage for faster AI data pipelines

DDN has announced the Infinia 2.0 object store for AI training and inference, claiming up to a 100x improvement in AI data acceleration and a 10x gain in datacenter and cloud cost efficiency.

Infinia is an object storage system designed from the ground up on a key-value foundation. We have recently written about Infinia’s technology here.
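To make the key-value foundation concrete, here is a minimal sketch of how an object API (put, get, head, list) can sit on top of a flat key-value namespace. The class and method names are hypothetical and purely illustrative; this is not DDN's implementation.

```python
# Toy sketch: an S3-style object API layered on a key-value store.
# All names here are illustrative assumptions, not Infinia internals.

class KVObjectStore:
    """Maps (bucket, key) pairs to (metadata, payload) values."""

    def __init__(self):
        self._kv = {}  # the flat key-value namespace underneath the object API

    def put_object(self, bucket: str, key: str, data: bytes, **metadata) -> None:
        self._kv[(bucket, key)] = ({"size": len(data), **metadata}, data)

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._kv[(bucket, key)][1]

    def head_object(self, bucket: str, key: str) -> dict:
        # Metadata lives beside the payload, so a HEAD never touches the data.
        return self._kv[(bucket, key)][0]

    def list_objects(self, bucket: str, prefix: str = "") -> list:
        # Listing becomes a scan over the key space - one reason key-value
        # designs can make metadata operations (like object lists) fast.
        return sorted(k for b, k in self._kv if b == bucket and k.startswith(prefix))
```

A caller would use it like any object store: `store.put_object("train", "shard-0001.tar", payload)` followed by `store.list_objects("train")`.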

DDN CEO and co-founder Alex Bouzari stated: “85 of the Fortune 500 businesses run their AI and HPC applications on DDN’s Data Intelligence Platform. With Infinia, we accelerate customers’ data analytics and AI frameworks with orders-of-magnitude faster model training and accurate real-time insights while future-proofing GPU efficiency and power usage.”

His DDN founding partner, president Paul Bloch, said: “Our platform has already been deployed at some of the world’s largest AI factories and cloud environments, proving its capability to support mission-critical AI operations at scale.” DDN has previously said Elon Musk’s xAI is a customer.

AI data storage is front and center in DDN’s intentions for Infinia. CTO Sven Oehme declared: “AI workloads require real-time data intelligence that eliminates bottlenecks, accelerates workflows, and scales seamlessly in complex model listing, pre and post-training, RAG, Agentic AI, Multimodal environments, and inference. Infinia 2.0 was designed to maximize AI value in these areas, while also delivering real-time data services, highly efficient multi-tenancy, intelligent automation, and a very powerful AI-native architecture.”

Infinia includes event-driven data movement, multi-tenancy, a hardware-agnostic design, more than 99.999 percent uptime, up to 10x always-on data reduction, fault-tolerant network erasure coding, and automated QoS. It lines up with other fast, recently accelerated object stores such as Cloudian’s HyperStore, MinIO, Quantum Myriad, Scality’s RING, and VAST Data.
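The fault-tolerant erasure coding mentioned above can be illustrated with a toy single-parity code: an XOR parity shard lets any one lost shard be rebuilt from the survivors. Production systems use wider Reed-Solomon-style codes spread across network nodes; this sketch only shows the principle, and the function names are our own.

```python
# Toy single-parity erasure code. One XOR parity shard allows
# recovery of any single lost shard. Illustrative only - real
# erasure coding tolerates multiple failures via wider codes.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shards(data: bytes, k: int) -> list:
    """Split data into k equal data shards plus one XOR parity shard."""
    size = -(-len(data) // k)  # ceiling division
    shards = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    shards.append(reduce(xor_bytes, shards))  # parity = XOR of all data shards
    return shards

def recover(shards: list, lost: int) -> bytes:
    """Rebuild the shard at index `lost` by XOR-ing all survivors."""
    return reduce(xor_bytes, (s for i, s in enumerate(shards) if i != lost))
```

Losing any one shard (data or parity) is survivable, since XOR-ing the remaining shards reproduces it exactly.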

The Infinia software works with Nvidia’s NeMo and NIM microservices, GPUs, BlueField-3 DPUs, and Spectrum-X networking to speed AI data pipelines. DDN claims it “integrates AI inference, data analytics, and model preparation across core, cloud, and edge environments.” Infinia is integrated with Trino, Apache Spark, TensorFlow, PyTorch, “and other AI frameworks to accelerate AI applications.”

Nvidia DGX platforms VP Charlie Boyle commented: “Combined with Nvidia accelerated computing and enterprise software, platforms like DDN Infinia 2.0 provide businesses the infrastructure they need to put their data to work.”

Infinia, with TBps bandwidth and sub-millisecond latency, “outperforms AWS S3 Express by 10x, delivering unprecedented S3 performance.” Other DDN superlatives include 100x AI data acceleration, 10x faster AI workloads with 100x better efficiency than Hadoop, based on “independent benchmark tests,” 100x faster metadata processing, 100x faster object lists per second than AWS, and 25x faster querying for AI model training and inference.

An Infinia system “scales from terabytes (TB) to exabytes (EB), supporting up to 100,000-plus GPUs and 1 million simultaneous clients in a single deployment, enabling large-scale AI innovation.” DDN says it is “proven in real-world datacenter and cloud deployments from 10 to 100,000-plus GPUs for unmatched efficiency and cost savings at any scale, ensuring maximum GPU utilization.”

Supermicro CEO Charles Liang said: “By combining DDN’s data intelligence platform Infinia 2.0 with Supermicro’s cutting-edge server workload-optimized solutions, the companies collaborated on one of the world’s largest AI datacenters.” We think this refers to xAI’s Colossus datacenter expansion for Grok 3 phase 1. DDN provides more information on Infinia here.