DDN announced a fourth generation of its A³I (Accelerated, Any-Scale AI) system at SC24, increasing both sequential data access speed and random read IOPS.
The A³I systems are storage systems built to feed Nvidia GPUs processing AI workloads, and support Nvidia’s GPU servers such as the DGX A100, DGX Pod, and DGX SuperPod. DDN bases them on its SFA (Storage Fusion Architecture) EXAScaler storage arrays running Lustre parallel file system software. Each of the three previous generations increased IO speed but not, until now, the IOPS rating. The fourth generation, the AI400X3 system, was built in close collaboration with Nvidia, according to DDN, and raises both the IO speed and the maximum number of IOPS.
DDN chief technology officer Sven Oehme said: “At DDN, we’re all about tearing down the roadblocks that companies hit when they try to scale AI and HPC.”
The existing AI400X2 and AI200X2 options now support denser disk enclosures, cutting cost per petabyte and conserving valuable datacenter space. The AI200X2 appliance scales up to 20 PB per rack, while the AI400X2 handles up to five QLC appliances per rack. The underlying EXAScaler storage system introduces Client-Side Compression, which reduces data size without the performance penalty seen with competing server-side compression schemes.
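The idea behind client-side compression can be sketched in a few lines: the client compresses blocks before they cross the network, so the storage servers spend no CPU cycles on compression. This is a generic illustration, not DDN's implementation; zlib stands in for whatever codec EXAScaler actually uses.

```python
import zlib

def client_write(payload: bytes) -> bytes:
    """Compress on the client, before the data crosses the network."""
    return zlib.compress(payload, level=1)  # fast setting: favor throughput over ratio

def client_read(stored: bytes) -> bytes:
    """The server returns compressed blocks unchanged; the client decompresses."""
    return zlib.decompress(stored)

# Repetitive, compressible data, standing in for AI training records.
data = b"sample,0.0,0.0,0.0\n" * 10_000
wire = client_write(data)
assert client_read(wire) == data  # round trip is lossless
print(f"{len(data):,} bytes reduced to {len(wire):,} bytes on the wire")
```

Besides shrinking what is stored, the compressed payload also uses less network bandwidth, which is why the work is pushed to the (many) clients rather than the (few) storage servers.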
The EXAScaler software now features native multi-tenancy, allowing secure data segregation for cloud providers and multi-user enterprise environments. The EXAScaler Management Framework (EMF) provides improved monitoring and health reporting tools.
The AI400 generations have developed as follows:
AI400X all-flash array
- Up to 48 GBps read
- Up to 34 GBps write
- 3 million IOPS
- 8 x EDR/HDR100 InfiniBand or 100GbE
- PCIe gen 3
AI400X2 all-flash array
- Up to 90 GBps read
- Up to 65 GBps write
- 3 million IOPS
- 8 x HDR InfiniBand or 100/200GbE
- PCIe gen 4
The AI400X2 supports up to 5 PB of QLC flash per rack.
AI400X2T – all-flash Turbo appliances
- Up to 120 GBps read
- Up to 75 GBps write
- 3 million IOPS
A single AI400X2T delivers 47 GBps read and 43 GBps write bandwidth to a single HGX H100 GPU server. Each AI400X2T appliance delivers over 110 GBps and 3 million IOPS directly to HGX H100 systems.
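As a quick back-of-the-envelope check on those figures (numbers taken from the text above; real per-server requirements vary by workload):

```python
# Back-of-the-envelope sizing using the AI400X2T figures quoted above.
appliance_read_gbps = 110   # per appliance ("over 110 GBps")
server_read_gbps = 47       # delivered to a single HGX H100 GPU server

servers_per_appliance = appliance_read_gbps // server_read_gbps
print(servers_per_appliance)  # 2: one appliance can feed two H100 servers at full rate
```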
AI400X3 – based on SFA400X3 all-flash array with 2 x AMD Genoa CPUs
- Up to 145 GBps read
- 95 to 116 GBps write
- 1.5/5 million IOPS (48/64 core SKUs)
- 4x QSFP112 InfiniBand NDR/400GbE (Nvidia BlueField-3 SuperNIC, for Spectrum-X)
- 4x OSFP InfiniBand NDR/400GbE
- 8x QSFP112 InfiniBand NDR200/200GbE
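Putting the generational read-bandwidth figures above side by side (a simple illustration using only the numbers from this article):

```python
# Per-generation read bandwidth from the lists above (GBps).
read_gbps = {"AI400X": 48, "AI400X2": 90, "AI400X2T": 120, "AI400X3": 145}

gens = list(read_gbps)
for prev, cur in zip(gens, gens[1:]):
    print(f"{prev} -> {cur}: {read_gbps[cur] / read_gbps[prev]:.2f}x read bandwidth")
```

Overall, read bandwidth has roughly tripled from the first-generation AI400X to the AI400X3.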
NVMe-oF and SAS expansion enclosures are coming for the SFA400X3 in 2025 to increase its scalability:
- NVMe-oF SE2420, 24-Bay Enclosures (2025 Q2 target)
- SAS4 SS9024, 90-Bay Enclosures (2025 Q4 target)