The latest major release of IBM’s Storage Scale, v6.0, has a Data Acceleration Tier (DAT): a high-performance NVMe-oF-based storage layer designed to deliver extreme IOPS and ultra-low latency for real-time AI inferencing workloads.
Storage Scale, originally called GPFS, is IBM’s parallel file system software. It is popular in supercomputing and high-performance computing circles, as well as in enterprise computing shops adopting HPC-style IT for workloads needing fast file IO, such as GenAI. It is adapting to the GenAI era by speeding data delivery to GPU servers with a new storage tier for low-latency, high-speed access: the DAT layer, which uses Asymmetric Data Replication. This provides a ‘performance’ data replica for fast reads and a ‘reliable’ data replica for data safety, protected by erasure coding.

The performance copy of the data is a persistent, non-redundant cache. It is maintained with lazy consistency, meaning that missed updates (e.g., while a performance drive is offline) are corrected on the next read after the drive comes back online.
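To make that read-repair behavior concrete, here is a minimal Python sketch of the pattern as described: writes always land on the erasure-coded reliable replica and, best-effort, on the performance cache; reads prefer the cache and repair it from the reliable copy when an update was missed. The class and method names are illustrative assumptions, not Storage Scale’s actual implementation.

```python
class AsymmetricReplicaStore:
    """Sketch of a performance cache backed by a reliable, erasure-coded pool."""

    def __init__(self):
        self.reliable = {}     # stands in for the erasure-coded pool (ESS 6000)
        self.performance = {}  # stands in for the non-redundant NVMe cache
        self.cache_online = True
        self.missed = set()    # updates the cache missed while a drive was offline

    def write(self, key, value):
        self.reliable[key] = value           # durability comes from this copy
        if self.cache_online:
            self.performance[key] = value    # fast copy kept in step when possible
            self.missed.discard(key)
        else:
            self.missed.add(key)             # cached copy is now stale

    def read(self, key):
        if self.cache_online and key in self.performance and key not in self.missed:
            return self.performance[key]     # fast path: serve from the NVMe cache
        value = self.reliable[key]           # fall back to the reliable replica
        if self.cache_online:
            self.performance[key] = value    # lazy repair on the next read
            self.missed.discard(key)
        return value


store = AsymmetricReplicaStore()
store.write("block-1", b"v1")
store.cache_online = False         # performance drive goes offline...
store.write("block-1", b"v2")      # ...so the cache misses this update
store.cache_online = True
assert store.read("block-1") == b"v2"   # stale entry is repaired on read
```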
There are two deployment options for the performance replica, or pool, both of which keep the reliable pool on the Storage Scale System 6000 (ESS 6000) appliance:
- Centralized DAT: This configuration is optimized for ease of use and meets general AI workload IOPS requirements, with the performance pool deployed on the Storage Scale System 6000 and accessed by clients over NVMe-oF. It has achieved 29 million IOPS with 32 nodes.
- Distributed DAT: This configuration is optimized for higher, indeed extreme, AI workload IOPS needs, with the performance pool deployed on client-local storage (GPU server DAS); performance is determined by the drive configuration and compute capabilities of the client nodes. It has achieved 16 million IOPS with 16 nodes.
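A quick back-of-the-envelope reading of those two figures, assuming roughly linear scaling (which real deployments will not match exactly):

```python
# Per-node IOPS implied by IBM's published figures, under a linear-scaling assumption.
centralized_per_node = 29_000_000 / 32   # ~0.91M IOPS per node
distributed_per_node = 16_000_000 / 16   # ~1.0M IOPS per node
print(f"Centralized DAT: {centralized_per_node / 1e6:.2f}M IOPS/node")
print(f"Distributed DAT: {distributed_per_node / 1e6:.2f}M IOPS/node")
```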
An ESS 6000 supports a single DAT filesystem, and a DAT filesystem may not span more than one ESS 6000; the appliance should be the all-NVMe flash configuration, not a hybrid config.

All in all, IBM says, Storage Scale v6.0 has these AI-relevant features:
- Content Aware Storage (CAS): introduces async notifications, enabling faster, event-driven data ingestion into AI inferencing workflows (see the sketch after this list).
- Expanded Nvidia integration with CNSA (Container Native Storage Access) support for GPUDirect Storage, enhanced Base Command Manager support, and Nvidia Nsight integration.
- Alignment with Nvidia BasePOD/SuperPOD and Grace Blackwell platforms, plus Nvidia cloud platform and Nvidia-certified storage certifications, ensuring performance and compatibility.
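IBM has not detailed the CAS notification interface here, so the following Python sketch shows only the generic event-driven ingest pattern that async notifications enable, with no filesystem polling; the queue, event fields, and ingest hook are all illustrative assumptions rather than the CAS API.

```python
import queue
import threading

# Stand-in for a CAS async notification feed; in a real deployment the events
# would arrive from Storage Scale's notification mechanism, not a local queue.
events = queue.Queue()

def ingest(path):
    """Hypothetical hook that pushes new or changed files into an inference pipeline."""
    print(f"ingesting {path} for inference")

def consumer():
    while True:
        event = events.get()       # block until a notification arrives
        if event is None:          # sentinel: shut down cleanly
            break
        if event["op"] in ("CREATE", "UPDATE"):
            ingest(event["path"])  # event-driven: no polling of the filesystem

t = threading.Thread(target=consumer)
t.start()
events.put({"op": "CREATE", "path": "/gpfs/fs1/incoming/doc-001.pdf"})
events.put(None)
t.join()
```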
It also has one-button GUI upgrades, enhanced prechecks, and unified protocol deployment to simplify operations. API-driven control plane enhancements help enable automation of features such as quotas, and there are improved problem-determination diagnostics for expels and snapshots to streamline root-cause analysis and remediation.
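As an illustration of that API-driven control, here is a hedged Python sketch of setting a fileset quota through the Scale management REST API. The host, credentials, filesystem, and fileset names are placeholders, and the endpoint path and payload fields follow the general shape of that API but should be treated as assumptions to check against IBM’s documentation.

```python
import requests

# Placeholders: host, credentials, filesystem and fileset names are not real.
BASE = "https://scale-gui.example.com:443/scalemgmt/v2"
AUTH = ("admin", "secret")

# Field names below are assumptions modeled on the Scale management API's style,
# not verified against a live cluster.
payload = {
    "operationType": "setQuota",
    "quotaType": "FILESET",
    "objectName": "inference-data",
    "blockSoftLimit": "5T",
    "blockHardLimit": "6T",
}
resp = requests.post(
    f"{BASE}/filesystems/fs1/quotas",
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only: skip TLS verification for a self-signed GUI cert
)
resp.raise_for_status()
print(resp.json())
```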
IBM says support for NFS nconnect should be added in Storage Scale 6.0.0, enabling high-throughput AI workloads over standard Ethernet. It intends to add SMB Multichannel support in future releases of Storage Scale, which would enable high-speed data acquisition from Windows-based instruments and faster subsequent data processing by Windows-based applications.
In a future release of the software, IBM intends to remove the DAT restriction that it cannot support a remote cluster.
Bootnote
From a Microsoft website: The nconnect mount option allows you to specify the number of connections (network flows) that should be established between the NFS client and NFS endpoint, up to a limit of 16. Traditionally, an NFS client uses a single connection between itself and the endpoint. Increasing the number of network flows increases the upper limits of I/O and throughput significantly.
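For reference, a minimal example of using nconnect on a Linux client (kernel 5.3 or later), wrapped in Python for consistency with the sketches above; the server name, export path, and mount point are placeholders.

```python
import subprocess

# Placeholders: replace the server, export, and mount point with real values.
opts = "vers=4.1,nconnect=8"   # up to 16 connections, per the nconnect limit
subprocess.run(
    ["mount", "-t", "nfs", "-o", opts, "nfs-server:/export/data", "/mnt/data"],
    check=True,  # raise if the mount fails; running this requires root
)
```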