The MinIO AIStor v2.0 update adopts Nvidia GPUDirect Storage, BlueField-3 SuperNICs and NIM microservices to bring object data to AI inferencing faster.
Back in February 2022, MinIO co-founder and CEO AB Periasamy claimed Nvidia’s GPUDirect had a “poorly thought-out design” and that an “NVMe raw block storage interface with a control channel for metadata is terribly complicated for the AI/ML community.” He said he saw “no point in implementing GPUDirect because, in almost all of MinIO’s AI/ML high-performance deployments, the real bottleneck is either the 100 GbitE network or the NVMe drives and definitely not bounce buffers,” the staging areas in the host server’s DRAM where data from a storage drive is held temporarily on its way to the GPU.
How times change. Three years later, MinIO is adding support for Nvidia’s GPUDirect Storage for object data to its AIStor offering, saying it “drastically improves overall GPU server efficiency” and “delivers a significant increase in CPU efficiency on the Nvidia GPU server by avoiding the traditional data path through the CPU, freeing up compute for additional AI data processing while reducing infrastructure costs via support for Ethernet networking fabrics.”
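For context on what that bypass looks like in practice, the GPUDirect Storage data path can be exercised from Python with Nvidia’s KvikIO bindings for the cuFile API, which DMA file data straight into GPU memory rather than staging it in a host bounce buffer. The sketch below is illustrative only: it assumes a GDS-capable filesystem mount at a made-up path and shows the generic cuFile path, not MinIO’s AIStor object-level integration.

```python
# Minimal GPUDirect Storage read sketch using Nvidia's KvikIO bindings
# for the cuFile API. Assumes a GDS-capable filesystem mount; the file
# path below is a placeholder, not a real AIStor endpoint.
import cupy
import kvikio

# Allocate the destination buffer directly in GPU memory.
gpu_buf = cupy.empty(64 * 1024 * 1024, dtype=cupy.uint8)  # 64 MiB

# Open the file and read its contents straight into GPU memory,
# bypassing the host DRAM bounce buffer a conventional read would use.
f = kvikio.CuFile("/mnt/gds-volume/shard-0000.bin", "r")
try:
    nbytes = f.read(gpu_buf)  # blocking read into device memory
    print(f"read {nbytes} bytes directly into GPU memory")
finally:
    f.close()
```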

It’s also embracing BlueField-3 SuperNICs to hook its object storage up to Nvidia’s GPU servers, with MinIO’s object storage software running natively on the BlueField-3’s Arm compute complex. MinIO says that its software, with its 100 MB footprint, is the “first and only object storage software” to run natively on BlueField-3, where it uses Arm’s Scalable Vector Extension (SVE) instruction set. In effect, MinIO object storage can now run on the BlueField-3 NIC itself, hooked up to a box of flash drives.

MinIO says AIStor is Spectrum-X ready, “ensuring seamless integration with Nvidia’s next-generation networking stack for AI and high-performance workloads.”
MinIO is also integrating AIStor’s promptObject API with Nvidia’s NIM microservices infrastructure, “which allows users to ‘talk’ to unstructured objects in the same way one would engage an LLM, to deliver faster inference via model optimizations for Nvidia hardware.” NIM provides pre-built Docker containers, Helm charts, and a GPU Operator that automate the deployment and management of drivers and the rest of the inference stack on the Nvidia GPU server.
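MinIO has not published the promptObject request format here, so the following is a purely hypothetical sketch of what prompting an object over an S3-style HTTP API might look like. The endpoint, query parameter, payload fields and header are all assumptions for illustration, not the documented API.

```python
# Hypothetical promptObject-style request. The host, object path, query
# string and payload fields are invented for illustration; consult
# MinIO's AIStor documentation for the real API shape.
import requests

AISTOR_ENDPOINT = "https://aistor.example.com"   # assumed endpoint
OBJECT_PATH = "/datalake/quarterly-report.pdf"   # assumed bucket/object

resp = requests.post(
    f"{AISTOR_ENDPOINT}{OBJECT_PATH}?prompt",    # assumed API action
    json={"prompt": "Summarize the key revenue figures in this document."},
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # inference response produced via a NIM-hosted model
```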

Periasamy now says: “MinIO’s strong alignment with Nvidia allows us to rapidly innovate AI storage at multi-exabyte scale, leveraging their latest infrastructure. This approach delivers high-performance object storage on commodity hardware, enabling enterprises to future-proof their AI, maximize GPU utilization, and lower costs.”
The new AIStor features are available to beta customers in private preview. AIStor support for Nvidia GPUDirect Storage and native integration with the Nvidia BlueField-3 networking platform will be released in alignment with Nvidia’s GA calendar.
Read more in MinIO AIStor blogs on GPUDirect, BlueField-3 integration, NIM Microservices, and overall AIStor Nvidia integration.