Nvidia deepens AI datacenter push as DDN, HPE, NetApp integrate storage stacks

Nvidia is turning itself into an AI datacenter infrastructure company, and storage suppliers are queuing up to integrate their products into its AI software stack. DDN, HPE, and NetApp followed VAST Data in announcing their own integrations at Taiwan’s Computex 2025 conference.

DDN

DDN and Nvidia launched a reference design for Nvidia’s AI Data Platform to help businesses feed unstructured data like documents, videos, and chat logs to AI models. Santosh Erram, DDN’s global head of partnerships, declared: “If your data infrastructure isn’t purpose-built for AI, then your AI strategy is already at risk. DDN is where data meets intelligence, and where the future of enterprise AI is being built.”

The reference design combines DDN Infinia with Nvidia’s NIM and NeMo Retriever microservices, RTX PRO 6000 Blackwell Server Edition GPUs, and Nvidia networking. Nvidia’s Pat Lee, VP of Enterprise Strategic Partnerships, said: “Together, DDN and Nvidia are building storage systems with accelerated computing, networking, and software to drive AI applications that can automate operations and amplify people’s productivity.”

HPE

Jensen Huang, Nvidia

Nvidia founder and CEO Jensen Huang stated: “Enterprises can build the most advanced Nvidia AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI.”

HPE president and CEO Antonio Neri spoke of a “strong collaboration” with Nvidia as HPE announced server, storage, and cloud optimizations for Nvidia’s AI Enterprise cloud-native offering of NIM, NeMo, and cuOpt microservices, and Llama Nemotron models. The Alletra Storage MP X10000 unstructured data array will gain an SDK for Nvidia’s AI Data Platform reference design, offering customers accelerated performance and intelligent pipeline orchestration for agentic AI.

The SDK will support flexible inline data processing, vector indexing, metadata enrichment, and data management. It will also provide remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000.
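HPE has not published the SDK’s API, but the pipeline it describes — enrich records with metadata, embed them, index the vectors, search at query time — can be sketched in outline. Everything below (function names, the toy hash-based embedding, the in-memory index) is a hypothetical illustration of the general pattern, not the actual X10000 SDK; a real deployment would call a GPU-hosted embedding model and a production vector store.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash word tokens into a fixed-size unit vector.
    Stand-in for a real embedding model (e.g. one served via NIM)."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def enrich(doc: dict) -> dict:
    """Metadata enrichment: attach derived fields alongside the payload."""
    doc["meta"] = {"length": len(doc["text"]),
                   "source": doc.get("source", "unknown")}
    return doc

class VectorIndex:
    """Minimal in-memory vector index with cosine-similarity search."""
    def __init__(self):
        self.items = []  # list of (vector, document) pairs

    def add(self, doc: dict) -> None:
        self.items.append((embed(doc["text"]), enrich(doc)))

    def search(self, query: str, k: int = 1) -> list[dict]:
        q = embed(query)
        scored = [(sum(a * b for a, b in zip(q, v)), d)
                  for v, d in self.items]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [d for _, d in scored[:k]]

index = VectorIndex()
index.add({"text": "quarterly sales report for EMEA region", "source": "docs"})
index.add({"text": "support chat log about GPU driver crash", "source": "chat"})
print(index.search("GPU crash in chat", k=1)[0]["source"])  # expected: chat
```

The RDMA piece of the announcement addresses a different layer: moving these vectors and payloads between storage and GPU memory without a CPU copy, which no pure-Python sketch can capture.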

Antonio Neri, HPE

HPE ProLiant Compute DL380a Gen12 servers featuring RTX PRO 6000 Blackwell GPUs will be available to order on June 4. HPE OpsRamp cloud-based IT operations management (ITOM) software is expanding its AI infrastructure optimization features to support the upcoming RTX PRO 6000 Blackwell Server Edition GPUs for AI workloads. The OpsRamp integration with Nvidia’s infrastructure including its GPUs, BlueField DPUs, Spectrum-X Ethernet networking, and Base Command Manager will provide granular metrics to monitor the performance and resilience of the HPE-Nvidia AI infrastructure.

HPE’s Private Cloud AI will support the Nvidia Enterprise AI Factory validated design. HPE says it will also support feature branch model updates from Nvidia AI Enterprise, which include AI frameworks, Nvidia NIM microservices for pre-trained models, and SDKs. Support for feature branch models will allow developers to test and validate software features and optimizations for AI workloads.

NetApp

NetApp’s AIPod product – Nvidia GPUs twinned with NetApp ONTAP all-flash storage – now supports the Nvidia AI Data Platform reference design, enabling RAG and agentic AI workloads. The AIPod can run Nvidia NeMo microservices, connecting them to its storage. San Jose-based NetApp has been named as a key Nvidia-Certified Storage partner in the new Enterprise AI Factory validated design.
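RAG here means retrieval-augmented generation: the retriever pulls relevant records from storage and prepends them to the model prompt so the answer is grounded in business data rather than the model’s training set alone. A generic sketch of the prompt-assembly step follows; the function name and prompt template are illustrative, not a NetApp or Nvidia API.

```python
def build_rag_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Assemble a grounded prompt from retrieved context passages."""
    context = "\n".join(f"- {d}" for d in retrieved_docs)
    return (f"Use the following context to answer.\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

# In a real AIPod deployment, retrieval would query the storage-backed
# index and generation would go to an LLM endpoint; here both are stubbed.
docs = ["AIPod pairs Nvidia GPUs with ONTAP all-flash storage."]
prompt = build_rag_prompt("What storage does AIPod use?", docs)
print(prompt)
```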

Nvidia’s Rob Davis, VP of Storage Technology, said: “Agentic AI enables businesses to solve complex problems with superhuman efficiency and accuracy, but only as long as agents and reasoning models have fast access to high-quality data. The Nvidia AI Data Platform reference design and NetApp’s high-powered storage and mature data management capabilities bring AI directly to business data and drive unprecedented productivity.”

NetApp told us “the AIPod Mini is a separate solution that doesn’t incorporate Nvidia technology.”

Availability

  • HPE Private Cloud AI will add feature branch support for Nvidia AI Enterprise by Summer 2025.
  • HPE Alletra Storage MP X10000 SDK and direct memory access to Nvidia accelerated computing infrastructure will be available starting Summer 2025.
  • HPE ProLiant Compute DL380a Gen12 with RTX PRO 6000 Server Edition will be available to order starting June 4, 2025.
  • HPE OpsRamp Software will be available in time to support RTX PRO 6000 Server Edition.

Bootnote

Dell is making its own AI Factory and Nvidia-related announcements at its Dell Technologies World conference in Las Vegas. AI agent builder DataRobot announced its inclusion in the Nvidia Enterprise AI Factory validated design for Blackwell infrastructure. DataRobot and Nvidia are collaborating on multiple agentic AI use cases that leverage this validated design.