Pure Storage unveils anti-hallucination AI tech at Nvidia’s GTC

Pure Storage is announcing anti-hallucination Retrieval-Augmented Generation (RAG) reference architectures at Nvidia’s GTC event, designed to add an organization’s own data to GenAI chatbots and make their answers more accurate.

GTC, Nvidia’s GPU Technology Conference, is being held this week in San Jose, CA, with an AI focus.

Rob Lee, Pure CTO, said: “Pure Storage recognized the rising demand for AI early on, delivering an efficient, reliable, and high-performance platform for the most advanced AI deployments. Embracing our longstanding collaboration with Nvidia, the latest validated AI reference architectures and generative AI proofs of concept emerge.”

There are four aspects to Pure’s news:

  • Retrieval-Augmented Generation (RAG) Pipeline for AI Inference: This uses Nvidia’s NeMo Retriever microservices and GPUs with Pure’s all-flash storage, so enterprises can draw on their own internal data for more accurate AI inference (a generic sketch of the RAG pattern follows this list). 
  • Vertical RAG Development: Pure Storage’s first vertical RAG, built with Nvidia for financial services, summarizes and queries massive datasets with higher accuracy than off-the-shelf LLMs. Financial services firms can use it to create instant summaries and analysis from financial documents and other sources. Additional vertical RAGs for healthcare and the public sector are to follow. 
  • Certified Nvidia OVX Server Storage Reference Architecture: Pure Storage has achieved Nvidia OVX Server Storage validation, giving enterprise customers and channel partners storage reference architectures, validated against benchmarks, that provide a strong infrastructure foundation for cost- and performance-optimized AI hardware and software solutions. This validation complements Pure Storage’s certification for Nvidia’s DGX BasePOD announced last year. 
  • Expanded Investment in AI Partner Ecosystem: Pure Storage has new partnerships with ISVs like Run.AI and Weights & Biases. Run.AI optimizes GPU utilization through advanced orchestration and scheduling, while the Weights & Biases AI Development platform enables ML teams to build, evaluate, and govern the model development life cycle.
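
Pure has not published code alongside the announcement, but the RAG pattern its inference pipeline follows can be sketched briefly. The Python below is a generic, self-contained illustration, not Pure’s reference architecture or Nvidia’s NeMo Retriever API: the keyword-overlap retriever and the prompt builder are hypothetical stand-ins for embedding-based vector search and an LLM call.

```python
# Generic RAG sketch: retrieve relevant internal documents, then pass them to
# the model with the user's question so answers are grounded in the
# organization's own data. Hypothetical stand-ins only; a production pipeline
# would use an embedding model, a vector store, and an LLM service.

def score(question: str, doc: str) -> int:
    """Toy relevance score: shared-word count (real systems use embeddings)."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents from the internal corpus most relevant to the question."""
    return sorted(corpus, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Augment the question with retrieved context before it reaches the LLM."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    corpus = [
        "Q3 revenue grew 12 percent year over year.",
        "The datacenter refresh is scheduled for fiscal 2025.",
        "Employee headcount was flat in the last quarter.",
    ]
    question = "How much did revenue grow in Q3?"
    prompt = build_prompt(question, retrieve(question, corpus))
    print(prompt)  # this grounded prompt would then be sent to the LLM
```

In Pure’s architecture, the retrieval side of this loop runs against its all-flash storage, with Nvidia’s NeMo Retriever microservices and GPUs handling the embedding and generation stages.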

In a supporting quote supplied by Pure, ESG principal analyst Mike Leone said: “Rather than investing valuable time and resources in building an AI architecture from scratch, Pure’s proven frameworks not only mitigate the risk of expensive project delays but also guarantee a high return on investment for AI team expenditures like GPUs.”

Read more in a Pure blog – Optimize GenAI Applications with Retrieval-augmented Generation from Pure Storage and Nvidia – which should be live from 10pm GMT/3pm PT, March 18.

Bootnote

Nvidia’s OVX systems are designed to build virtual worlds using 3D software applications and to operate immersive digital twin simulations in Nvidia’s Omniverse Enterprise environment. They are separate from Nvidia’s AI-focused DGX GPU servers.