DataStax builds RAGStack to counter GenAI hallucinations


DataStax is adding a retrieval-augmented generation (RAG) capability to RAGStack, through a partnership with LlamaIndex, to help ensure that GenAI applications using Astra DB suffer fewer hallucinations.

Calling itself a GenAI data company, DataStax provides Astra DB as a service. Astra DB is a scale-out, cloud-native, Cassandra-based NoSQL database that supports vector embeddings. Generative AI software such as ChatGPT uses vector embeddings and can produce imaginary results – hallucinations – which can be countered by retrieving trusted information to augment the generation process, a technique called retrieval-augmented generation (RAG). LlamaIndex provides a framework that connects specific data sources, for information retrieval, to a GenAI large language model (LLM).

Davor Bonaci, DataStax EVP and CTO, stated: “By incorporating LlamaIndex into RAGStack, we are providing developers with a comprehensive GenAI stack that simplifies the complexities of RAG implementation, while offering long-term support and compatibility assurance.”

GenAI apps use LLM technology. A Pinecone blog makes four points about LLMs:

  • They are static – LLMs are “frozen in time” and lack up-to-date information. It is not feasible to update their gigantic training data sets.
  • They lack domain-specific knowledge – LLMs are trained for generalized tasks, meaning they do not know your company’s private data.
  • They function as “black boxes” – it’s not easy to understand which sources an LLM considered when it arrived at its conclusions.
  • They are inefficient and costly to produce – few organizations have the financial and human resources to produce and deploy foundation models.

If enterprise GenAI chatbots produce false or inadequate results, people will stop using them. Enterprises cannot train LLMs from scratch on both their own and general data; it is too costly and time-consuming. Adding reliable external data sources to an existing LLM can fill gaps in the GenAI’s knowledge base and generate results that are more complete, accurate, and timely.
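The RAG pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not DataStax or LlamaIndex code: the document store, keyword-overlap scoring, and function names are all hypothetical stand-ins for a real vector store and embedding model.

```python
# Minimal sketch of the RAG pattern: retrieve relevant text from an
# external store, then prepend it to the prompt so the LLM answers
# from current, domain-specific facts instead of its frozen training
# data. Scoring here is naive keyword overlap, purely for illustration.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the best top_k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with the retrieved context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return (f"Answer using only the context below.\n"
            f"Context:\n{ctx}\n\nQuestion: {query}")

docs = [
    "Product X ships in Q3 with vector search support.",
    "The cafeteria opens at 8am.",
]
question = "When does Product X ship?"
prompt = build_prompt(question, retrieve(question, docs))
```

The resulting prompt carries the retrieved facts to the LLM, which is what lets RAG sidestep the static-training-data problem the bullet list identifies.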

The LlamaIndex RAG framework handles ingesting, indexing, and querying external data when building generative AI apps, enabling DataStax-based GenAI apps to use data external to the LLM – an organization’s own data. For example, it can add a company’s own information store to a GenAI app so that marketing materials reflect current product availability and support responses are more accurate.
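The three stages named above – ingest, index, query – can be mirrored with a toy in-memory vector index. In a real RAGStack deployment, LlamaIndex would drive an embedding model and an Astra DB vector store; the bag-of-words "embedding" and the `VectorIndex` class below are illustrative assumptions, not LlamaIndex or Astra DB APIs.

```python
# Toy sketch of the ingest -> index -> query flow LlamaIndex performs
# against a vector database. Embeddings here are sparse word-count
# vectors compared by cosine similarity, standing in for a real
# embedding model and Astra DB's vector search.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorIndex:
    def __init__(self):
        self.store = []                          # (embedding, text) pairs

    def ingest(self, text: str):
        self.store.append((embed(text), text))   # ingest + index in one step

    def query(self, question: str) -> str:
        q = embed(question)                      # nearest neighbor by cosine
        return max(self.store, key=lambda item: cosine(q, item[0]))[1]

index = VectorIndex()
index.ingest("product roadmap: version 2 adds vector search")
index.ingest("support policy: tickets answered within one day")
hit = index.query("when are support tickets answered")
```

The point of the sketch is the division of labor: the framework turns an organization's documents into indexed vectors once, then answers each query by similarity search rather than retraining the LLM.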

Jerry Liu, co-founder and CEO of LlamaIndex, added: “Together, we’re reshaping the RAG landscape by offering a simplified journey for not only enterprises but also developers looking to put GenAI applications into production with ease.”

RAG can improve a GenAI app’s results, as shown in a Stack Overflow blog, but, as the blog notes, “the integration of external knowledge introduces increased computational complexity, latency, and prompt complexity, potentially leading to longer inference times, higher resource utilization, and longer development cycles.”

DataStax says the inclusion of LlamaIndex’s enhanced indexing and parsing capabilities in RAGStack enables users to use LlamaIndex alone, or in combination with the LangChain GenAI app-building framework and its ecosystem, including LangServe, LangChain Templates, and LangSmith.

DataStax is also previewing LlamaIndex’s LlamaParse API, through which PDFs can be used in RAG processing. It improves data extraction from PDF tables by running recursive retrievals – in other words, it helps turn a PDF document into vector embeddings. LlamaParse only supports PDF files at present but will likely be extended to other formats.
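The recursive-retrieval idea can be illustrated with a toy two-level structure: a query first matches a parent "summary" node (such as a table's description), then is re-run against that node's children (the table's rows). The `Node` class, scoring, and data below are hypothetical; real LlamaParse output and LlamaIndex node objects look different.

```python
# Toy illustration of recursive retrieval over a parsed PDF table:
# match a parent summary node first, then drill into its child rows.
# Structure and scoring are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list["Node"] = field(default_factory=list)

def score(query: str, text: str) -> int:
    """Naive relevance: shared lowercase words between query and text."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def recursive_retrieve(query: str, nodes: list[Node]) -> str:
    """Pick the best node; if it has children, recurse into them."""
    best = max(nodes, key=lambda n: score(query, n.text))
    if best.children:
        return recursive_retrieve(query, best.children)
    return best.text

table = Node(
    "Pricing table for widgets by region",
    children=[Node("EU region: widget price 10 euros"),
              Node("US region: widget price 12 dollars")])
prose = Node("Company history and founding story")

answer = recursive_retrieve("widget price US region", [table, prose])
```

Drilling down through a summary node is what lets table cells – which carry little context on their own – be retrieved and embedded accurately.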

There are DataStax RAG pipeline notebook examples showing how to use LlamaIndex and LlamaParse with Astra DB. LlamaIndex is available on GitHub, as is LlamaParse.