Tuesday, April 1, 2025

The vectors generated by embedding models are often stored in a specialized vector database. Vector databases are optimized for storing and retrieving vector data efficiently. Like traditional databases, vector databases can manage permissions, metadata, and data integrity, ensuring secure and organized access to information. They also tend to include update mechanisms, so newly added texts are indexed and ready to use quickly.
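The core operations a vector database provides can be sketched with a toy in-memory store. This is a simplified stand-in, not a real vector database: the class name, the two-dimensional vectors, and the example texts are all illustrative, and a production system would use approximate-nearest-neighbor indexes rather than a linear scan.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class ToyVectorStore:
    """In-memory stand-in for a vector database: stores (vector, text, metadata)."""

    def __init__(self):
        self.records = []

    def upsert(self, vector, text, metadata=None):
        # Newly added texts are indexed immediately and ready to query.
        self.records.append((vector, text, metadata or {}))

    def search(self, query_vector, top_k=1):
        # Linear scan over all records; real databases use ANN indexes here.
        scored = [(cosine_similarity(query_vector, v), t) for v, t, _ in self.records]
        scored.sort(key=lambda s: s[0], reverse=True)
        return scored[:top_k]

store = ToyVectorStore()
store.upsert([1.0, 0.0], "RAG stands for Retrieval-Augmented Generation.")
store.upsert([0.0, 1.0], "Vector databases index embeddings for similarity search.")
best = store.search([0.9, 0.1], top_k=1)[0][1]
```

Because the query vector `[0.9, 0.1]` points mostly in the direction of the first record's vector, the search returns the first text.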

The difference that a vector database and Retrieval-Augmented Generation (RAG) make is easier to explain with an example. When a chatbot powered by the Llama 2 LLM is asked about an acronym that was not part of its training data, it tends to guess, responding with an incorrect expansion and elaborating on what that expansion might mean. It does not even hint that it might be making things up; this is often referred to as hallucination. But if a RAG system is set up with access to documentation that explains what the acronym stands for, the relevant information is indexed and becomes part of the vector database, and the same prompt now yields pertinent information. With RAG, the LLM provides correct answers.

When the prompt is supplied with the relevant documents that contain an answer, which is referred to as augmenting the prompt, the LLM can draw on that retrieved context to produce answers that are more compelling, coherent, and knowledgeable, as opposed to the hallucinations described above. By automating this retrieval step, we can make chat responses satisfactory every time. This requires the additional step of building a retrieval system backed by a vector database, and it may also involve extra data processing and management of the generated vectors.

RAG has further benefits. It lets the LLM consolidate multiple data sources into a readable output tailored to the user's prompt. RAG applications can also incorporate proprietary data, which sets them apart from the public data that most LLMs are trained on, and that data can be kept up to date so the LLM is not restricted to the point in time at which it was trained. RAG reduces hallucinations and allows the LLM to provide citations and query statistics, making its processing more transparent to users. And, as with all retrieval systems, fine-grained data access control brings its own advantages.
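The augmentation loop described above can be sketched end to end: embed the query, retrieve the closest documents, and prepend them to the prompt before it reaches the LLM. The keyword-counting "embedding" and the document texts here are toy assumptions standing in for a real embedding model and corpus.

```python
def embed(text):
    # Stand-in embedding: counts of two hypothetical topic keywords.
    # A real system would call an embedding model instead.
    return [text.lower().count("rag"), text.lower().count("vector")]

DOCUMENTS = [
    "RAG stands for Retrieval-Augmented Generation.",
    "A vector database stores embeddings for fast similarity search.",
]

def retrieve(query, top_k=1):
    # Rank documents by a toy dot-product similarity to the query embedding.
    q = embed(query)
    scored = sorted(
        DOCUMENTS,
        key=lambda d: sum(a * b for a, b in zip(q, embed(d))),
        reverse=True,
    )
    return scored[:top_k]

def augment_prompt(query):
    # Prepend retrieved context so the LLM answers from documentation
    # rather than guessing -- the augmentation step described above.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = augment_prompt("What does RAG stand for?")
```

The augmented prompt now carries the document that defines the acronym, so even a model that never saw it during training can answer correctly.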

There are four steps for building a Retrieval-Augmented Generation (RAG) system:

1. Data Augmentation

a. Objective: Prepare data for a real-time knowledge base and contextualization in LLM queries by populating a vector database.

b. Process: Integrate disparate data using connectors, transform and refine raw data streams, and create vector embeddings from unstructured data. This step ensures that the latest version of proprietary data is instantly accessible for GenAI applications.
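A minimal sketch of this ingestion step, assuming a simple fixed-size chunking strategy and a placeholder embedding function (a real pipeline would use source connectors and an embedding model):

```python
def chunk_text(text, max_words=8):
    # Split a raw document into fixed-size word chunks before embedding.
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(chunk):
    # Placeholder embedding: (word count, character count).
    # A real pipeline would call an embedding model here.
    return [float(len(chunk.split())), float(len(chunk))]

def ingest(documents):
    # Transform each source document into (embedding, chunk) records that
    # a vector database could index, keeping the data query-ready.
    records = []
    for doc in documents:
        for chunk in chunk_text(doc):
            records.append((embed(chunk), chunk))
    return records

records = ingest(["The quick brown fox jumps over the lazy dog near the river bank."])
```

Each record pairs an embedding with its source chunk, which is exactly the shape a vector database expects to index.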

2. Inference

a. Objective: Connect relevant information with each prompt, contextualizing user queries and ensuring GenAI applications handle responses accurately.

b. Process: Continuously update the vector store with fresh sensor data. When a user prompt comes in, enrich and contextualize it in real-time with private data and data retrieved from the vector store. Stream this information to an LLM service and pass the generated response back to the web application.
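This flow can be sketched with a continuously updated store and a stubbed LLM service. The sensor readings, the store layout, and the `llm_service` function are all hypothetical; a real deployment would stream the enriched prompt to an actual model API.

```python
import time

vector_store = []  # (timestamp, reading) pairs; stands in for the vector store

def update_store(reading):
    # Continuously append fresh sensor data so retrieval sees the latest state.
    vector_store.append((time.time(), reading))

def contextualize(prompt, top_k=2):
    # Enrich the user prompt in real time with the most recent store entries
    # before streaming it to the LLM service.
    recent = [r for _, r in sorted(vector_store)[-top_k:]]
    return f"Context: {'; '.join(recent)}\nUser: {prompt}"

def llm_service(augmented_prompt):
    # Hypothetical LLM call; a real system would invoke a model API here
    # and pass the generated response back to the web application.
    return f"[model answer grounded in -> {augmented_prompt.splitlines()[0]}]"

update_store("temperature=21C")
update_store("temperature=23C")
response = llm_service(contextualize("Is the room warming up?"))
```

The key point is the ordering: the store is updated continuously, and each incoming prompt is enriched with whatever is freshest at that moment.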

3. Workflows

a. Objective: Parse natural language, synthesize necessary information, and use reasoning agents to determine the next steps to optimize performance.

b. Process: Break down complex queries into simpler steps using reasoning agents, which interact with external tools and resources. This involves multiple calls to different systems and APIs, processed by the LLM to give a coherent response. Stream Governance ensures data quality and compliance throughout the workflow.
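A toy version of this decomposition, assuming hypothetical tool names and a hand-written planner (a real agent would have the LLM itself produce the plan):

```python
def plan(query):
    # A minimal "reasoning agent": map a complex query to simpler tool steps.
    # Real agents would use the LLM itself to produce this plan.
    steps = []
    if "revenue" in query:
        steps.append(("sales_api", "fetch quarterly revenue"))
    if "weather" in query:
        steps.append(("weather_api", "fetch forecast"))
    steps.append(("llm", "synthesize a coherent answer from tool results"))
    return steps

TOOLS = {
    "sales_api": lambda task: "revenue: $1.2M",    # hypothetical external system
    "weather_api": lambda task: "forecast: rain",  # hypothetical external API
    "llm": lambda task: "combined summary",
}

def run_workflow(query):
    # Execute each planned step against its tool and collect the results
    # for the LLM to merge into one coherent response.
    return [TOOLS[tool](task) for tool, task in plan(query)]

results = run_workflow("Compare revenue against the weather")
```

The workflow fans out to multiple systems and funnels every result back through the LLM step, mirroring the multiple calls to different systems and APIs described above.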

4. Post-Processing

a. Objective: Validate LLM outputs and enforce business logic and compliance requirements to detect hallucinations and ensure trustworthy answers.

b. Process: Use frameworks like BPML or Morphir to perform sanity checks and other safeguards on data and queries associated with domain data. Decouple post-processing from the main application to allow different teams to develop independently. Apply complex business rules in real-time to ensure accuracy and compliance and use querying for deeper logic checks.
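A decoupled post-processing check can be sketched as a standalone validator. This toy function is not BPML or Morphir; it only illustrates the two kinds of safeguard named above: a simple hallucination check (flag sentences not supported by the retrieved context) and a hypothetical business rule.

```python
def validate(answer, context):
    # Post-processing safeguards, decoupled from the main application:
    # flag answers that assert facts absent from the retrieved context
    # (a crude hallucination check) and enforce one sample business rule.
    issues = []
    for sentence in answer.split("."):
        sentence = sentence.strip()
        if sentence and sentence not in context:
            issues.append(f"unsupported claim: {sentence}")
    if "guaranteed" in answer.lower():
        issues.append("business rule: no absolute guarantees in responses")
    return issues

context = "RAG stands for Retrieval-Augmented Generation."
good = validate("RAG stands for Retrieval-Augmented Generation.", context)
bad = validate("RAG is guaranteed to be perfect.", context)
```

Because the validator takes only the answer and its context, it can live in its own service and be developed by a separate team, as the decoupling above suggests.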

These steps collectively ensure that RAG systems provide accurate, relevant, and trustworthy responses by leveraging real-time data and domain-specific context.

#codingexercise: CodingExercise-03-31-2025.docx
