Retrieval-augmented generation, step by step

Published February 8, 2024
Typically, the use of large language models (LLMs) in the enterprise falls into two broad categories. The first one is where the LLM automates a language-related task such as writing a blog post, drafting an email, or improving the grammar or tone of an email you have already drafted. Most of the time these sorts of tasks do not involve confidential company information.

The second category involves processing internal company information, such as a collection of documents (PDFs, spreadsheets, presentations, etc.) that need to be analyzed, summarized, queried, or otherwise used in a language-driven task. Such tasks include asking detailed questions about the implications of a clause in a contract, for example, or creating a visualization of sales projections for an upcoming project launch.

There are two reasons why using a publicly available LLM such as ChatGPT might not be appropriate for processing internal documents. Confidentiality is the first and obvious one. But the second reason, also important, is that the training data of a public LLM did not include your internal company information. Hence that LLM is unlikely to give useful answers when asked about that information.

Enter retrieval-augmented generation, or RAG. RAG is a technique used to augment an LLM with external data, such as your company documents, that provide the model with the knowledge and context it needs to produce accurate and useful output for your specific use case. RAG is a pragmatic and effective approach to using LLMs in the enterprise.

In this article, I’ll briefly explain how RAG works, list some examples of how RAG is being used, and provide a code example for setting up a simple RAG framework.

How retrieval-augmented generation works

As the name suggests, RAG consists of two parts—one retrieval, the other generation. But that doesn’t clarify much. It’s more useful to think of RAG as a four-step process. The first step is done once, and the other three steps are done as many times as needed.

The four steps of retrieval-augmented generation (a minimal code sketch follows the list):

  1. Ingestion of the internal documents into a vector database. This step may require a lot of data cleaning, formatting, and chunking, but it is a one-time, up-front cost.
  2. A query in natural language, i.e., the question a human wants to ask the LLM.
  3. Augmentation of the query with data retrieved using similarity search of the vector database. This step is where context from the document store is added to the query before the query is submitted to the LLM. The prompt instructs the LLM to respond in the context of the additional content. The RAG framework does this work behind the scenes by means of a component called a retriever, which executes the search and appends the relevant context.
  4. Generation of the response to the augmented query by the LLM.
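
To make these steps concrete, here is a minimal, self-contained sketch of the flow in plain Python. The toy bag-of-words "embeddings," the sample chunks, and the helper functions are purely illustrative stand-ins for a real embedding model and vector database; the full example later in this article uses real components.

# A minimal sketch of the four RAG steps with toy embeddings and an in-memory store
import math
from collections import Counter

# Step 1: ingestion -- "embed" each document chunk and store the vectors (one-time cost)
chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by email from 9am to 5pm on weekdays.",
]

def embed(text):
    # Toy embedding: a bag-of-words count, standing in for a real embedding model
    return Counter(text.lower().split())

vector_store = [(embed(chunk), chunk) for chunk in chunks]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Step 2: the user's natural-language query
query = "How long do I have to return a product?"

# Step 3: augmentation -- retrieve the most similar chunk and prepend it as context
query_vec = embed(query)
best_chunk = max(vector_store, key=lambda item: cosine(item[0], query_vec))[1]
augmented_prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {query}"

# Step 4: generation -- the augmented prompt is what actually gets sent to the LLM
print(augmented_prompt)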

By focusing the LLM on the document corpus, RAG helps to ensure that the model produces relevant and accurate answers. At the same time, RAG helps to prevent arbitrary or nonsensical answers, which are commonly referred to in the literature as “hallucinations.”

From the user perspective, retrieval-augmented generation will seem no different from asking a question of any LLM with a chat interface, except that the system will know much more about the content in question and will give better answers.

The RAG process from the point of view of the user:

  1. A human asks a question of the LLM.
  2. The RAG system looks up the document store (vector database) and extracts content that may be relevant.
  3. The RAG system passes the user’s question, plus the additional content retrieved from the document store, to the LLM.
  4. Now the LLM “knows” to provide an answer that makes sense in the context of the content retrieved from the document store (vector database).
  5. The RAG system returns the response from the LLM. The RAG system can also provide links to the documents used to answer the query, as shown in the sketch after this list.
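
That last point, surfacing the documents behind an answer, falls naturally out of the retriever. The sketch below illustrates the idea using the same LangChain, FAISS, and OpenAI pieces as the full example later in this article; the file names and snippets are invented, and it assumes an OpenAI API key and the faiss-cpu package.

# Sketch: retrieving the chunks behind an answer, along with their sources
from langchain_community.vectorstores import FAISS
from langchain_openai.embeddings import OpenAIEmbeddings
from langchain_core.documents import Document

docs = [
    Document(page_content="Returns are accepted within 30 days of purchase.",
             metadata={"source": "refund-policy.txt"}),
    Document(page_content="Support hours are 9am to 5pm on weekdays.",
             metadata={"source": "support.txt"}),
]
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()

# Each retrieved Document carries its text and the source it came from
for doc in retriever.get_relevant_documents("How long do I have to return something?"):
    print(doc.metadata["source"], "->", doc.page_content)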

Use cases for retrieval-augmented generation

The use cases for RAG are varied and growing rapidly. These are just a few examples of how and where RAG is being used.

Search engines

Search engines have implemented RAG to provide more accurate and up-to-date featured snippets in their search results. Any application of LLMs that must keep up with constantly updated information is a good candidate for RAG.

Question-answering systems

RAG has been used to improve the quality of responses in question-answering systems. The retrieval-based model finds relevant passages or documents containing the answer (using similarity search), then generates a concise and relevant response based on that information.

E-commerce

RAG can be used to enhance the user experience in e-commerce by providing more relevant and personalized product recommendations. By retrieving and incorporating information about user preferences and product details, RAG can generate more accurate and helpful recommendations for customers.

Healthcare

RAG has great potential in the healthcare industry, where access to accurate and timely information is crucial. By retrieving and incorporating relevant medical knowledge from external sources, RAG can assist in providing more accurate and context-aware responses in healthcare applications. Such applications augment the information accessible to a human clinician, and it is the clinician, not the model, who ultimately makes the call.

Legal

RAG can be applied powerfully in legal scenarios, such as M&A, where complex legal documents provide context for queries, allowing rapid navigation through a maze of regulatory issues.

Introducing tokens and embeddings

Before we dive into our code example, we need to take a closer look at the document ingestion process. To be able to ingest docs into a vector database for use in RAG, we need to pre-process them as follows:

  1. Extract the text.
  2. Tokenize the text.
  3. Create vectors from the tokens.
  4. Save the vectors in a database.

What does this mean?

A document may be PDF or HTML or some other format, and we don’t care about the markup or the format. All we want is the content—the raw text.

After extracting the text, we need to divide it into chunks. The embedding model then tokenizes each chunk and maps it to a high-dimensional vector of floating point numbers, typically 768 or 1,024 dimensions or even larger. These vectors are called embeddings, because we are embedding a numerical representation of a chunk of text into a vector space.

There are many ways to convert text into vector embeddings. Usually this is done using a tool called an embedding model, which can be an LLM or a standalone encoder model. In our RAG example below, we’ll use OpenAI’s embedding model.
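
As a quick illustration, the sketch below embeds a single arbitrary sentence with OpenAI's default embedding model (the same model our example uses) and prints the size of the resulting vector. It assumes an OpenAI API key.

# Peek at an embedding vector using OpenAI's default embedding model
import os
from langchain_openai.embeddings import OpenAIEmbeddings

os.environ["OPENAI_API_KEY"] = "<your OpenAI API key>"

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("The quick brown fox jumps over the lazy dog.")

print(len(vector))   # dimensionality of the embedding, e.g. 1536 for OpenAI's default model
print(vector[:5])    # the first few floating point components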

A note about LangChain

LangChain is a framework for Python and TypeScript/JavaScript that makes it easier to build applications that are powered by language models. Essentially, LangChain allows you to chain together agents or tasks to interact with models, connect with data sources (including vector data stores), and work with your data and model responses.

LangChain is very useful for jumping into LLM exploration, but it is changing rapidly. As a result, it takes some effort to keep all the libraries in sync, especially if your application has a lot of moving parts with different Python libraries in different stages of evolution. A newer framework, LlamaIndex, has also emerged. LlamaIndex was designed specifically for LLM data applications, so it has more of an enterprise bent.

Both LangChain and LlamaIndex have extensive libraries for ingesting, parsing, and extracting data from a vast array of data sources, from text, PDFs, and email to messaging systems and databases. Using these libraries takes the pain out of parsing each different data type and extracting the content from the formatting. That itself is worth the price of entry.
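
For instance, here is a small sketch of how two different LangChain loaders present different formats through the same interface. The file paths are placeholders, and PyPDFLoader additionally requires the pypdf package.

# Sketch: different document loaders, same interface
from langchain_community.document_loaders import TextLoader, PyPDFLoader

for loader in (TextLoader("notes.txt"), PyPDFLoader("contract.pdf")):
    docs = loader.load()   # each loader returns a list of Document objects with page_content and metadata
    print(type(loader).__name__, len(docs), "document(s) loaded")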

A simple RAG example

We will build a simple “Hello World” RAG application using Python, LangChain, and an OpenAI chat model. Combining the linguistic power of an LLM with the domain knowledge of a single document, our little app will allow us to ask the model questions in English, and it will answer our questions by referring to content in our document.

For our document, we’ll use the text of President Biden’s February 7, 2023, State of the Union Address. If you want to try this at home, save the text of the speech to a plain text file named stateOfTheUnion2023.txt, which is the file name the code below expects.

A production-grade version of this app would allow private collections of documents (Word docs, PDFs, etc.) to be queried with English questions. Here we are building a simple system that does not preserve privacy, because it sends the document content to a public model. Please don’t run this app using private documents.

We will use the hosted embedding and language models from OpenAI, and the open-source FAISS (Facebook AI Similarity Search) library as our vector store, to demonstrate a RAG application end to end with the least possible effort. In a subsequent article we will build a second simple example using a fully local LLM with no data sent outside the app. Using a local model involves more work and more moving parts, so it is not the ideal first example.

To build our simple RAG system we need the following components:

  1. A document corpus. Here we will use just one document.
  2. A loader for the document. This code extracts text from the document and pre-processes (tokenizes) it for generating an embedding.
  3. An embedding model. This model takes the pre-processed document and creates embeddings that represent the document chunks.
  4. A vector data store with an index for similarity searching.
  5. An LLM optimized for question answering and instruction following.
  6. A chat template for interacting with the LLM.

The preparatory steps:

pip install -U langchain
pip install -U langchain_community
pip install -U langchain_openai
pip install -U faiss-cpu

The source code for our RAG system:

# We start by fetching a document that loads the text of President Biden’s 2023 State of the Union Address

from langchain_community.document_loaders import TextLoader
loader = TextLoader('./stateOfTheUnion2023.txt')

from langchain.text_splitter import CharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai.embeddings import OpenAIEmbeddings
import os
os.environ["OPENAI_API_KEY"] =<you will need to get an API ket from OpenAI>

# We load the document using LangChain’s handy extractors, formatters, loaders, embeddings, and LLMs

documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# We use an OpenAI default embedding model
# Note the code in this example does not preserve privacy

embeddings = OpenAIEmbeddings()

# LangChain provides API functions to interact with FAISS

db = FAISS.from_documents(texts, embeddings)  

# We create a 'retriever' that knows how to interact with our vector database using an augmented context
# We could construct the retriever ourselves from first principles but it's tedious
# Instead we'll use LangChain to create a retriever for our vector database

retriever = db.as_retriever()
from langchain.agents.agent_toolkits import create_retriever_tool
tool = create_retriever_tool(
    retriever,
    "search_state_of_union",
    "Searches and returns documents regarding the state-of-the-union."
)
tools = [tool]

# We wrap an LLM (here OpenAI) with a conversational interface that can process augmented requests

from langchain.agents.agent_toolkits import create_conversational_retrieval_agent

# LangChain provides an API to interact with chat models

from langchain_openai.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0)
agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True)

input = "what is NATO?"
result = agent_executor.invoke({“input": input})

# Response from the model

input = "When was it created?"
result = agent_executor.invoke({“input": input})

# Response from the model 

The model’s response to our first question is quite accurate:

NATO stands for the North Atlantic Treaty Organization. It is an intergovernmental military alliance formed in 1949. NATO’s primary purpose is to ensure the collective defense of its member countries. It is composed of 30 member countries, mostly from North America and Europe. The organization promotes democratic values, cooperation, and security among its members. NATO also plays a crucial role in crisis management and peacekeeping operations around the world.

And the model’s response to the second question is exactly right:

NATO was created on April 4, 1949.

As we’ve seen, the use of a framework like LangChain greatly simplifies our first steps into LLM applications. LangChain is strongly recommended if you’re just starting out and you want to try some toy examples. It will help you get right to the meat of retrieval-augmented generation, meaning the document ingestion and the interactions between the vector database and the LLM, rather than getting stuck in the plumbing.

For scaling to a larger corpus and deploying a production application, a deeper dive into local LLMs, vector databases, and embeddings will be needed. Naturally, production deployments will involve much more nuance and customization, but the same principles apply. We will explore local LLMs, vector databases, and embeddings in more detail in future articles here.

Copyright © 2024 IDG Communications, Inc.
