
Building a simple RAG agent with LlamaIndex

DDD | Original | 2024-10-01


LlamaIndex is a framework for building context-augmented generative AI applications with LLMs.

What is context augmentation?

Context augmentation refers to a technique in which additional relevant information is supplied to an LLM alongside a query, improving the model's understanding of, and responses to, that query. The augmentation typically involves retrieving external data sources, such as documents or their embeddings, and attaching them to the model's input. The goal is to give the model the context it needs to produce more accurate and nuanced answers. Retrieval-augmented generation (RAG) is the most popular example of context augmentation.
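To make the idea concrete, here is a minimal, framework-free sketch of context augmentation: a toy keyword "retriever" selects the most relevant snippets and stuffs them into the prompt before it reaches the model. The documents, the ranking heuristic, and the prompt template are all illustrative assumptions, not LlamaIndex APIs:

```python
# Toy corpus; in a real app these would come from files, a database, etc.
documents = [
    "The Sun accounts for about 99.8% of the Solar System's mass.",
    "Paris is the capital of France.",
    "A day on Venus is longer than a year on Venus.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_augmented_prompt(query: str) -> str:
    """Attach retrieved snippets as context ahead of the question."""
    context = "\n".join(retrieve(query, documents))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

print(build_augmented_prompt("How long is a day on Venus?"))
```

A production system replaces the word-overlap heuristic with embedding similarity over a vector index, which is exactly what LlamaIndex automates below.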

What are agents?

Agents are automated reasoning and decision engines, powered by LLMs, that use tools to perform research, data extraction, web search, and other tasks. They range from simple use cases, such as question answering over your data, to agents that decide on and take actions in order to complete tasks.
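The core loop can be sketched without any LLM at all: below, a toy "agent" routes a query to one of two hypothetical tools using hard-coded keyword rules. Everything here (the tools, the parsing, the routing) is an illustrative stand-in; a real agent delegates the routing decision to the model itself, as we do with LlamaIndex shortly:

```python
def add(a: float, b: float) -> float:
    return a + b

def lookup_fact(topic: str) -> str:
    # Hypothetical knowledge base standing in for a retrieval tool.
    facts = {"venus": "A day on Venus is longer than a year on Venus."}
    return facts.get(topic.lower(), "No fact found.")

def toy_agent(query: str) -> str:
    """Route the query to a tool with crude keyword rules."""
    if "plus" in query:
        # Naive parsing: pull out the numbers, e.g. "what is 2 plus 3".
        nums = [float(t) for t in query.split() if t.replace(".", "").isdigit()]
        return str(add(nums[0], nums[1]))
    # Otherwise treat the last word as a fact-lookup topic.
    return lookup_fact(query.split()[-1])

print(toy_agent("what is 2 plus 3"))   # prints 5.0
print(toy_agent("tell me about venus"))
```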

In this post, we'll build a simple RAG agent using LlamaIndex.

Building a RAG agent

Installing dependencies

We'll be using Python to build a simple RAG agent with LlamaIndex. First, let's install the required dependencies:

pip install llama-index python-dotenv

Setting up LLM and loading documents

We'll be using OpenAI's gpt-4o-mini as the LLM. You need to put the API key (OPENAI_API_KEY) in an environment variables (.env) file. LlamaIndex also supports setting up a local LLM instead, if you prefer.
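Assuming a Unix-like shell, the expected project layout can be prepared as follows. The key placeholder and the sample fact file are illustrative; put your own documents in the data directory:

```shell
# Create the documents directory the code below reads from
mkdir -p data

# Store the OpenAI API key where python-dotenv can load it
echo "OPENAI_API_KEY=<your-key-here>" > .env

# Add a sample document to ingest (any .txt/.md/.pdf files work)
echo "A day on Venus is longer than a year on Venus." > data/space_facts.txt
```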

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.llms.openai import OpenAI
from dotenv import load_dotenv

# Load environment variables (e.g., OPENAI_API_KEY)
load_dotenv()

# Configure OpenAI model
Settings.llm = OpenAI(model="gpt-4o-mini")

# Load documents from the local directory
documents = SimpleDirectoryReader("./data").load_data()

# Create an index from documents for querying
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

  • First, we configure the LLM by instantiating OpenAI with the gpt-4o-mini model. You can switch to other available models or LLM providers depending on your needs.
  • Then, we use SimpleDirectoryReader to load documents from the local ./data directory. This reader scans through the directory, reads files, and structures the data for querying.
  • Next, we create a vector store index from the loaded documents, allowing us to perform efficient vector-based retrieval during query execution.

Creating custom functions for agent

Now, let's define some basic functions that the agent can use to perform tasks.

def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product"""
    return a * b

def add(a: float, b: float) -> float:
    """Add two numbers and return the sum"""
    return a + b
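Keep the names, docstrings, and type hints accurate: when a function is wrapped as a tool, LlamaIndex infers the tool's metadata from the function itself, and that metadata is all the agent "sees" when deciding what to call. A quick way to inspect what will be exposed, using only the standard library:

```python
import inspect

def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product"""
    return a * b

# The tool name, description, and parameter schema are derived
# from these three pieces of the function definition.
print(multiply.__name__)             # multiply
print(inspect.getdoc(multiply))      # Multiply two numbers and return the product
print(inspect.signature(multiply))   # (a: float, b: float) -> float
```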

Creating tools for the agent

Next, we'll create tools from the functions and the query engine we defined earlier. These tools act as utilities that the agent can leverage when handling different types of queries.

from llama_index.core.tools import FunctionTool, QueryEngineTool

# Wrap functions as tools
add_tool = FunctionTool.from_defaults(fn=add)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Create a query engine tool for document retrieval
space_facts_tool = QueryEngineTool.from_defaults(
    query_engine,
    name="space_facts_tool",
    description="A RAG engine with information about fun space facts."
)

  • The FunctionTool wraps the add and multiply functions and exposes them as tools, which the agent can call to perform calculations.
  • The QueryEngineTool wraps the query_engine so the agent can query and retrieve information from the vector store. We've named it space_facts_tool and given it a description indicating that it retrieves information about space facts. You can ingest any data and customize the tool's name and description to match it.

Creating the agent

We will now create the agent using ReActAgent. The agent will be responsible for deciding when to use the tools and how to respond to queries.

from llama_index.core.agent import ReActAgent

# Create the agent with the tools
agent = ReActAgent.from_tools(
    [multiply_tool, add_tool, space_facts_tool], verbose=True
)

This agent uses the ReAct (Reasoning + Acting) framework, which lets the model reason and act by invoking the given tools in a logical sequence. The agent is initialized with the tools we created, and the verbose=True flag outputs detailed information about how the agent reasons and executes tasks.

Running the agent

Finally, let's run the agent in an interactive loop where it processes user queries until we exit.

while True:
    query = input("Query: ")

    if query.strip() == "/bye":
        break

    response = agent.chat(query)
    print(response)
    print("-" * 10)

How the RAG agent works

  • When you ask a question related to the documents you ingested, the agent uses the space_facts_tool (the query engine tool) to retrieve the relevant information via the query_engine.
  • When you ask for calculations, the agent uses either add_tool or multiply_tool to perform those tasks.
  • The agent decides on-the-fly which tool to use based on the user query and provides the output.

