I have been reading a lot about RAG and AI agents, and with the release of models like DeepSeek V3 and DeepSeek R1, building efficient RAG systems seems to have become significantly more practical, with better retrieval accuracy, stronger reasoning, and more scalable architectures for real-world applications. More sophisticated retrieval mechanisms, richer fine-tuning options, and multi-modal capabilities are changing how AI agents interact with data. This raises the question of whether traditional RAG approaches are still the best way forward, or whether newer architectures can deliver more efficient and contextually aware solutions.
Retrieval-augmented generation (RAG) systems have revolutionized the way AI models interact with data by combining retrieval-based and generative approaches to produce more accurate and context-aware responses. With the advent of DeepSeek R1, an open-source model known for its efficiency and cost-effectiveness, building an effective RAG system has become more accessible and practical. In this article, we will build a RAG system using DeepSeek R1.
Table of contents
- What is DeepSeek R1?
- Benefits of Using DeepSeek R1 for RAG System
- Steps to Build a RAG System Using DeepSeek R1
- Code to Build a RAG System Using DeepSeek R1
- Conclusion
What is DeepSeek R1?
DeepSeek R1 is an open-source AI model developed to provide high-quality reasoning and retrieval capabilities at a fraction of the cost of proprietary models like OpenAI’s offerings. It is released under an MIT license, making it commercially viable and suitable for a wide range of applications. This model also exposes its chain of thought (CoT) during inference, whereas OpenAI’s o1 and o1-mini do not show any reasoning tokens.
To learn how DeepSeek R1 challenges the OpenAI o1 model, read: DeepSeek R1 vs OpenAI o1: Which One is Faster, Cheaper and Smarter?
Benefits of Using DeepSeek R1 for RAG System
Building a Retrieval-Augmented Generation (RAG) system using DeepSeek-R1 offers several notable advantages:
1. Advanced Reasoning Capabilities: DeepSeek-R1 emulates human-like reasoning by analyzing and processing information step by step before reaching conclusions. This approach enhances the system’s ability to handle complex queries, particularly in areas requiring logical inference, mathematical reasoning, and coding tasks.
2. Open-Source Accessibility: Released under the MIT license, DeepSeek-R1 is fully open-source, allowing developers unrestricted access to its model. This openness facilitates customization, fine-tuning, and integration into various applications without the constraints often associated with proprietary models.
3. Competitive Performance: Benchmark tests indicate that DeepSeek-R1 performs on par with, or even surpasses, leading models like OpenAI’s o1 in tasks involving reasoning, mathematics, and coding. This level of performance ensures that an RAG system built with DeepSeek-R1 can deliver high-quality, accurate responses across diverse and challenging queries.
4. Transparency in Thought Process: DeepSeek-R1 employs a “chain-of-thought” methodology, making its reasoning steps visible during inference. This transparency helps debug and refine the system while building user trust by providing clear insights into the decision-making process.
5. Cost-Effectiveness: The open-source nature of DeepSeek-R1 eliminates licensing fees, and its efficient architecture reduces computational resource requirements. These factors contribute to a more cost-effective solution for organizations looking to implement sophisticated RAG systems without incurring significant expenses.
Integrating DeepSeek-R1 into an RAG system provides a potent combination of advanced reasoning abilities, transparency, performance, and cost efficiency, making it a compelling choice for developers and organizations aiming to enhance their AI capabilities.
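Because R1’s chain of thought is emitted in the open, an application can decide whether to display it or strip it. When served through Ollama, R1-style models wrap their reasoning in `<think>...</think>` tags ahead of the final answer; a small helper (a sketch, assuming that tag convention) can separate the two:

```python
import re

def split_reasoning(response: str):
    """Separate an R1-style model's visible chain-of-thought from its answer.

    Assumes the reasoning is emitted inside <think>...</think> tags
    before the final answer text.
    """
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    return reasoning, answer

# Example with a mock R1-style response
raw = "<think>The user asks for 2 + 2. That is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(reasoning)  # The user asks for 2 + 2. That is 4.
print(answer)     # The answer is 4.
```

In a RAG application you might log the reasoning for debugging while showing users only the final answer.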
Steps to Build a RAG System Using DeepSeek R1
The script is a Retrieval-Augmented Generation (RAG) pipeline that:
- Loads and processes a PDF document by splitting it into pages and extracting text.
- Stores vectorized representations of the text in a database (ChromaDB).
- Retrieves relevant content using similarity search when a query is asked.
- Uses an LLM (DeepSeek model) to generate responses based on the retrieved text.
Install Prerequisites
- Download Ollama: Click here to download
- For Linux users: Run the following command in your terminal:
curl -fsSL https://ollama.com/install.sh | sh
After this, pull DeepSeek R1 (1.5B) using:
ollama pull deepseek-r1:1.5b
This will take a moment to download:
ollama pull deepseek-r1:1.5b
pulling manifest
pulling aabd4debf0c8... 100% ▕████████████████▏ 1.1 GB
pulling 369ca498f347... 100% ▕████████████████▏  387 B
pulling 6e4c38e1172f... 100% ▕████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕████████████████▏  148 B
pulling a85fe2a2e58e... 100% ▕████████████████▏  487 B
verifying sha256 digest
writing manifest
success
After doing this, open your Jupyter Notebook and start with the coding part:
1. Install Dependencies
Before running, the script installs the required Python libraries:
- langchain → A framework for building applications using Large Language Models (LLMs).
- langchain-openai → Provides integration with OpenAI services.
- langchain-community → Adds support for various document loaders and utilities.
- langchain-chroma → Enables integration with ChromaDB, a vector database.
2. Enter OpenAI API Key
To access OpenAI’s embedding model, the script prompts the user to securely enter their API key using getpass(). This prevents exposing credentials in plain text.
3. Set Up Environment Variables
The script stores the API key as an environment variable. This allows other parts of the code to access OpenAI services without hardcoding credentials, which improves security.
4. Initialize OpenAI Embeddings
The script initializes an OpenAI embedding model called "text-embedding-3-small". This model converts text into vector embeddings, which are high-dimensional numerical representations of the text’s meaning. These embeddings are later used to compare and retrieve similar content.
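The retrieval step later compares these vectors with cosine similarity. A toy sketch with made-up 3-dimensional vectors (text-embedding-3-small actually returns 1536-dimensional vectors) shows the idea:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" (real embedding models produce much higher dimensions)
query_vec = [0.9, 0.1, 0.0]
doc_vecs = {
    "doc_about_agents": [0.8, 0.2, 0.1],   # semantically close to the query
    "doc_about_cooking": [0.0, 0.1, 0.9],  # unrelated content
}
for name, vec in doc_vecs.items():
    print(name, round(cosine_similarity(query_vec, vec), 3))
```

The semantically related vector scores near 1, the unrelated one near 0, which is exactly the signal the vector database exploits during retrieval.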
5. Load and Split a PDF Document
A PDF file (AgenticAI.pdf) is loaded and split into pages. Each page’s text is extracted, giving smaller, more manageable text chunks instead of processing the entire document as a single unit.
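The chunking idea itself is simple and can be illustrated without a PDF. Below is a hypothetical fixed-size splitter with overlap (not the separator-aware splitter LangChain applies under the hood) that shows why overlap matters: text cut at a chunk boundary still appears whole in a neighboring chunk:

```python
def split_text(text: str, chunk_size: int = 40, overlap: int = 10):
    """Split text into fixed-size chunks with a small overlap, so content
    cut at a boundary still appears in full in at least one chunk."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "Agentic AI systems plan, call tools, and act with limited supervision."
for chunk in split_text(doc, chunk_size=30, overlap=8):
    print(repr(chunk))
```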
6. Create and Store a Vector Database
- The extracted text from the PDF is converted into vector embeddings.
- These embeddings are stored in ChromaDB, a high-performance vector database.
- The database uses cosine similarity, ensuring efficient retrieval of text with a high degree of semantic similarity.
7. Retrieve Similar Texts Using a Similarity Threshold
A retriever is created using ChromaDB, which:
- Searches for the top 3 most similar documents based on a given query.
- Filters results with a similarity score threshold of 0.3, so only documents scoring at least 0.3 on the cosine similarity scale qualify as relevant.
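ChromaDB computes the similarity scores; the selection rule this retriever configuration applies can be mimicked in plain Python (toy scores, hypothetical helper):

```python
def retrieve(scored_docs, k=3, score_threshold=0.3):
    """Mimic similarity_score_threshold retrieval: drop documents below
    the threshold, then return the top-k by similarity score."""
    relevant = [(doc, score) for doc, score in scored_docs if score >= score_threshold]
    relevant.sort(key=lambda pair: pair[1], reverse=True)
    return relevant[:k]

scored = [("doc_a", 0.82), ("doc_b", 0.45), ("doc_c", 0.31),
          ("doc_d", 0.12), ("doc_e", 0.05)]
print(retrieve(scored))  # doc_d and doc_e fall below the 0.3 threshold
```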
8. Query for Similar Documents
Two test queries are used:
- "What is the old capital of India?"
- No results were found, which indicates that the stored documents do not contain relevant information.
- "What is Agentic AI?"
- Successfully retrieves relevant text, demonstrating that the system can fetch meaningful context.
9. Build a RAG (Retrieval-Augmented Generation) Chain
The script sets up a RAG pipeline, which ensures that:
- Text retrieval happens before generating an answer.
- The model’s response is based strictly on retrieved content, preventing hallucinations.
- A prompt template is used to instruct the model to generate structured responses.
10. Load a Connection to an LLM (DeepSeek Model)
Instead of OpenAI’s GPT, the script loads DeepSeek-R1 (1.5B parameters) through the local Ollama server. This compact, reasoning-focused model is well suited to generating answers grounded in retrieved context.
11. Create a RAG-Based Chain
LangChain’s Retrieval module is used to:
- Fetch relevant content from the vector database.
- Format a structured response using a prompt template.
- Generate a concise answer with the DeepSeek model.
12. Test the RAG Chain
The script runs a test query:
"Tell the Leaders’ Perspectives on Agentic AI"
The system retrieves relevant information from the database, and the LLM generates a fact-based response strictly from that retrieved context.
Code to Build a RAG System Using DeepSeek R1
Here’s the code:
Install OpenAI and LangChain dependencies
!pip install langchain==0.3.11
!pip install langchain-openai==0.2.12
!pip install langchain-community==0.3.11
!pip install langchain-chroma==0.1.4
Enter Open AI API Key
from getpass import getpass
OPENAI_KEY = getpass('Enter Open AI API Key: ')
Setup Environment Variables
import os
os.environ['OPENAI_API_KEY'] = OPENAI_KEY
Open AI Embedding Models
from langchain_openai import OpenAIEmbeddings
openai_embed_model = OpenAIEmbeddings(model='text-embedding-3-small')
Create a Vector DB and persist on the disk
from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFLoader('AgenticAI.pdf')
pages = loader.load_and_split()
texts = [doc.page_content for doc in pages]

from langchain_chroma import Chroma
chroma_db = Chroma.from_texts(
    texts=texts,
    collection_name='db_docs',
    collection_metadata={"hnsw:space": "cosine"},  # Set distance function to cosine
    embedding=openai_embed_model
)
Similarity with Threshold Retrieval
similarity_threshold_retriever = chroma_db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 3, "score_threshold": 0.3}
)

query = "what is the old capital of India?"
top3_docs = similarity_threshold_retriever.invoke(query)
top3_docs
[]

query = "What is Agentic AI?"
top3_docs = similarity_threshold_retriever.invoke(query)
top3_docs
Load Connection to LLM
Build a RAG Chain
LangChain Syntax for RAG Chain
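To complete the pipeline, connect the retriever to the DeepSeek R1 model served by Ollama and compose the pieces with LangChain’s LCEL syntax. A minimal sketch, assuming the model was pulled via Ollama as above — the prompt wording, the format_docs helper, and the use of ChatOllama from langchain-community are assumptions, not the article’s exact code:

```python
# Load Connection to LLM: the locally served DeepSeek R1 model via Ollama
from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

deepseek = ChatOllama(model="deepseek-r1:1.5b")

# Build a RAG Chain: a prompt that forces the model to answer only
# from the retrieved context
rag_prompt = ChatPromptTemplate.from_template(
    """Answer the question using only the context below.
If the context does not contain the answer, say that you don't know.

Context:
{context}

Question:
{question}"""
)

def format_docs(docs):
    # Join the retrieved chunks into one context string
    return "\n\n".join(doc.page_content for doc in docs)

# LangChain Syntax for RAG Chain (LCEL): retrieve -> format -> prompt -> LLM
qa_rag_chain = (
    {
        "context": similarity_threshold_retriever | format_docs,
        "question": RunnablePassthrough(),
    }
    | rag_prompt
    | deepseek
    | StrOutputParser()
)

query = "Tell the Leaders' Perspectives on Agentic AI"
print(qa_rag_chain.invoke(query))
```

This requires the earlier cells (the retriever) and a running Ollama server with deepseek-r1:1.5b pulled; the response will include the model’s visible `<think>` reasoning ahead of the final answer.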
Check out our detailed articles on how DeepSeek works and how it compares with similar models:
- DeepSeek R1- OpenAI’s o1 Biggest Competitor is HERE!
- Building AI Application with DeepSeek-V3
- DeepSeek-V3 vs GPT-4o vs Llama 3.3 70B
- DeepSeek V3 vs GPT-4o: Which is Better?
- DeepSeek R1 vs OpenAI o1: Which One is Better?
- How to Access DeepSeek Janus Pro 7B?
Conclusion
Building a RAG system using DeepSeek R1 provides a cost-effective and powerful way to enhance document retrieval and response generation. With its open-source nature and strong reasoning capabilities, it is a great alternative to proprietary solutions. Businesses and developers can leverage its flexibility to create AI-driven applications tailored to their needs.
Want to build applications using DeepSeek? Check out our Free DeepSeek Course today!