
LangChain Retrieval: Efficient and Flexible Access to Documents

The retriever in the LangChain framework plays a crucial role, providing a flexible interface for returning documents based on unstructured queries. Unlike a vector database, a retriever does not need to store documents; its main function is to retrieve relevant information from large amounts of data. While vector databases can serve as the basis for retrievers, there are many retriever types, each customizable for a specific use case.

3 Advanced Strategies for Retrievers in LangChain

Learning Objectives

  • Understand the central role of retrievers in LangChain and how they enable efficient, flexible document retrieval for a variety of application needs.
  • Learn how LangChain's retrievers (from vector databases to multi-query and contextual compression) simplify access to relevant information.
  • This guide covers the retriever types in LangChain and explains how each is customized to optimize query processing and data access.
  • Dig into LangChain's retriever capabilities and explore tools for improving the accuracy and relevance of document retrieval.
  • Learn how custom LangChain retrievers adapt to specific needs, enabling developers to build highly responsive applications.
  • Explore LangChain's retrieval techniques, which integrate language models and vector databases for more accurate and efficient results.

Table of contents

  • Learning Objectives
  • Retrievers in LangChain
  • Using a vector database as a retriever
  • Using MultiQueryRetriever
    • Build a sample vector database
    • Simple usage
    • Custom prompts
  • Retrieval with contextual compression
    • Overview of contextual compression
  • Create a custom retriever
    • Interface
    • Example
  • Conclusion
  • Frequently Asked Questions

Retrievers in LangChain

A retriever takes a string query as input and returns a list of Document objects. This mechanism lets applications efficiently fetch relevant information, enabling sophisticated interactions with large datasets or knowledge bases.
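The contract is easiest to see in isolation. Below is a pure-Python sketch of the "string in, documents out" interface, using our own minimal `Document` stand-in and a toy keyword matcher; it is illustrative only, not LangChain code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    """Minimal stand-in for langchain_core.documents.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def retrieve(query: str, corpus: List[Document]) -> List[Document]:
    """Toy retriever: return documents sharing at least one word with the query."""
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.page_content.lower().split())]

corpus = [
    Document("LangChain retrievers return documents."),
    Document("Vector stores hold embeddings."),
]
print(len(retrieve("how do retrievers work", corpus)))  # → 1
```

Every retriever described below implements this same contract; they differ only in how they decide which documents are relevant.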

  1. Using a vector database as a retriever

A vector store retriever retrieves documents efficiently using their vector representations. It is a lightweight wrapper around a vector store class that conforms to the retriever interface and supports search methods such as similarity search and maximal marginal relevance (MMR).

To create a retriever from a vector database, use the .as_retriever method. For example, for a Pinecone vector store built from customer reviews, we can set it up as follows:

 from langchain_community.document_loaders import CSVLoader
from langchain_community.vectorstores import Pinecone
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

loader = CSVLoader("customer_reviews.csv")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
# Assumes an existing Pinecone index; the index name here is illustrative
vectorstore = Pinecone.from_documents(texts, embeddings, index_name="customer-reviews")
retriever = vectorstore.as_retriever()

We can now use this retriever to query related reviews:

 docs = retriever.invoke("What do customers think about the battery life?")

By default, the retriever uses similarity search, but we can specify MMR as the search type:

 retriever = vectorstore.as_retriever(search_type="mmr")
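MMR balances relevance to the query against redundancy among the results. A minimal sketch of the idea over plain embedding vectors (illustrative only, not LangChain's implementation) looks like this:

```python
import math
from typing import List

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def mmr(query_vec: List[float], doc_vecs: List[List[float]],
        k: int = 2, lambda_mult: float = 0.5) -> List[int]:
    """Greedy MMR: repeatedly pick the document that balances relevance to
    the query against similarity to documents already selected."""
    selected: List[int] = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        def score(i: int) -> float:
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max((cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Doc 0 and doc 1 are near-duplicates; doc 2 points in a different direction.
docs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
print(mmr([1.0, 0.0], docs, k=2, lambda_mult=0.3))  # → [0, 2]
```

With a low `lambda_mult` the second pick skips the near-duplicate in favor of the more diverse document, which is exactly the behavior `search_type="mmr"` buys you.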

Additionally, we can apply a similarity score threshold or use top-k to limit the number of results (note that score_threshold only applies with the "similarity_score_threshold" search type):

 retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 2, "score_threshold": 0.6},
)


Using a vector database as a retriever enhances document retrieval by ensuring efficient access to relevant information.

  2. Using MultiQueryRetriever

MultiQueryRetriever enhances distance-based vector database retrieval by addressing common limitations such as sensitivity to query wording and suboptimal embeddings. Using a large language model (LLM) to automate prompt tuning, it generates multiple queries for a given user input, each from a different angle. It then retrieves relevant documents for each generated query and combines the results to produce a richer set of candidate documents.
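The combining step can be pictured as a de-duplicated union of each sub-query's results. A rough sketch of that merge (illustrative only, not the library's internals; the document names are made up):

```python
from typing import Dict, List

def unique_union(results_per_query: List[List[str]]) -> List[str]:
    """Combine per-query result lists, keeping the first occurrence of each document."""
    seen: Dict[str, None] = {}  # a dict preserves insertion order
    for results in results_per_query:
        for doc in results:
            seen.setdefault(doc, None)
    return list(seen)

hits = [
    ["doc_a", "doc_b"],  # documents retrieved for query rewrite 1
    ["doc_b", "doc_c"],  # documents retrieved for query rewrite 2
]
print(unique_union(hits))  # → ['doc_a', 'doc_b', 'doc_c']
```

Each rewrite contributes documents the others miss, while overlapping hits are returned only once.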

Build a sample vector database

To demonstrate MultiQueryRetriever, let's create a vector store from the product descriptions in a CSV file:

 from langchain_community.document_loaders import CSVLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

# Load product descriptions
loader = CSVLoader("product_descriptions.csv")
data = loader.load()

# Split text into chunks
text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=50)
documents = text_splitter.split_documents(data)

# Create the vector store
embeddings = OpenAIEmbeddings()
vectordb = FAISS.from_documents(documents, embeddings)

Simple usage

To use MultiQueryRetriever, specify the LLM used for query generation:

 from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI

question = "What features do customers value in smartphones?"
llm = ChatOpenAI(temperature=0)
retriever_from_llm = MultiQueryRetriever.from_llm(
    retriever=vectordb.as_retriever(), llm=llm
)
unique_docs = retriever_from_llm.invoke(question)
len(unique_docs) # Number of unique documents retrieved


MultiQueryRetriever generates multiple queries, enhancing the diversity and relevance of retrieved documents.

Custom prompts

To control the generated queries, you can supply a custom PromptTemplate and an output parser:

 from typing import List

from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_core.output_parsers import BaseOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Custom output parser: one generated query per line
class LineListOutputParser(BaseOutputParser[List[str]]):
    def parse(self, text: str) -> List[str]:
        return list(filter(None, text.strip().split("\n")))

output_parser = LineListOutputParser()

# Custom prompt for query generation
QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""Generate five different versions of the question: {question}""",
)

llm = ChatOpenAI(temperature=0)
llm_chain = QUERY_PROMPT | llm | output_parser

# Initialize the retriever with the custom chain
retriever = MultiQueryRetriever(
    retriever=vectordb.as_retriever(), llm_chain=llm_chain, parser_key="lines"
)

unique_docs = retriever.invoke("What features do customers value in smartphones?")
len(unique_docs) # Number of unique documents retrieved


Using MultiQueryRetriever makes the retrieval process more effective, ensuring diverse and comprehensive results for user queries.

  3. Retrieval with contextual compression

Retrieving relevant information from a large collection of documents is challenging, especially when the specific queries users will make are unknown at ingestion time. Valuable insights are often buried in lengthy documents, leading to inefficient and costly calls to the language model (LLM) and less responsive answers. Contextual compression solves this by refining the retrieval process so that only information relevant to the user's query is returned.

Overview of contextual compression

The contextual compression retriever works by combining a base retriever with a document compressor. Instead of returning documents in full, it compresses them using the context provided by the query. This compression involves both shrinking the content of individual documents and filtering out irrelevant ones.

Implementation steps

  1. Initialize the base retriever: First set up a normal vector store retriever. For example, consider a news article about climate change policy:
 from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

# Load and split the article
documents = TextLoader("climate_change_policy.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Initialize the vector store retriever
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()
  2. Execute an initial query: Run a query to see what the base retriever returns; the results may mix relevant and irrelevant information.
 docs = retriever.invoke("What actions are being proposed to combat climate change?")
  3. Enhance retrieval with contextual compression: Wrap the base retriever in a ContextualCompressionRetriever and use an LLMChainExtractor to pull out the relevant content:
 from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

# Perform the compressed retrieval
compressed_docs = compression_retriever.invoke("What actions are being proposed to combat climate change?")

  4. View the compressed results: ContextualCompressionRetriever processes the initial documents and extracts only the information relevant to the query, optimizing the response.
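To inspect what the compressor kept, a small formatting helper is handy. The `Doc` class and `pretty_print_docs` below are our own illustrative stand-ins (not part of LangChain), shown here with made-up sample content; in practice you would pass `compressed_docs` from the step above:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    """Minimal stand-in for langchain_core.documents.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def pretty_print_docs(docs) -> str:
    """Join documents with a visible separator so compressed and
    uncompressed results are easy to compare side by side."""
    sep = "\n" + "-" * 40 + "\n"
    return sep.join(f"Document {i + 1}:\n{d.page_content}" for i, d in enumerate(docs))

sample = [Doc("Carbon pricing is proposed."), Doc("Reforestation targets are expanded.")]
print(pretty_print_docs(sample))
```

Printing the base retriever's results and the compressed results through the same helper makes the effect of compression easy to see.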

Create a custom retriever

Retrieval is essential in many LLM applications: the retriever's task is to fetch relevant documents for a user query. These documents are formatted into the LLM prompt so that the model can generate an appropriate response.

Interface

To create a custom retriever, extend the BaseRetriever class and implement the following:

  • _get_relevant_documents — retrieves documents relevant to the query (required)
  • _aget_relevant_documents — asynchronous implementation for native async support (optional)

Inheriting from BaseRetriever gives your retriever the standard Runnable functionality.

Example

Here is an example of a simple retriever:

 from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever

class ToyRetriever(BaseRetriever):
    """A simple retriever that returns the first k documents containing the user's query."""
    documents: List[Document]
    k: int

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        matching_documents = [
            doc for doc in self.documents if query.lower() in doc.page_content.lower()
        ]
        return matching_documents[:self.k]

# Example usage
documents = [
    Document(page_content="Dogs are great companions.", metadata={"type": "dog"}),
    Document(page_content="Cats are independent pets.", metadata={"type": "cat"}),
]

retriever = ToyRetriever(documents=documents, k=1)
result = retriever.invoke("dog")
print(result[0].page_content)


This implementation provides a simple way to retrieve documents based on user input, illustrating the core functionality of a custom retriever in LangChain.

Conclusion

In the LangChain framework, retrievers are powerful tools for accessing relevant information across many document types and use cases. By understanding and implementing different retriever types (such as vector store retrievers, MultiQueryRetriever, and contextual compression retrievers), developers can tailor document retrieval to the specific needs of their applications.

Each retriever type has unique advantages, from broadening queries with MultiQueryRetriever to trimming responses with contextual compression. Additionally, creating a custom retriever offers extra flexibility for special requirements that the built-in options may not meet. Mastering these techniques lets developers build more efficient, responsive applications that leverage the potential of language models and large datasets.

Frequently Asked Questions

Q1. What is the main function of a retriever in LangChain? A1. A retriever's main function is to fetch relevant documents for a query. This lets applications access the information they need from large datasets without storing the documents themselves.

Q2. What is the difference between a retriever and a vector database? A2. A vector database stores documents in a way that supports similarity-based search, while a retriever is an interface for fetching documents in response to queries. A vector database can back a retriever, but the retriever's role is focused on obtaining relevant information.

Q3. What is MultiQueryRetriever and how does it work? A3. MultiQueryRetriever improves search results by using a language model to create multiple variants of a query. This captures a wider range of documents that may match different phrasings of the question, enriching the retrieved information.

Q4. Why is contextual compression important? A4. Contextual compression optimizes retrieval by reducing each document to its relevant parts and filtering out unrelated documents. This is especially useful in large collections, where full documents may contain irrelevant details; compression saves resources and yields a more focused response.

Q5. What is required to set up MultiQueryRetriever? A5. You need a vector database for document storage, a language model (LLM) to generate multiple query perspectives, and, optionally, a custom prompt template to further tune query generation.
