
A Complete Guide to LangChain in Python

2025-02-10

LangChain: a powerful Python library for building, experimenting with, and analyzing language models and agents


Key points:

  • LangChain is a Python library that simplifies the creation, experimentation and analysis of language models and agents, providing a wide range of functions for natural language processing.
  • It allows the creation of multifunctional agents that are able to understand and generate text and can configure specific behaviors and data sources to perform various language-related tasks.
  • LangChain provides three types of models: Large Language Model (LLM), Chat Model and Text Embedding Model, each providing unique functionality for language processing tasks.
  • It also provides features such as splitting large texts into manageable chunks, linking multiple LLM calls together through chains to perform complex tasks, and integrating with many LLMs and AI services beyond OpenAI.

LangChain is a powerful Python library that enables developers and researchers to create, experiment with, and analyze language models and agents. It provides natural language processing (NLP) enthusiasts with a rich set of features, from building custom models to manipulating text data efficiently. In this comprehensive guide, we will dig into the basic components of LangChain and demonstrate how to take advantage of its power in Python.

Environment setup:

To follow along with this article, create a new folder and install LangChain and OpenAI using pip:

<code class="language-bash">pip3 install langchain openai</code>

Agents:

In LangChain, an agent is an entity that can understand and generate text. These agents can be configured with specific behaviors and data sources, and trained to perform various language-related tasks, making them a multi-functional tool for a variety of applications.

Creating a LangChain agent:

Agents can be configured to use "tools" to gather the data they need and formulate a good response. Please see the example below. It uses the Serp API (an internet search API) to search for information related to a question or input, and responds. It also uses the llm-math tool to perform mathematical operations—for example, converting units or finding the percentage change between two values:

<code class="language-python">from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["SERPAPI_API_KEY"] = "YOUR_SERP_API_KEY" # Get your Serp API key: https://serpapi.com/

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How much energy did wind turbines produce worldwide in 2022?")</code>

As you can see, after the basic imports and the initialization of the LLM (llm = OpenAI(model="gpt-3.5-turbo", temperature=0)), the code loads the tools the agent needs with tools = load_tools(["serpapi", "llm-math"], llm=llm). It then creates the agent with the initialize_agent function, passes it the specified tools, and gives it the ZERO_SHOT_REACT_DESCRIPTION agent type, which means it will have no memory of previous questions.

Agent test example 1:

Let's test this agent with the following input:

<code>"How much energy did wind turbines produce worldwide in 2022?"</code>

(Screenshot of the agent's verbose reasoning and final answer omitted.)

As you can see, it uses the following logic:

  • Search for "wind turbine energy production worldwide 2022" using the Serp internet search API
  • Analyze the best results
  • Extract any relevant numbers
  • Use the llm-math tool to convert 906 GW to joules, because we asked for energy, not power
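As a rough sanity check on that last step (an illustration only, not the agent's actual output): if we treat the 906 GW figure as average power sustained over the whole year—an assumption made purely for this sketch—the conversion to joules is plain arithmetic:

```python
# Illustrative power-to-energy conversion: treat 906 GW as average power
# held for one year. (Assumption for this sketch; the agent's own math
# and the real-world interpretation of the figure may differ.)
power_w = 906e9                      # 906 GW expressed in watts
seconds_per_year = 365 * 24 * 3600   # 31,536,000 seconds
energy_j = power_w * seconds_per_year
print(f"{energy_j:.3e} J")           # on the order of 10^19 joules
```

This is exactly the kind of unit juggling the llm-math tool spares us from doing by hand.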

Agent test example 2:

LangChain agents aren't limited to searching the internet. We can connect almost any data source (including our own) to a LangChain agent and ask it questions about the data. Let's try creating an agent trained on a CSV dataset.

Download this Netflix movies and TV shows dataset by Shivam Bansal on Kaggle and move it into your directory. Now add this code to a new Python file:

<code class="language-python">from langchain.llms import OpenAI
from langchain.agents.agent_types import AgentType
from langchain.agents import create_csv_agent
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

agent = create_csv_agent(
    OpenAI(temperature=0),
    "netflix_titles.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent.run("In how many movies was Christian Bale casted")</code>

This code calls the create_csv_agent function and uses the netflix_titles.csv dataset. The following figure shows our test.

(Screenshot of the agent's verbose output omitted.)

As shown above, its logic is to look for all occurrences of "Christian Bale" in the cast column.

We can also create a Pandas DataFrame agent like this:

<code class="language-python">from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
import pandas as pd
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
df = pd.read_csv("netflix_titles.csv")

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)

agent.run("In what year were the most comedy movies released?")</code>

If we run it, we will see the result as shown below.

(Screenshots of the agent's reasoning and its final answer omitted.)

These are just some examples. We can use almost any API or dataset with LangChain.

Models:

There are three types of models in LangChain: Large Language Model (LLM), Chat Model and Text Embedding Model. Let's explore each type of model with some examples.

Large Language Model:

LangChain provides a way to use large language models in Python to generate text output based on text input. It is not as complex as the chat model and is best suited for simple input-output language tasks. Here is an example using OpenAI:

<code class="language-python">from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.9)
print(llm("Come up with a rap name for Matt Nikonorov"))</code>

As shown above, it uses the gpt-3.5-turbo model to generate output for the provided input ("Come up with a rap name for Matt Nikonorov"). In this example, I set the temperature to 0.9 to make the LLM more creative. It came up with "MC MegaMatt." I'd give it a solid 9/10.

Chat Model:

Getting LLMs to come up with rap names is fun, but if we want more sophisticated answers and conversations, we need to step up our game and use chat models. Technically, how is a chat model different from a large language model? In the words of the LangChain documentation:

Chat models are a variation of language models. While chat models use language models under the hood, they use a slightly different interface. Rather than a "text in, text out" API, they use "chat messages" as the interface for input and output.

This is a simple Python chat model script:

<code class="language-python">from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

chat = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

messages = [
    SystemMessage(content="You are a friendly, informal assistant"),
    HumanMessage(content="Convince me that Djokovic is better than Federer"),
]

print(chat(messages).content)</code>

As shown above, the code first sends a SystemMessage telling the chatbot to be friendly and informal, and then it sends a HumanMessage telling the chatbot to convince us that Djokovic is better than Federer.

If you run this chatbot model, you will see the results shown below.

(Screenshot of the chat model's response omitted.)

Embeddings:

Embeddings provide a way to turn words and blocks of text into vectors of numbers that can then be related to other words or numbers. This may sound abstract, so let's look at an example:

<code class="language-python">from langchain.embeddings import OpenAIEmbeddings
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

embeddings = OpenAIEmbeddings()
embedding = embeddings.embed_query("Hi! It's time for the beach")
print(embedding)</code>

This will return a list of floating-point numbers: [0.022762885317206383, -0.01276398915797472, 0.004815981723368168, -0.009435392916202545, 0.010824492201209068, ...]. This is what an embedding looks like.
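To see how such vectors can be compared, here is a minimal sketch of cosine similarity, the measure commonly used to relate embeddings. The three-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tiny made-up vectors standing in for real embeddings
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
apple = [0.1, 0.2, 0.9]

print(cosine_similarity(king, queen))  # close to 1: similar meaning
print(cosine_similarity(king, apple))  # much lower: unrelated
```

Texts with related meanings end up with embeddings that point in similar directions, which is what makes the search-by-meaning use case below possible.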

Use cases for embedding models:

If we want to train a chatbot or LLM to answer questions related to our data or a specific sample of text, we need to use embeddings. Let's create a simple CSV file (embs.csv) with a "text" column containing three pieces of information (the first row is the one our test question will be about; the other two are arbitrary facts added for illustration):

<code>text
"Robert Wadlow was the tallest human ever"
"The Burj Khalifa is the tallest building in the world"
"Mount Everest is the tallest mountain on Earth"</code>

Now, here is a script that takes the question "Who was the tallest human ever?" and uses embeddings to find the correct answer in the CSV file. One simple approach, used below, is to compare the question's embedding against each row's embedding with cosine similarity:

<code class="language-python">from langchain.embeddings import OpenAIEmbeddings
import pandas as pd
import numpy as np
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

embeddings = OpenAIEmbeddings()

df = pd.read_csv("embs.csv")
row_embeddings = [embeddings.embed_query(text) for text in df["text"]]
question_embedding = embeddings.embed_query("Who was the tallest human ever?")

# Cosine similarity between the question and each row
def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

similarities = [cosine(question_embedding, emb) for emb in row_embeddings]
print(df["text"][int(np.argmax(similarities))])</code>

If we run this code, we will see it output "Robert Wadlow was the tallest human ever". The code finds the correct answer by getting the embedding of each piece of information and finding the one most relevant to the question "Who was the tallest human ever?". The power of embeddings!

Chunks:

LangChain models can't handle large texts all at once and use them to generate responses. This is where chunks and text splitting come in. Let's look at two simple ways to split our text data into chunks before feeding it into LangChain.

Splitting chunks by character:

To avoid abruptly cutting off chunks mid-thought, we can split our text by paragraphs, splitting at each occurrence of a newline or double newline:

<code class="language-python">from langchain.text_splitter import CharacterTextSplitter

# Any long text file works here; this example assumes a file named Nas.txt
with open("Nas.txt") as f:
    text = f.read()

text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)
chunks = text_splitter.split_text(text)
print(len(chunks))</code>

Recursively splitting chunks:

If we want to strictly split our text into chunks of a certain character length, we can use RecursiveCharacterTextSplitter:

<code class="language-python">from langchain.text_splitter import RecursiveCharacterTextSplitter

with open("Nas.txt") as f:
    text = f.read()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
)
chunks = text_splitter.split_text(text)
print(chunks)</code>

Chunk size and overlap:

When looking at the examples above, you may have wondered exactly what the chunk size and overlap parameters mean, and how they affect performance. This can be explained in two points:

  • Chunk size determines the number of characters in each chunk. The larger the chunk size, the more data each chunk holds and the longer LangChain takes to process it and produce an output, and vice versa.
  • Chunk overlap is the content shared between adjacent chunks so that they retain some common context. The higher the chunk overlap, the more redundant the chunks; the lower the overlap, the less context is shared between them. Typically, a good chunk overlap is between 10% and 20% of the chunk size, although the ideal overlap varies across text types and use cases.
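To make the two parameters concrete, here is a small pure-Python sketch (not LangChain's actual splitter, which is smarter about separators) that chunks a string by character count with an overlap:

```python
def chunk_text(text, chunk_size, chunk_overlap):
    """Split text into chunks of at most chunk_size characters, where each
    chunk shares its last chunk_overlap characters with the next chunk."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "abcdefghij" * 5  # 50 characters of stand-in text
big = chunk_text(text, chunk_size=20, chunk_overlap=4)    # fewer, bigger chunks
small = chunk_text(text, chunk_size=10, chunk_overlap=2)  # more, smaller chunks
print(big)
print(small)
```

Printing the two lists shows the trade-off directly: the larger chunk size yields fewer chunks, and the end of each chunk reappears at the start of the next one by exactly the overlap amount.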

Chains:

Chains are basically multiple LLM functions linked together to perform more complex tasks that couldn't be accomplished with a simple LLM input -> output. Let's look at a cool example:

<code class="language-python">from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["media", "topic"],
    template="What is a good title for a {media} about {topic}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({"media": "horror movie", "topic": "math"}))</code>

This code feeds two variables into its prompt and generates a creative answer (temperature=0.9). In this example, we asked it to come up with a good title for a horror movie about math. The output after running this code was "The Calculating Curse", but this doesn't really show the full power of chains.

Let's look at a more practical example:

<code class="language-python">from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# JSON schema describing the values we want to extract
json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The artist's name"},
        "genre": {"type": "string", "description": "The artist's music genre"},
        "debut": {"type": "string", "description": "The artist's debut album"},
        "debut_year": {"type": "integer", "description": "Release year of the debut album"},
    },
    "required": ["name", "genre", "debut", "debut_year"],
}

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that extracts information in structured formats."),
    ("human", "Use the given format to extract information from the following input: {input}"),
    ("human", "Make sure to answer in the correct format"),
])
chain = create_structured_output_chain(json_schema, llm, prompt)

with open("Nas.txt") as f:
    artist_info = f.read()

print(chain.run(artist_info))</code>

This code may seem confusing, so let's explain it step by step.

This code reads a short biography of Nas (the hip hop artist), extracts the following values from the text, and formats them as a JSON object:

  • Artist's name
  • Artist's music genre
  • The artist's first album
  • The release year of the artist's first album

In the prompt, we also specified "Make sure to answer in the correct format" so that we always get the output in JSON format. Here is the output of this code:

<code>{'name': 'Nas', 'genre': 'Hip Hop', 'debut': 'Illmatic', 'debut_year': 1994}</code>

By providing a JSON schema to the create_structured_output_chain function, we make the chain put its output into JSON format.

Beyond OpenAI:

Although I have been using OpenAI models as examples of LangChain's different functionalities, it isn't limited to OpenAI models. We can use LangChain with many other LLMs and AI services. (This is the complete list of LangChain's integrated LLMs.)

For example, we can use Cohere with LangChain. This is the documentation for the LangChain Cohere integration, but to provide a practical example, after installing Cohere using pip3 install cohere, we can write a simple Q&A code using LangChain and Cohere as follows:

<code class="language-python">from langchain.llms import Cohere
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["COHERE_API_KEY"] = "YOUR_COHERE_API_KEY"

llm = Cohere()
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question: {question}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("Are there pyramids in Mexico?"))  # illustrative question</code>

If we run this code, Cohere's model produces its answer to the question, just as the OpenAI models did in the earlier examples.

Conclusion:

In this guide, you have seen different aspects and functions of LangChain. Once you have mastered this knowledge, you can use LangChain's capabilities to perform NLP work, whether you are a researcher, developer or enthusiast.

You can find a repository on GitHub that contains all the images and Nas.txt files in this article.

Happy coding and experimenting with LangChain in Python!

