LangChain: A powerful Python library for building, experimenting with, and analyzing language models and agents
Core points:
- LangChain is a Python library that simplifies the creation, experimentation and analysis of language models and agents, providing a wide range of functions for natural language processing.
- It allows the creation of multifunctional agents that are able to understand and generate text and can configure specific behaviors and data sources to perform various language-related tasks.
- LangChain provides three types of models: Large Language Model (LLM), Chat Model and Text Embedding Model, each providing unique functionality for language processing tasks.
- It also provides features such as splitting large texts into easy-to-manage chunks, linking multiple LLM calls through chains to perform complex tasks, and integrating with various LLM and AI services beyond OpenAI.
LangChain is a powerful Python library that enables developers and researchers to create, experiment with, and analyze language models and agents. It offers natural language processing (NLP) enthusiasts a rich set of features, from building custom models to efficiently manipulating text data. In this comprehensive guide, we will dig into the basic components of LangChain and demonstrate how to harness its power in Python.
Environment settings:
To follow along with this article, create a new folder and install LangChain and OpenAI using pip:
pip3 install langchain openai
Agents:
In LangChain, an agent is an entity that can understand and generate text. These agents can be configured with specific behaviors and data sources, and trained to perform various language-related tasks, making them versatile tools for a variety of applications.
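Conceptually, an agent's tool use can be sketched in a few lines of plain Python (a toy illustration, not LangChain's actual agent loop; in a real agent, the LLM itself decides which tool to call):

```python
# Toy sketch of tool routing. Real agents let the LLM choose the tool;
# here we fake that decision with a simple keyword check.
def search_tool(query: str) -> str:
    return f"search results for: {query}"

def math_tool(expression: str) -> str:
    return str(eval(expression))  # toy only; never eval untrusted input

def toy_agent(user_input: str) -> str:
    # "Decide" which tool to use based on the input
    if any(ch.isdigit() for ch in user_input):
        return math_tool(user_input)
    return search_tool(user_input)

print(toy_agent("2 + 2"))                # 4
print(toy_agent("wind turbine output"))  # search results for: wind turbine output
```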
Creating a LangChain agent:
Agents can be configured to use "tools" to gather the data they need and formulate a good response. Take a look at the example below. It uses the Serp API (an internet search API) to search for information relevant to a question or input and respond with it. It also uses the llm-math tool to perform mathematical operations, for example to convert units or find the percentage change between two values:
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["SERPAPI_API_KEY"] = "YOUR_SERP_API_KEY"  # Get your Serp API key at https://serpapi.com/

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How much energy did wind turbines produce worldwide in 2022?")
As you can see, after the basic imports and the LLM initialization (llm = OpenAI(model="gpt-3.5-turbo", temperature=0)), the code loads the tools the agent needs with tools = load_tools(["serpapi", "llm-math"], llm=llm). It then uses the initialize_agent function to create the agent, hands it the specified tools, and gives it the ZERO_SHOT_REACT_DESCRIPTION agent type, which means it will have no memory of previous questions.
Agent test example 1:
Let's test this agent with the following input:
<code>"How much energy did wind turbines produce worldwide in 2022?"</code>
As you can see, it uses the following logic:
- Search for "wind turbine energy production worldwide 2022" using the Serp internet search API
- Analyze the best results
- Extract any relevant numbers
- Use the llm-math tool to convert 906 GW to joules, since we asked about energy, not power
Agent test example 2:
LangChain agents are not limited to searching the internet. We can connect almost any data source (including our own) to a LangChain agent and ask it questions about the data. Let's try creating an agent trained on a CSV dataset.
Download this Netflix movie and TV show dataset from SHIVAM BANSAL on Kaggle and move it to your directory. Now add this code to a new Python file:
from langchain.llms import OpenAI
from langchain.agents.agent_types import AgentType
from langchain.agents import create_csv_agent
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

agent = create_csv_agent(
    OpenAI(temperature=0),
    "netflix_titles.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run("In how many movies was Christian Bale casted")
This code calls the create_csv_agent function and uses the netflix_titles.csv dataset. The following figure shows our test.
As shown above, its logic is to look for all occurrences of "Christian Bale" in the cast column.
We can also create a Pandas DataFrame agent like this:
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
import pandas as pd
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

df = pd.read_csv("netflix_titles.csv")
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("In what year were the most comedy movies released?")
If we run it, we will see the result as shown below.
These are just some examples. We can use almost any API or dataset with LangChain.
Models:
There are three types of models in LangChain: Large Language Model (LLM), Chat Model and Text Embedding Model. Let's explore each type of model with some examples.
Large Language Model:
LangChain provides a way to use large language models in Python to generate text output based on text input. It is not as complex as the chat model and is best suited for simple input-output language tasks. Here is an example using OpenAI:
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.9)
print(llm("Come up with a rap name for Matt Nikonorov"))
As shown above, it uses the gpt-3.5-turbo model to generate output for the provided input ("Come up with a rap name for Matt Nikonorov"). In this example, I set the temperature to 0.9 to make the LLM more creative. It came up with “MC MegaMatt.” I gave it a 9/10 mark.
Chat Model:
Getting an LLM to come up with rap names is fun, but if we want more sophisticated answers and conversations, we need to step things up with a chat model. How does a chat model technically differ from a large language model? In the words of the LangChain documentation:
Chat models are a variant of language models. While chat models use language models under the hood, the interface they use is a bit different: rather than a "text in, text out" API, they use "chat messages" as the interface for their inputs and outputs.
Here is a simple Python chat model script (a minimal sketch matching the description below):

from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

chat = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.9)
messages = [
    SystemMessage(content="You are a friendly, informal assistant"),
    HumanMessage(content="Convince me that Djokovic is better than Federer"),
]
print(chat(messages))
As shown above, the code first sends a SystemMessage telling the chatbot to be friendly and informal, and then sends a HumanMessage telling it to convince us that Djokovic is better than Federer.
If you run this chatbot model, you will see the results shown below.
Embeddings:
Embeddings provide a way to turn the words and numbers in a block of text into vectors that can then be related to other words or numbers. This may sound abstract, so let's look at an example (a minimal sketch; the query text is just an example):

from langchain.embeddings import OpenAIEmbeddings
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

embeddings_model = OpenAIEmbeddings()
embedded_query = embeddings_model.embed_query("Who created the world wide web?")
print(embedded_query[:5])
This will return a list of floating-point numbers: [0.022762885317206383, -0.01276398915797472, 0.004815981723368168, -0.009435392916202545, 0.010824492201209068]. This is what an embedding looks like.
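What makes these vectors useful is that similar pieces of text get similar vectors, and we can measure that numerically with cosine similarity. A toy sketch with made-up three-dimensional vectors (real embeddings have far more dimensions):

```python
import math

# Made-up toy vectors; real embeddings come from a model and are much longer.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.9]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related words score close to 1; unrelated words score much lower.
print(cosine_similarity(king, queen) > cosine_similarity(king, banana))  # True
```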
Use cases for embedding models:
If we want to train a chatbot or an LLM to answer questions related to our data or to a specific text sample, we need embeddings. Let's create a simple CSV file (embs.csv) with a "text" column containing three pieces of information:
"text"
"Robert Wadlow was the tallest human ever"
"The Eiffel Tower is in Paris"
"The world's tallest building is the Burj Khalifa"
(The first row is the fact used in the example below; the other two rows are placeholders, and any short factual statements will do.)
Now, here is a script that uses embeddings to take the question "Who was the tallest human ever?" and find the correct answer in the CSV file (a reconstructed sketch: it embeds each row, embeds the question, and picks the closest match):

from langchain.embeddings import OpenAIEmbeddings
import numpy as np
import pandas as pd
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

embeddings_model = OpenAIEmbeddings()
df = pd.read_csv("embs.csv")

# Embed every piece of information and the question itself
row_embeddings = [embeddings_model.embed_query(text) for text in df["text"]]
question_embedding = embeddings_model.embed_query("Who was the tallest human ever?")

# Pick the row whose embedding is closest to the question's embedding
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

best = max(range(len(row_embeddings)), key=lambda i: cosine_similarity(question_embedding, row_embeddings[i]))
print(df["text"][best])
If we run this code, we will see it output "Robert Wadlow was the tallest human ever". The code finds the right answer by getting the embedding of each piece of information and finding the one most relevant to the question "Who was the tallest human ever?". The power of embeddings!
Chunks:
LangChain models cannot process long texts all at once and use them to generate responses. This is where chunking and text splitting come in. Let's look at two simple ways to split text data into chunks before feeding it to LangChain.
Splitting chunks by character:
To avoid abrupt breaks within chunks, we can split the text by paragraph, splitting it at each occurrence of a newline or double newline (a minimal sketch; the input file and chunk sizes are illustrative):

from langchain.text_splitter import CharacterTextSplitter

with open("Nas.txt") as f:
    text = f.read()

text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)
docs = text_splitter.create_documents([text])
print(docs[0])
Recursive splitting:
If we want to strictly split the text into chunks of a certain character length, we can use RecursiveCharacterTextSplitter (again a minimal sketch with illustrative parameters):

from langchain.text_splitter import RecursiveCharacterTextSplitter

with open("Nas.txt") as f:
    text = f.read()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
)
docs = text_splitter.create_documents([text])
print(docs)
Chunk size and overlap:
Looking at the examples above, you may wonder what exactly the chunk size and overlap parameters mean and how they affect performance. This can be explained in two points:
- Chunk size determines the number of characters in each chunk. The larger the chunk size, the more data is in each chunk, and the longer LangChain takes to process it and produce an output, and vice versa.
- Chunk overlap is the content shared between adjacent chunks so that they have some common context. The higher the chunk overlap, the more redundant the chunks; the lower it is, the less context the chunks share. Typically, a good chunk overlap is 10% to 20% of the chunk size, although the ideal overlap varies with the text type and the use case.
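The interaction between the two parameters can be illustrated with a few lines of plain Python (a toy sliding-window splitter, not the LangChain API):

```python
# Toy sliding-window splitter: each chunk is chunk_size characters long,
# and each new chunk starts (chunk_size - chunk_overlap) characters after
# the previous one, so consecutive chunks share chunk_overlap characters.
def chunk_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Here the overlap is 50% of the chunk size, so each chunk repeats half of the previous one; with the 10% to 20% suggested above, far less content would be duplicated.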
Chains:
Chains are basically multiple LLM functions linked together to perform more complex tasks that cannot be accomplished with a simple LLM input -> output. Let's look at a cool example (a minimal sketch; the prompt wording and variable names are illustrative):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["media", "topic"],
    template="Come up with a good title for a {media} about {topic}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({"media": "horror movie", "topic": "math"}))
This code feeds two variables into its prompt and formulates a creative answer (temperature=0.9). In this example, we asked it to come up with a good title for a horror movie about math. The output after running this code was "The Calculating Curse", but this doesn't really show the full power of chains.
Let's look at a more practical example (a minimal sketch; the exact schema field names are illustrative):

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

with open("Nas.txt") as f:
    artist_info = f.read()

json_schema = {
    "type": "object",
    "properties": {
        "name": {"description": "The artist's name", "type": "string"},
        "genre": {"description": "The artist's music genre", "type": "string"},
        "debut": {"description": "The artist's debut album", "type": "string"},
        "debut_year": {"description": "The release year of the debut album", "type": "integer"},
    },
    "required": ["name", "genre", "debut", "debut_year"],
}

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a world class algorithm for extracting information in structured formats."),
        ("human", "Use the given format to extract information from the following input: {input}"),
        ("human", "Tip: Make sure to answer in the correct format"),
    ]
)
chain = create_structured_output_chain(json_schema, llm, prompt)
print(chain.run(artist_info))
This code may seem confusing, so let's explain it step by step.
This code reads a short biography of Nas (the hip-hop artist) and extracts the following values from the text, formatting them into a JSON object:
- Artist's name
- Artist's music genre
- The artist's first album
- The release year of the artist's first album
In the prompt, we also specified "Make sure to answer in the correct format" so that we always get the output in JSON format. Here is the kind of output this code produces (Nas's debut album Illmatic was released in 1994):
{'name': 'Nas', 'genre': 'Hip Hop', 'debut': 'Illmatic', 'debut_year': 1994}
By providing the JSON schema to the create_structured_output_chain function, we make the chain put its output into JSON format.
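Because a model can occasionally drift from the requested format, it is worth validating the returned JSON before using it. A minimal stdlib-only sketch (the field names mirror the example above and are assumptions):

```python
import json

# The four fields we asked the chain for, with their expected types
# (illustrative names, matching the example schema above).
REQUIRED_FIELDS = {"name": str, "genre": str, "debut": str, "debut_year": int}

def is_valid_artist_record(raw: str) -> bool:
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(record, dict):
        return False
    # Every required field must be present and of the right type
    return all(isinstance(record.get(k), t) for k, t in REQUIRED_FIELDS.items())

print(is_valid_artist_record('{"name": "Nas", "genre": "Hip Hop", "debut": "Illmatic", "debut_year": 1994}'))  # True
print(is_valid_artist_record('not json'))  # False
```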
Beyond OpenAI:
Although I have been using OpenAI models to demonstrate LangChain's different features, LangChain is not limited to OpenAI models. We can use it with many other LLM and AI services. (Here is the complete list of LangChain's integrated LLMs.)
For example, we can use Cohere with LangChain. Here is the documentation for the LangChain Cohere integration. As a practical example, after installing Cohere with pip3 install cohere, we can write simple Q&A code using LangChain and Cohere like this (a minimal sketch; the question is illustrative):

from langchain.llms import Cohere
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["COHERE_API_KEY"] = "YOUR_COHERE_API_KEY"

llm = Cohere()
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this question: {question}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("What is the capital of France?"))
Running this code prints Cohere's answer to the question.
Conclusion:
In this guide, you have seen the different aspects and features of LangChain. Armed with this knowledge, you can use LangChain's capabilities in your NLP work, whether you are a researcher, a developer, or a hobbyist.
You can find a repository containing all the images and the Nas.txt file from this article on GitHub.
Happy coding and experimenting with LangChain in Python!
The above is the detailed content of A Complete Guide to LangChain in Python.
