
Building Powerful Chains and Agents in LangChain

In this comprehensive guide, we'll dive deep into the world of LangChain, focusing on constructing powerful chains and agents. We'll cover everything from understanding the fundamentals of chains to combining them with large language models (LLMs) and introducing sophisticated agents for autonomous decision-making.

1. Understanding Chains

1.1 What are Chains in LangChain?

Chains in LangChain are sequences of operations or tasks that process data in a specific order. They allow for modular and reusable workflows, making it easier to handle complex data processing and language tasks. Chains are the building blocks for creating sophisticated AI-driven systems.
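To make this concrete, here is a minimal single-step chain: one prompt template feeding one LLM call. This is only a sketch; the prompt text and the product variable are illustrative, not part of any LangChain default.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A single-step chain: the filled-in prompt template is sent to the LLM
llm = OpenAI(temperature=0.7)
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a name for a company that makes {product}.",
)
name_chain = LLMChain(llm=llm, prompt=prompt)

print(name_chain.run("eco-friendly water bottles"))

Larger chains are built by composing steps like this one, which is exactly what the custom chain in section 1.3 does.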

1.2 Types of Chains

LangChain offers several types of chains, each suited for different scenarios:

  1. Sequential Chains: These chains process data in a linear order, where the output of one step serves as the input for the next. They're ideal for straightforward, step-by-step processes.

  2. Map/Reduce Chains: These chains involve mapping a function over a set of data and then reducing the results to a single output. They're great for parallel processing of large datasets (see the sketch after this list).

  3. Router Chains: These chains direct inputs to different sub-chains based on certain conditions, allowing for more complex, branching workflows.
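As a quick illustration of the map/reduce pattern, here is a minimal sketch using LangChain's built-in summarize chain with chain_type="map_reduce". The document contents are placeholder text.

from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document

llm = OpenAI(temperature=0)

# Placeholder documents: each one is summarized independently (map),
# then the partial summaries are combined into a single summary (reduce)
docs = [
    Document(page_content="LangChain provides chains for composing LLM calls into workflows."),
    Document(page_content="Agents let an LLM decide which tools to call at runtime."),
]

map_reduce_chain = load_summarize_chain(llm, chain_type="map_reduce")
print(map_reduce_chain.run(docs))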

1.3 Creating Custom Chains

Creating custom chains involves defining specific operations or functions that will be part of the chain. Here's an example of a custom sequential chain:

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomChain:
    def __init__(self, llm):
        self.llm = llm
        self.steps = []

    def add_step(self, prompt_template):
        prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
        chain = LLMChain(llm=self.llm, prompt=prompt)
        self.steps.append(chain)

    def execute(self, input_text):
        for step in self.steps:
            input_text = step.run(input_text)
        return input_text

# Initialize the chain
llm = OpenAI(temperature=0.7)
chain = CustomChain(llm)

# Add steps to the chain
chain.add_step("Summarize the following text in one sentence: {input}")
chain.add_step("Translate the following English text to French: {input}")

# Execute the chain
result = chain.execute("LangChain is a powerful framework for building AI applications.")
print(result)

This example creates a custom chain that first summarizes an input text and then translates it to French.

2. Combining Chains and LLMs

2.1 Integrating Chains with Prompts and LLMs

Chains can be seamlessly integrated with prompts and LLMs to create more powerful and flexible systems. Here’s an example:

from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# First chain: Generate a topic
first_prompt = PromptTemplate(
    input_variables=["subject"],
    template="Generate a random {subject} topic:"
)
first_chain = LLMChain(llm=llm, prompt=first_prompt)

# Second chain: Write a paragraph about the topic
second_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short paragraph about {topic}:"
)
second_chain = LLMChain(llm=llm, prompt=second_prompt)

# Combine the chains
overall_chain = SimpleSequentialChain(chains=[first_chain, second_chain], verbose=True)

# Run the chain
result = overall_chain.run("science")
print(result)

This example creates a chain that generates a random science topic and then writes a paragraph about it.

2.2 Debugging and Optimizing Chain-LLM Interactions

To debug and optimize chain-LLM interactions, you can use the verbose parameter and custom callbacks:

from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomHandler(StdOutCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM started with prompt: {prompts[0]}")

    def on_llm_end(self, response, **kwargs):
        print(f"LLM finished with response: {response.generations[0][0].text}")

llm = OpenAI(temperature=0.7, callbacks=[CustomHandler()])
template = "Tell me a {adjective} joke about {subject}."
prompt = PromptTemplate(input_variables=["adjective", "subject"], template=template)
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

result = chain.run(adjective="funny", subject="programming")
print(result)

This example uses a custom callback handler to provide detailed information about the LLM's input and output.
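Another optimization lever is tracking token usage and cost. The sketch below uses LangChain's get_openai_callback context manager; the haiku prompt is only an illustrative workload.

from langchain.callbacks import get_openai_callback
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)
prompt = PromptTemplate(input_variables=["subject"], template="Write a haiku about {subject}.")
chain = LLMChain(llm=llm, prompt=prompt)

# Every OpenAI call made inside this block is counted by the callback
with get_openai_callback() as cb:
    chain.run(subject="autumn")
    print(f"Total tokens: {cb.total_tokens}")
    print(f"Total cost (USD): {cb.total_cost}")

Comparing token counts across prompt variants is a simple way to spot chains that are sending more context than they need.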

3. Introducing Agents

3.1 What are Agents in LangChain?

Agents in LangChain are autonomous entities that can use tools and make decisions to accomplish tasks. They combine LLMs with external tools to solve complex problems, allowing for more dynamic and adaptable AI systems.
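The key ingredient is the tool: a named function with a natural-language description that the LLM reads when deciding what to do next. Here is a minimal sketch; the get_word_length helper is a made-up example, not a LangChain built-in.

from langchain.agents import Tool

def get_word_length(word: str) -> str:
    # Tools return strings so the LLM can read the result as text
    return str(len(word))

word_length_tool = Tool(
    name="WordLength",
    func=get_word_length,
    description="Returns the number of characters in a single word.",
)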

3.2 Built-in Agents and Their Capabilities

LangChain provides several built-in agents, such as the zero-shot-react-description agent:

from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm)

agent = initialize_agent(
    tools, 
    llm, 
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run("What is the square root of the year Plato was born?")
print(result)

This example creates an agent that can use Wikipedia and perform mathematical calculations to answer complex questions.

3.3 Creating Custom Agents

You can create custom agents by defining your own tools and agent classes. This allows for highly specialized agents tailored to specific tasks or domains.

Here’s an example of a custom agent:

from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish
import re

# Define custom tools
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful for answering questions about current events"
    )
]

# Define a custom prompt template
template = """Answer the following questions as best you can:

{input}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought: To answer this question, I need to search for current information.
{agent_scratchpad}"""

class CustomPromptTemplate(StringPromptTemplate):
    template: str
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)

prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    input_variables=["input", "intermediate_steps"]
)

# Define a custom output parser
class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )

        action_match = re.search(r"Action: (\w+)", llm_output, re.DOTALL)
        action_input_match = re.search(r"Action Input: (.*)", llm_output, re.DOTALL)

        if not action_match or not action_input_match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")

        action = action_match.group(1).strip()
        action_input = action_input_match.group(1).strip(" ").strip('"')

        return AgentAction(tool=action, tool_input=action_input, log=llm_output)

# Create the custom output parser
output_parser = CustomOutputParser()

# Define the LLM chain
llm = OpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Define the custom agent
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=[tool.name for tool in tools]
)

# Create an agent executor
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
# Run the agent
result = agent_executor.run("What's the latest news about AI?")

print(result)

Conclusion

LangChain's chains and agents offer robust capabilities for constructing sophisticated AI-driven systems. When integrated with large language models (LLMs), they enable the creation of adaptable, smart applications designed to tackle a variety of tasks. As you progress through your LangChain journey, feel free to experiment with diverse chain types, agent setups, and custom modules to fully harness the framework's potential.
