In this comprehensive guide, we will dive into the world of LangChain, focusing on building powerful chains and agents. We will cover everything from the fundamentals of chains, to combining them with large language models (LLMs), to introducing sophisticated agents for autonomous decision-making.
A chain in LangChain is a sequence of operations or tasks that processes data in a specific order. Chains enable modular, reusable workflows, making complex data-processing and language tasks easier to manage. They are the building blocks of sophisticated AI-driven systems.
LangChain offers several types of chains, each suited to different scenarios:
Sequential chains: These process data in linear order, with the output of one step serving as the input to the next. They are ideal for simple, step-by-step workflows.
Map/Reduce chains: These map a function over a collection of data and then reduce the results to a single output. They are well suited to processing large datasets in parallel.
Router chains: These direct input to different sub-chains based on specific conditions, enabling more complex branching workflows.
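The three patterns above can be sketched in plain Python, with no LangChain or LLM calls involved; the step functions (`summarize`, `shout`) and the routing condition below are hypothetical stand-ins chosen purely for illustration:

```python
def sequential(steps, data):
    # Sequential chain: each step's output feeds the next step.
    for step in steps:
        data = step(data)
    return data

def map_reduce(map_fn, reduce_fn, items):
    # Map/Reduce chain: apply map_fn to every item, then fold the results.
    return reduce_fn([map_fn(item) for item in items])

def router(routes, default, data):
    # Router chain: pick a sub-chain based on a condition on the input.
    for condition, sub_chain in routes:
        if condition(data):
            return sub_chain(data)
    return default(data)

# Toy "steps" standing in for LLM calls:
summarize = lambda text: text.split(".")[0]  # keep the first sentence
shout = lambda text: text.upper()

print(sequential([summarize, shout], "hello world. extra."))       # HELLO WORLD
print(map_reduce(len, sum, ["a", "bb", "ccc"]))                    # 6
print(router([(str.isdigit, lambda s: int(s) * 2)], shout, "21"))  # 42
```

In real LangChain code each step would be an `LLMChain` rather than a plain function, but the data flow is the same.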
Building a custom chain involves defining the specific operations or functions that will make up the chain. Here is an example of a custom sequential chain:
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomChain:
    def __init__(self, llm):
        self.llm = llm
        self.steps = []

    def add_step(self, prompt_template):
        prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
        chain = LLMChain(llm=self.llm, prompt=prompt)
        self.steps.append(chain)

    def execute(self, input_text):
        for step in self.steps:
            input_text = step.run(input_text)
        return input_text

# Initialize the chain
llm = OpenAI(temperature=0.7)
chain = CustomChain(llm)

# Add steps to the chain
chain.add_step("Summarize the following text in one sentence: {input}")
chain.add_step("Translate the following English text to French: {input}")

# Execute the chain
result = chain.execute("LangChain is a powerful framework for building AI applications.")
print(result)
```
This example builds a custom chain that first summarizes the input text and then translates the summary into French.
Chains integrate seamlessly with prompts and LLMs to create more powerful, flexible systems. Here is an example:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# First chain: Generate a topic
first_prompt = PromptTemplate(
    input_variables=["subject"],
    template="Generate a random {subject} topic:"
)
first_chain = LLMChain(llm=llm, prompt=first_prompt)

# Second chain: Write a paragraph about the topic
second_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short paragraph about {topic}:"
)
second_chain = LLMChain(llm=llm, prompt=second_prompt)

# Combine the chains
overall_chain = SimpleSequentialChain(chains=[first_chain, second_chain], verbose=True)

# Run the chain
result = overall_chain.run("science")
print(result)
```
This example builds a chain that generates a random science topic and then writes a paragraph about it.
To debug and optimize chain-LLM interactions, you can use the verbose parameter and custom callbacks:
```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomHandler(StdOutCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM started with prompt: {prompts[0]}")

    def on_llm_end(self, response, **kwargs):
        print(f"LLM finished with response: {response.generations[0][0].text}")

llm = OpenAI(temperature=0.7, callbacks=[CustomHandler()])

template = "Tell me a {adjective} joke about {subject}."
prompt = PromptTemplate(input_variables=["adjective", "subject"], template=template)
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

result = chain.run(adjective="funny", subject="programming")
print(result)
```
This example uses a custom callback handler to surface detailed information about the LLM's inputs and outputs.
Agents in LangChain are autonomous entities that can use tools and make decisions to complete tasks. By combining LLMs with external tools to solve complex problems, they enable more dynamic and adaptive AI systems.
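The decide-act-observe loop an agent runs can be sketched without a real LLM. In the sketch below the `decide()` policy and the toy tools are hypothetical stand-ins for the model's tool-selection step:

```python
# Toy tools standing in for real integrations (search, calculator, etc.).
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy math tool
    "lookup": {"Plato": "born 428 BC"}.get,      # toy knowledge tool
}

def decide(question, observations):
    # A real agent asks the LLM which tool to call next; here we hard-code
    # a tiny policy: look up the fact first, then finish with it.
    if not observations:
        return ("lookup", "Plato")
    return ("finish", observations[-1])

def run_agent(question, max_steps=5):
    # The agent loop: decide on an action, run the tool, record the
    # observation, and repeat until the policy says "finish".
    observations = []
    for _ in range(max_steps):
        action, arg = decide(question, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))
    return None

print(run_agent("When was Plato born?"))  # born 428 BC
```

LangChain's agents follow this same loop, with the LLM playing the role of `decide()` and the tool registry supplied by you.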
LangChain provides several built-in agents, such as the zero-shot-react-description agent:
```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run("What is the square root of the year Plato was born?")
print(result)
```
This example creates an agent that can consult Wikipedia and perform math calculations to answer complex questions.
You can build custom agents by defining your own tools and agent classes, allowing highly specialized agents tailored to specific tasks or domains.
Here is an example of a custom agent:
```python
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish
import re

# Define custom tools
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful for answering questions about current events"
    )
]

# Define a custom prompt template
template = """Answer the following questions as best you can:

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought: To answer this question, I need to search for current information.
{agent_scratchpad}"""

class CustomPromptTemplate(StringPromptTemplate):
    template: str
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Render the intermediate steps into the agent scratchpad
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)

prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    input_variables=["input", "intermediate_steps"]
)

# Define a custom output parser
class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        action_match = re.search(r"Action: (\w+)", llm_output, re.DOTALL)
        action_input_match = re.search(r"Action Input: (.*)", llm_output, re.DOTALL)
        if not action_match or not action_input_match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = action_match.group(1).strip()
        action_input = action_input_match.group(1).strip(" ").strip('"')
        return AgentAction(tool=action, tool_input=action_input, log=llm_output)

# Create the custom output parser
output_parser = CustomOutputParser()

# Define the LLM chain
llm = OpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Define the custom agent
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=[tool.name for tool in tools]
)

# Create an agent executor
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

# Run the agent
result = agent_executor.run("What's the latest news about AI?")
print(result)
```
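To see what the output parser in this kind of agent actually does, here is a standalone sketch of the same regex logic applied to sample ReAct-style completions. The `parse` helper below mirrors the class above but runs without LangChain; it is an illustration, not the library's API:

```python
import re

def parse(llm_output):
    # Finished: extract everything after "Final Answer:".
    if "Final Answer:" in llm_output:
        return ("finish", llm_output.split("Final Answer:")[-1].strip())
    # Otherwise: pull out the tool name and its input with regexes.
    action = re.search(r"Action: (\w+)", llm_output)
    action_input = re.search(r"Action Input: (.*)", llm_output)
    if not action or not action_input:
        raise ValueError(f"Could not parse: {llm_output!r}")
    return ("action", action.group(1), action_input.group(1).strip().strip('"'))

sample = (
    "Thought: I need current information.\n"
    'Action: Search\nAction Input: "latest AI news"'
)
print(parse(sample))              # ('action', 'Search', 'latest AI news')
print(parse("Final Answer: 42"))  # ('finish', '42')
```

Because the agent stops generation at `\nObservation:`, each completion contains at most one Action/Action Input pair, which is why a single regex pass per field is enough.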
LangChain's chains and agents offer powerful capabilities for building sophisticated AI-driven systems. Integrated with large language models (LLMs), they enable adaptive, intelligent applications that can tackle a wide range of tasks. As you progress on your LangChain journey, feel free to experiment with different chain types, agent configurations, and custom modules to realize the framework's full potential.