Are scattered AI prompts slowing down your development? Discover how LangChain Hub can transform your workflow, giving JavaScript engineers seamless and efficient prompt management.
Imagine managing a project where crucial information is scattered across countless files. Frustrating, right? That's the reality developers face when working with AI prompts. LangChain Hub centralizes prompt management and transforms your workflow, much like GitHub does for code collaboration.
LangChain Hub provides an intuitive interface for uploading, browsing, pulling, collaborating on, versioning, and organizing prompts. This not only streamlines your workflow but also fosters collaboration and innovation, making it an essential tool.
LangChain Hub is a powerful tool designed to help JavaScript developers centralize, manage, and collaborate on AI prompts efficiently.
Explore prompts from other developers to gain fresh ideas and solutions. Learn new techniques, improve your existing prompts, and foster a collaborative environment.
LangChain Hub brings all your AI prompts under one roof, eliminating the chaos of scattered files and fragmented storage. With everything neatly organized in one place, managing your prompts has never been easier.
Thanks to its intuitive design, navigating LangChain Hub is a breeze. Uploading, browsing, and managing prompts is straightforward, boosting your productivity and minimizing the time spent learning the tool.
LangChain Hub makes it easy to share and collaborate on prompts with your team. This seamless sharing promotes innovation and collective problem-solving, making teamwork more efficient and effective.
With LangChain Hub's version control, you never lose a prompt iteration. You can easily revert to previous versions or track changes over time, ensuring you always have access to the best version of a prompt.
Find the prompt you need instantly with advanced search and filtering options. You can filter prompts by use case, type, language, and model, giving you quick access to the most relevant resources. These features save time and enhance your workflow, making prompt management more efficient and tailored to your specific project needs.
Easily tailor prompts to your specific project requirements. LangChain Hub's customization options ensure your prompts fit seamlessly into your development process, adapting to your unique needs.
Let's build a project that uses a prompt template from LangChain Hub to highlight its value.
We'll start with the demo project I created for the article Getting Started: LangSmith for JavaScript LLM Apps. While I encourage you to read that article, it isn't required to follow along here.
LANGCHAIN_PROJECT="langsmith-demo" # Name of your LangSmith project
LANGCHAIN_TRACING_V2=true # Enable advanced tracing features
LANGCHAIN_API_KEY=<your-api-key> # Your LangSmith API key
OPENAI_API_KEY=<your-openai-api-key> # Your OpenAI API key
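These variables typically live in a .env file at the project root. As a minimal sketch, assuming the demo loads its configuration with the dotenv package (not shown above), you could load and sanity-check them at the top of your entry script:

// Assumes a local .env file containing the variables above; if you export them
// from your shell instead, you can drop the dotenv import.
import "dotenv/config";

if (!process.env.LANGCHAIN_API_KEY || !process.env.OPENAI_API_KEY) {
  throw new Error("Missing LANGCHAIN_API_KEY or OPENAI_API_KEY; check your .env file.");
}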
The demo application answers the question "What is the capital of France?" in the voice of Mr. Burns from The Simpsons. To do this, we use the following prompt:
Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>
The prompt is currently hardcoded in the application, which is manageable for now. In a real application, however, this approach can quickly become unwieldy: as we add more steps and multiple prompts to the chain, it becomes messy and hard to maintain. So we'll move the prompt to LangChain Hub.
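For reference, a hardcoded setup typically looks something like the sketch below. The variable names match the ones referenced later in this article, but the exact wiring in the demo project may differ, so treat this as illustrative rather than the demo's actual source:

import { ChatPromptTemplate } from "@langchain/core/prompts";

// Hardcoded prompt: this is what we'll replace with a Hub-hosted version.
const ANSWER_CHAIN_SYSTEM_TEMPLATE = `Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>`;

const answerGenerationChainPrompt = ChatPromptTemplate.fromMessages([
  ["system", ANSWER_CHAIN_SYSTEM_TEMPLATE],
  ["human", "Please address the following inquiry:\n{input}"],
]);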
If you followed the steps above, you should already have a LangSmith account.
Head over to smith.langchain.com/hub and click "New Prompt".
Next, give the prompt a name, set its visibility (private by default), and fill in the description, README, use cases, language, and model. Note: the owner shown is "@kenzic"; this will be different for you. See the screenshot for the values.
Once you've created your prompt, you'll want to select the prompt type. For this task, we'll select "Chat Prompt".
Create a "System" message with the value:
Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>
Next, create a "Human" message with the value:
Please address the following inquiry:\n{input}
Before we commit this, we can test it out in the playground. To the right of the message chain, you will notice the section "Inputs" with the variables we specified in the messages. To confirm it's working as expected, I tested with the following:
context: The capital of France is Springfield. It was Paris but changed in 2024.
input: What is the capital of France
Once you have the Inputs defined, under Settings you'll want to select the model we're testing against. Select GPT-3.5-turbo. For this to work you'll need to add your OpenAI API key by clicking the "Secrets & API Keys" button. Great, now we're ready to test. Click the "Start" button and watch it generate the output. You should see something like:
Ah, yes, the capital of France, or should I say, Springfield! Paris may have been the capital in the past, but as of 2024, Springfield reigns supreme as the new capital of France. A change of this magnitude surely raises questions and eyebrows, but rest assured, the decision has been made and Springfield now holds the title of the capital of France. How utterly delightful!
Once we're happy with our prompt, we need to commit it. Simply click the "Commit" button!
Great, now that we have a finished prompt, we'll want to update our code to reference it instead of the hardcoded prompt template.
First, we need to import the hub function to pull our template into our code:
import * as hub from "langchain/hub";
Next, let's delete the ChatPromptTemplate in the code and replace it with:
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt");
Note: You can delete the ANSWER_CHAIN_SYSTEM_TEMPLATE variable too.
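For context, here is a minimal sketch of how the pulled prompt can be wired into the rest of the chain. The model, output parser, and chain shape are assumptions for illustration; your demo project's chain may be structured differently:

import * as hub from "langchain/hub";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Pull the chat prompt from LangChain Hub (replace [YOURORG] with your Hub handle).
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt");

// The pulled prompt is a runnable, so it pipes into a model and parser
// just like a locally defined ChatPromptTemplate.
const chain = answerGenerationChainPrompt
  .pipe(new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 }))
  .pipe(new StringOutputParser());

const answer = await chain.invoke({
  context: "The capital of France is Paris.",
  input: "What is the capital of France?",
});
console.log(answer);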
Finally, let's test it out! Run yarn start to execute the script. If everything works properly, you'll see output in the voice of Mr. Burns informing you that the capital of France is Paris.
If you want to take it a step further, you can lock your prompts by the version. To do this, simply append a colon and the version number to the end of the name like so:
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt:[YOURVERSION]");

// for me it looks like:
const answerGenerationChainPrompt = await hub.pull("kenzic/mr-burns-answer-prompt:d123dc92");
That's it!
We've explored how LangChain Hub centralizes prompt management, enhances collaboration, and integrates into your workflow. To improve your efficiency with LangChain Hub, consider diving deeper into the customization and integration possibilities.
LangChain Hub is more than a tool; it's a catalyst for innovation and collaboration in AI development. Embrace this revolutionary platform and elevate your JavaScript LLM applications to new heights.
Throughout this guide, we tackled how to:
- Create a new prompt in LangChain Hub with a name, visibility, description, and README
- Test a prompt against a model in the Hub playground before committing it
- Pull a Hub-hosted prompt into a JavaScript project with hub.pull
- Pin a prompt to a specific version by appending its commit hash to the name
Keep building and experimenting, and I'm excited to see how you'll push the boundaries of what's possible with AI and LangChain Hub!
To stay connected and share your journey, feel free to reach out through the following channels: