Are scattered AI prompts slowing down your development? Discover how LangChain Hub can transform your workflow, giving JavaScript engineers seamless, efficient prompt management.
Imagine managing a project where critical information is scattered across countless documents. Frustrating, right? That's the reality developers face when handling AI prompts. LangChain Hub centralizes prompt management and transforms your workflow, much like GitHub did for code collaboration.
LangChain Hub provides an intuitive interface for uploading, browsing, pulling, collaborating on, versioning, and organizing prompts. This not only streamlines your workflow but also fosters collaboration and innovation, making it an essential tool.
LangChain Hub is a powerful tool designed for JavaScript developers to centralize, manage, and collaborate on AI prompts efficiently.
Explore prompts from other developers to spark new ideas and solutions. Learn new techniques, improve your existing prompts, and foster a collaborative environment.
LangChain Hub brings all of your AI prompts under one roof, eliminating the chaos of scattered files and fragmented storage. With everything neatly organized in one place, managing your prompts has never been easier.
Thanks to its intuitive design, navigating LangChain Hub is a breeze. Uploading, browsing, and managing prompts is straightforward, boosting your productivity and minimizing the time spent learning the tool.
LangChain Hub makes it easy to share and collaborate on prompts with your team. This seamless sharing fosters innovation and collective problem-solving, making teamwork more efficient and effective.
With LangChain Hub's version control, you'll never lose a prompt iteration. You can easily roll back to a previous version or track changes over time, ensuring you always have access to the best version of a prompt.
Find the prompt you need instantly with advanced search and filtering options. You can filter prompts by use case, type, language, and model, giving you quick access to the most relevant resources. These features save time and streamline your workflow, making prompt management more efficient and tailored to your specific project needs.
Easily customize prompts to match your specific project requirements. LangChain Hub's customization options ensure your prompts fit seamlessly into your development process and adapt to your unique needs.
Let's build a project that uses a prompt template from LangChain Hub to highlight its value.
We'll start with the demo project I built for the article Getting Started: LangSmith for JavaScript LLM Apps. While I encourage you to read that article, it isn't required to follow along here.
LANGCHAIN_PROJECT="langsmith-demo" # Name of your LangSmith project
LANGCHAIN_TRACING_V2=true # Enable advanced tracing features
LANGCHAIN_API_KEY=<your-api-key> # Your LangSmith API key
OPENAI_API_KEY=<your-openai-api-key> # Your OpenAI API key
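For these variables to reach the app, they need to be loaded into the environment before any LangChain code runs. A minimal sketch, assuming the project reads them from a .env file via the dotenv package (the demo project may already wire this up for you):

// At the very top of your entry file, so LangSmith and OpenAI
// can read their keys from process.env before anything else runs.
import "dotenv/config";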
The demo app answers the question "What is the capital of France?" in the voice of Mr. Burns from The Simpsons. To do this, it uses the following prompt:
Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>
The prompt is currently hardcoded in the application, which is manageable for now. In a real-world application, however, this approach quickly becomes unwieldy: as we add more steps and more prompts to the chain, the code turns cluttered and hard to maintain. So let's move the prompt to LangChain Hub.
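For reference, the hardcoded setup looks roughly like the sketch below. This is a simplified approximation rather than the demo project's exact code, and the import path may vary with your LangChain version; only ANSWER_CHAIN_SYSTEM_TEMPLATE and answerGenerationChainPrompt are names the project actually uses.

import { ChatPromptTemplate } from "@langchain/core/prompts";

// System prompt stored directly in the source file
const ANSWER_CHAIN_SYSTEM_TEMPLATE = `Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>`;

// Prompt template assembled in code -- this is what we'll replace with a hub.pull()
const answerGenerationChainPrompt = ChatPromptTemplate.fromMessages([
  ["system", ANSWER_CHAIN_SYSTEM_TEMPLATE],
  ["human", "Please address the following inquiry:\n{input}"],
]);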
If you followed the steps above, you should already have a LangSmith account.
Head over to smith.langchain.com/hub and click "New Prompt".
Next, give the prompt a name and set its visibility (private by default), along with a description, README, use cases, language, and model. Note: the owner here is "@kenzic", which will be different for you. See the screenshot for the values.
Once you've created your prompt, you'll want to select the prompt type. For this task, we'll select "Chat Prompt".
Create a "System" message with the value:
Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>
Next, create a "Human" message with the value:
Please address the following inquiry:\n{input}
Before we commit this, we can test it out in the playground. To the right of the message chain, you will notice the section "Inputs" with the variables we specified in the messages. To confirm it's working as expected, I tested with the following:
context: The capital of France is Springfield. It was Paris but changed in 2024.
input: What is the capital of France
Once you have the Inputs defined, under Settings you'll want to select the model we're testing against. Select GPT-3.5-turbo. For this to work you'll need to add your OpenAI API key by clicking the "Secrets & API Keys" button. Great, now we're ready to test. Click the "Start" button and watch it generate the output. You should see something like:
Ah, yes, the capital of France, or should I say, Springfield! Paris may have been the capital in the past, but as of 2024, Springfield reigns supreme as the new capital of France. A change of this magnitude surely raises questions and eyebrows, but rest assured, the decision has been made and Springfield now holds the title of the capital of France. How utterly delightful!
Once we're happy with our prompt, we need to commit it. Simply click the "Commit" button!
Great, now that we have a finished prompt we'll want to update our code to reference it instead of the hardcoded prompt template.
First, we need to import the hub function to pull our template into our code:
import * as hub from "langchain/hub";
Next, let's delete the ChatPromptTemplate in the code and replace it with:
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt");
Note: you can also delete the ANSWER_CHAIN_SYSTEM_TEMPLATE variable, since it's no longer referenced.
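With the prompt now coming from the Hub, the rest of the chain works exactly as before. Below is a rough, self-contained sketch of how the pulled prompt might plug into a simple chain; the model choice, output parser, and example inputs are assumptions for illustration, not the demo project's exact code.

import * as hub from "langchain/hub";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Pull the prompt from LangChain Hub instead of defining it in code
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt");

// Pipe the prompt into a chat model and parse the reply to a plain string
const chain = answerGenerationChainPrompt
  .pipe(new ChatOpenAI({ model: "gpt-3.5-turbo" }))
  .pipe(new StringOutputParser());

// The input keys match the variables defined in the Hub prompt
const answer = await chain.invoke({
  context: "France is a country in Europe. Its capital is Paris.",
  input: "What is the capital of France?",
});

console.log(answer);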
Finally, let's test it out! Run yarn start to execute the script. If everything works properly, you'll see output in the voice of Mr. Burns informing you that the capital of France is Paris.
If you want to take it a step further, you can lock your prompts by the version. To do this, simply append a colon and the version number to the end of the name like so:
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt:[YOURVERSION]");

// for me it looks like:
const answerGenerationChainPrompt = await hub.pull("kenzic/mr-burns-answer-prompt:d123dc92");
That's it!
We've explored how LangChain Hub centralizes prompt management, enhances collaboration, and integrates into your workflow. To improve your efficiency with LangChain Hub, consider diving deeper into the customization and integration possibilities.
LangChain Hub is more than a tool; it's a catalyst for innovation and collaboration in AI development. Embrace this revolutionary platform and elevate your JavaScript LLM applications to new heights.
Throughout this guide, we tackled how to create a prompt in LangChain Hub, test it in the playground, commit it, and pull it into your JavaScript code, optionally pinned to a specific version.
Keep building and experimenting, and I'm excited to see how you'll push the boundaries of what's possible with AI and LangChain Hub!
To stay connected and share your journey, feel free to reach out through the following channels: