
ChatGPT sharing-How to develop an LLM application

PHPz, 2023-04-12

1 Background

ChatGPT has shocked the industry, and every sector is discussing large language models and general artificial intelligence. After more than fifty years of development, AI is now at a critical point in the horizontal restructuring of its industrial landscape. This change stems from a paradigm shift in NLP: from "pre-train and fine-tune" to "pre-train, prompt, and predict". In the new paradigm, downstream tasks adapt to the pre-trained model, so one large model can serve many tasks. This lays the foundation for a horizontal division of labor in the AI industry: large language models become infrastructure, and Prompt Engineering companies emerge one after another, focused on connecting users with models. The division of labor in the AI industry has taken initial shape, comprising underlying infrastructure (cloud service providers), large models, Prompt Engineering platforms, and end applications. Amid these changes, developers can make full use of large language models (LLMs) and Prompt Engineering to build innovative applications.

2 Applications under Prompt-Ops

Suppose we want to build an application on top of an LLM today. What are the biggest engineering problems we face?

  • The large language model cannot connect to the Internet and cannot obtain up-to-date information
  • The large language model does not have our private data and cannot answer domain-specific questions
  • The large language model's open API (text-davinci-003) does not have contextual abilities as strong as ChatGPT's
  • The large language model cannot drive other tools

2.1 Engineering frameworks such as Langchain solve these problems

Take Langchain as an example. Put simply, LangChain is a wrapper around an LLM's underlying capabilities: a kind of Prompt Engineering, or if you like, Prompt-Ops.

  • It can access various LLM services and abstracts the calls to different large language models
  • It can create PromptTemplates of all kinds, implementing customized prompt templates
  • It can create chains that combine calls to PromptTemplates
  • It can call various tools to accomplish things GPT-3 is currently bad at, such as search, math, linking private databases, and running Python code
  • It can use agents to let the LLM decide which actions to take and in what order; an action can be using a tool and observing its output, or returning to the user
  • It can model conversation history through its Memory module
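As a rough illustration of the PromptTemplate and chain ideas above, here is a sketch in plain Python. These are not LangChain's actual classes; every name is made up. A template fills user variables into a prompt, and a chain pipes each step's output into the next:

```python
def make_template(template):
    """Return a function that fills named slots in a prompt string."""
    def fill(**kwargs):
        return template.format(**kwargs)
    return fill

def chain(*steps):
    """Compose steps left to right: each step's output feeds the next."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

question_prompt = make_template("Answer concisely: {question}")

# Stand-in for a real model call (which would hit an LLM API).
fake_llm = lambda prompt: f"[LLM answer to: {prompt}]"

qa = chain(lambda q: question_prompt(question=q), fake_llm)
print(qa("Who wrote Hamlet?"))
```

A real chain would replace fake_llm with an API call, and could append further steps, such as a parser that extracts structured fields from the model's text.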

2.2 Some Langchain development examples

2.2.1 GPT combined with search


This example compares a demo built with ChatGPT alone to one built with LangChain. The input is "Who is Jay Chou's wife? What is her current age multiplied by 0.23?". ChatGPT/GPT-3.5 answers incorrectly because it has no search capability. The LangChain demo on the right, using OpenAI's GPT-3.5 API, outputs the correct result: it searches step by step for the right information and arrives at the right answer, with the intermediate steps handled automatically by the framework. I did nothing beyond typing in the question.
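Behind the scenes, the framework runs a loop in which the model alternates between choosing a tool and reading its observation. Here is a toy sketch of that loop; the scripted "LLM", the tools, and all the numbers below are made up for illustration and are not LangChain internals:

```python
def run_agent(llm, tools, question, max_steps=5):
    """Loop until the 'LLM' emits a final answer: choose an action,
    run the tool, append the observation, and ask again."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        kind, payload = llm(transcript)     # the model picks the next action
        if kind == "final":
            return payload
        observation = tools[kind](payload)  # run the chosen tool
        transcript += f"Action: {kind}({payload})\nObservation: {observation}\n"
    return "no answer within step budget"

# A scripted stand-in for the LLM; every value below is made up.
script = iter([
    ("search", "Jay Chou wife current age"),
    ("calc", "30 * 0.23"),
    ("final", "Her age times 0.23 is about 6.9 (illustrative numbers)"),
])
mock_llm = lambda transcript: next(script)

tools = {
    "search": lambda q: "Hannah Quinlivan, age 30 (made-up observation)",
    "calc": lambda expr: str(eval(expr)),  # toy calculator; never eval untrusted input in real code
}

answer = run_agent(mock_llm, tools, "Who is Jay Chou's wife? Her current age times 0.23?")
print(answer)
```

In the real framework the model's free-form "Thought/Action" text is parsed into the tool choice; here the script stands in for that parsing step.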

2.2.2 Converting natural language into Python code and self-correcting errors

This is a very striking example: during execution it discovers that a function is not defined, hits the error, and corrects itself.


2.2.3 Querying NBA data with GPT-3 + Statmuse + Langchain

Fuzzy API composition: querying NBA stats with GPT-3 + Statmuse + Langchain

Using Langchain together with the sports-data search site Statmuse, you can ask complex data questions and get accurate responses. For example: "What are the Boston Celtics' average defensive points per game this 2022-2023 NBA season? How does the percentage change compare to their average last season?"


2.2.4 Connect to Python REPL and open the browser to play music

A rather sci-fi scene: I connected the Python REPL tool with Langchain, typed "play me a song", and it imported the webbrowser package, ran code to open the browser, and played "Never Gonna Give You Up" for me.

from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import BashProcess, PythonREPL

llm = OpenAI(temperature=0)  # the agent's LLM; was assumed defined in the original snippet

def python_tool():
    bash = BashProcess()
    python_repl_util = Tool(
        name="Python REPL",
        func=PythonREPL().run,
        description="""A Python shell. Use this to execute python commands.
        Input should be a valid python command.
        If you expect output it should be printed out.""",
    )
    command_tool = Tool(
        name="bash",
        description="""A Bash shell. Use this to execute Bash commands.
        Input should be a valid Bash command.
        If you expect output it should be printed out.""",
        func=bash.run,
    )
    # math_tool = _get_llm_math(llm)
    # search_tool = _get_serpapi()
    tools = [python_repl_util, command_tool]
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
    agent.run("Play me a song")  # original input: "给我播放一首音乐"


2.2.5 Connecting private data

Connecting private data is very important for third-party companies building LLM applications. A few examples:

  • The LangchainJs documentation, combined with Langchain (AI-powered search for LangchainJS Documentation): you can ask directly about the documentation's content and technical details.


  • The database product Supabase has done the same thing, connecting its own documentation to ChatGPT so that developers can conveniently ask about and look up the technical problems they run into: https://supabase.com/docs


  • An example of government-information Q&A: Co-pilot for government

Legal documents and policy clauses are usually complex and tedious. In this demo, San Francisco government information is combined with GPT through Langchain, so that questions about the details receive accurate answers.


> Entering new AgentExecutor chain...
I need to find out the size limit for a storage shed without a permit and then search for sheds that are smaller than that size.
Action: SF Building Codes QA System
Action Input: "What is the size limit for a storage shed without a permit in San Francisco?"
Observation: The size limit for a storage shed without a permit in San Francisco is 100 square feet (9.29 m2).


Thought:Now that I know the size limit, I can search for sheds that are smaller than 100 square feet.
Action: Google
Action Input: "Storage sheds smaller than 100 square feet"
Observation: Results 1 - 24 of 279 ...


Thought:I need to filter the Google search results to only show sheds that are smaller than 100 square feet and suitable for backyard storage.
Action: Google
Action Input: "Backyard storage sheds smaller than 100 square feet"
Thought:I have found several options for backyard storage sheds that are smaller than 100 square feet and do not require a permit. 
Final Answer: The size limit for a storage shed without a permit in San Francisco is 100 square feet. There are many options for backyard storage sheds that are smaller than 100 square feet and do not require a permit, including small sheds under 36 square feet and medium sheds between 37 and 100 square feet.

2.3 Question answering over private data

Letting LLM applications interact with private data is extremely important. I have seen countless people asking questions ChatGPT cannot answer: whether it knows a particular person, details of their own company's business, all sorts of things unlikely to be in the pre-training dataset. These can all be solved with Langchain and LlamaIndex. Imagine combining private data with an LLM: it changes how the data is accessed. Getting the information you need through natural-language Q&A is a far more efficient way to interact with data than today's search or tagging and classification.

2.3.1 How to build an LLM question-answering system over private data


Vector databases now look like a key component in building LLM apps. An LLM's pre-training and fine-tuning cannot possibly cover our private data, so linking the LLM to that data is a critical requirement. Moreover, the LLM's "interface", natural language, is not as precise as a key-value lookup. At this stage we want the LLM to understand our knowledge base rather than merely search it for identical strings: we want to ask about details of the knowledge base and get answers (with sources) that reflect some understanding, and vector-similarity matching is a very suitable and key solution for that kind of search.

Another key point is that each LLM call is billed by token (that is, by the amount of text), and the current API has a context limit of 4,096 tokens, so with a large corpus we cannot pass all the data to the LLM in one go. That is why the flow in the first diagram is structured as it is: our private data is converted to vectors ahead of time and stored in Qdrant; at question time the user's question is converted to a vector and a similarity search in Qdrant returns the top-K results; those results (which at this point are already natural language) are then passed to the LLM to summarize into the final output.
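The store-then-retrieve flow described above can be sketched end to end with toy stand-ins: a bag-of-letters "embedder" instead of text-embedding-ada-002, and a plain in-memory list instead of Qdrant. Every name and document here is illustrative:

```python
import math

def toy_embed(text):
    """Stand-in embedder: a 26-dim bag-of-letters vector (a real system
    would call an embedding model such as text-embedding-ada-002)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isascii() and ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Private data", embedded ahead of time into an in-memory "vector database".
docs = [
    "Refunds are processed within 14 days.",
    "Our office is open Monday to Friday.",
    "Support is available via email.",
]
index = [(toy_embed(d), d) for d in docs]

def top_k(question, k=1):
    """Similarity search: return the k stored documents most similar to the question."""
    qv = toy_embed(question)
    return [d for _, d in sorted(index, key=lambda p: -cosine(p[0], qv))[:k]]

# At question time: retrieve context, then stuff it into the LLM prompt.
context = top_k("When will my refund be processed?")
prompt = f"Answer using only this context: {context}\nQuestion: When will my refund be processed?"
print(context[0])
```

The final prompt, rather than the whole corpus, is what gets sent to the LLM, which keeps the call within the token limit.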

2.3.2 The abstract flow for Q&A over private data

Here we use the flow diagrams from the Langchain community blog as an example.


Private data is split into chunks smaller than the LLM's context window, embedded as vectors, and stored in a vector database.


At question time, the question is embedded as a vector and a similarity search runs in the vector database; the top-K most relevant results are concatenated into the prompt and sent to the LLM to obtain the answer.
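The splitting step can be sketched as a simple character-based chunker. Real splitters such as Langchain's work on tokens, sentences, or document structure; the sizes and overlap below are illustrative:

```python
def split_text(text, chunk_size=200, overlap=50):
    """Split text into chunks of at most chunk_size characters, with
    `overlap` characters repeated between neighbours so sentences cut
    at a boundary still appear whole in some chunk. Requires
    overlap < chunk_size, or the loop would not advance."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "some long private document " * 40  # stand-in for a real file
print(len(split_text(doc)), "chunks")
```

Each chunk would then be embedded and stored alongside its source reference, so answers can cite where they came from.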

2.3.3 Important components

  • OpenAI Ada model: the text-embedding-ada-002 model quickly encodes text into a 1536-dimensional vector, which can be used to compute similarity between texts.
  • Langchain / LlamaIndex: Langchain includes a variety of text splitters and document connectors that make it easy to split files and index them into vector databases; LlamaIndex can load data from a vector store, just like any other data connector, and that data can then be used within LlamaIndex data structures.
  • Vector databases, with many options: Chroma / FAISS / Milvus / PGVector / Qdrant / Pinecone, etc.

2.3.4 OpenAI private deployment and cost issues

Let's talk about the recent news of OpenAI private deployment. Faced with huge amounts of private data, the Langchain approach works: use an embedding model (OpenAI's ada) to embed the input question, use a vector database such as Qdrant to manage private-data vectors and run vector search, and use Langchain as the glue in between. It solves the problem, but the token cost cannot be ignored. Private deployment plus fine-tuning may resolve most of the issues mentioned earlier: large, well-funded companies may use dedicated model instances and fine-tuning, while small companies and independent developers use frameworks such as Langchain. In the future, as OpenAI's LLM service capability overflows, prompts may no longer be needed and even Langchain's functionality may be absorbed; developing and integrating an LLM application may come down to a single API call.

2.4 The LLM application technology stack in 2023

The latest (2023) technology stack for quickly building an AI demo:

  • Hosting: Vercel
  • Front-end: Next.js
  • Back-end: Vercel with Flask
  • Database: Supabase
  • AI model: OpenAI / Replicate / Hugging Face
  • LLM framework layer: LangChain / LlamaIndex
  • Vector storage/search: Pinecone / FAISS

2.5 The biggest problems with Prompt-Ops today

There is some opposition to Prompt-Ops tools like Langchain (see stream.thesephist.com). The main argument is that in this class of tools/frameworks, natural language serves as the glue between code and the LLM, and using a non-deterministic language as control flow is a bit crazy. Moreover, evaluating model output quality is itself very troublesome today, with no good solution; many teams maintain a huge spreadsheet and rely on humans to evaluate. (There are also plans to use LLMs to evaluate LLMs, but these are still early.) So there may be a lot of work left before such applications reach production and face real users, rather than serving as Twitter demos.

Let's talk in more detail about the challenges of testing. Suppose your product has a set of prompts that works well during development; after it is handed over for testing, problems may only surface after hundreds or thousands of test runs. Because output quality cannot be guaranteed, actually launching to consumer users poses great challenges. And if you do not use fine-tuning services or dedicated model instances, then whenever OpenAI updates the model, every prompt in your production environment may need to be retested. Your prompts also need to be version-managed just like code; whether or not a prompt has changed, each version needs regression testing before going live. Without a good automated evaluation solution, a huge number of cases must be tested manually, which consumes a lot of manpower.
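The version-management-plus-regression idea above can be sketched as a tiny harness. Everything here, including the prompt registry, the case format, and the stand-in model, is illustrative rather than a real evaluation tool; checking free-form LLM output for substring matches is exactly the crude kind of check the text says is insufficient:

```python
PROMPTS = {
    "v1": "Answer briefly: {q}",
    "v2": "You are a careful assistant. Answer briefly: {q}",
}

CASES = [
    {"q": "What is 2+2?", "must_contain": "4"},
    {"q": "Capital of France?", "must_contain": "Paris"},
]

def regression_test(llm, version):
    """Run every case against one prompt version; return the questions that failed."""
    failures = []
    for case in CASES:
        answer = llm(PROMPTS[version].format(q=case["q"]))
        if case["must_contain"] not in answer:
            failures.append(case["q"])
    return failures

# Stand-in model so the harness itself can be demonstrated offline.
fake_llm = lambda prompt: "4" if "2+2" in prompt else "Paris"
print(regression_test(fake_llm, "v2"))  # empty list means all cases passed
```

In practice the same case set would be re-run against every prompt version and every model update, with results logged per version.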

There are many good engineering solutions for building LLM applications over private data, and it is easy to get a demo with impressive results, but such an application still needs to be treated with great caution. After all, we are not just building a project to show off on social media or in front of leadership. What the user gets is a dialog box, and natural language is so open-ended that even if you test tens of thousands of cases, unexpected results can still occur; after all, even products like the new Bing and ChatGPT have suffered prompt injection. Facing this uncertainty, working out how to guard against it in engineering and how to cover it in testing is work that mature products still have to do.

But I don't think this type of Prompt-Ops tool/framework should be dismissed entirely. At the current stage they are genuinely useful for building good demos and validating ideas.

3 Some possible product forms in the future

Let's talk about the forms LLM applications may take now that the ChatGPT API is open.

  • Conversational chat is the most intuitive application form, with the conversation history managed over the API.
  • Virtual-character chat: on top of basic chat, add character-defining prompts to the API's prefix messages to achieve an effect similar to Character.ai. Going further, this could power game characters, virtual humans, XR assistants, and so on.
  • Notion-like AI-assisted writing tools. Notion and FlowUs already have similar features. Integration into the publishing tools of various communities will also be a trend, lowering the threshold for users to publish and raising the quality of what they publish.
  • Data-summarization tools that implement Chat-Your-Data: let users supply documents and then chat with the data they provided. In essence this covers both public Internet data and users' private data.
  • Chat-Your-Data for large enterprises: each company combines its private data to provide better services on top of its existing business. For example, Dianping could combine user reviews so you could ask for "a bar that plays neo-soul and R&B music"; a product detail page could summarize all user reviews of the product and let users ask questions about it.
  • Integration with government, healthcare, education, and other sectors, combining institutions' official websites online with large screens offline to provide better public services.
  • Combined with other tools such as IFTTT or various private protocols, LLMs can drive more tools and systems, for example in IoT scenarios or an Office Copilot.

LLM applications are really a new mode of human-computer interaction that lets users communicate with existing systems in natural language. Many applications can even be simplified to nothing more than a chat window.

4 Summary

At present, because training and deploying general-purpose large models is so expensive, the conditions for an industry-level division of labor are basically in place: the world does not need many large models, and building applications on top of an LLM will be the natural choice for small and medium-sized enterprises and individual developers. The new programming/engineering paradigms require engineers to learn and understand them promptly. The current open-source technology stack can already meet the needs of most products, so try a quick demo to validate your ideas.

Reference:

  • Tutorial: ChatGPT Over Your Data (https://blog.langchain.dev/tutorial-chatgpt-over-your-data/)
  • Question Answering with LangChain and Qdrant without boilerplate (https://qdrant.tech/articles/langchain-integration/)
  • Atom Capital: In-depth discussion of the industrial changes brought about by ChatGPT (https://mp.weixin.qq.com/s/VZ6n4qlDx4bh41YvD1HqgQ)

