In-depth analysis, step by step to build your chatbot using GPT
Chatting with ChatGPT is fun and informative - you can explore new ideas just by talking with it. But these are casual use cases, and the novelty wears off quickly, especially once you realize that it can hallucinate.
How can you use ChatGPT more productively? Now that OpenAI has released the GPT-3.5 series of APIs, you can do much more than just chat. QA (question answering) is a very effective use case for businesses and individuals - ask a bot about your own files/data in natural language, and it answers quickly by retrieving information from the files and generating a response. Use it for customer support, synthesizing user research, personal knowledge management, and more.
Ask the bot questions related to your files. Image generated with Stable Diffusion.
This article will explore how to build a Q&A chatbot based on your own data, including why some methods don't work, and a step-by-step guide on how to use llama-index and the GPT API to build a document Q&A chatbot in an efficient way.
(If you just want to know how to build a Q&A chatbot, you can skip directly to the "Building a Document Q&A Chatbot Step by Step" section)
When ChatGPT came out, you may have thought of using it as an assistant in your work to save time and energy.
The first idea that comes to mind is to fine-tune the GPT model on your own data. However, fine-tuning costs quite a bit of money and requires a large dataset of examples, and it is impractical to fine-tune every time a file changes. More critically, fine-tuning does not make the model "know" all the information in a document; it teaches the model a new skill instead. Therefore, fine-tuning is not a good approach for (multi-)document question answering.
The second method is prompt engineering: provide the context in the prompt itself. For example, instead of asking the question directly, you can prepend the original document content to the actual question. But the GPT model's attention is limited - it can only accept a few thousand tokens in the prompt (about 4,000 tokens, or roughly 3,000 words). With thousands of customer feedback emails and hundreds of product documents, it is impossible to fit all the context into one prompt. Passing a long context to the API is also expensive, since pricing is based on the number of tokens used.
I will ask you questions based on the following context:
— Start of Context —
YOUR DOCUMENT CONTENT
— End of Context —
My question is: "What features do users want to see in the app?"
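A rough back-of-the-envelope estimate shows why stuffing everything into one prompt fails. The sketch below uses the article's ratio of about 4,000 tokens to 3,000 words; the email count and length are made-up illustration values:

```python
# Rough estimate: tokens ~ words * 4/3 (from the ~4000-token / ~3000-word ratio)
def estimate_tokens(word_count: int) -> int:
    return word_count * 4 // 3

CONTEXT_LIMIT = 4000  # approximate prompt limit for text-davinci-003

# Say we have 1,000 feedback emails of ~200 words each (hypothetical numbers)
total_words = 1000 * 200
total_tokens = estimate_tokens(total_words)

print(total_tokens)                   # far more than one prompt can hold
print(total_tokens > CONTEXT_LIMIT)   # True
```

Even a modest document collection overshoots the context window by two orders of magnitude, which is why the context must be filtered before it reaches the model.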
Since the prompt has a limit on the number of input tokens, I came up with an idea to solve the problem: first use an algorithm to search the documents and pick out the relevant excerpts, then pass only that relevant context to the GPT model together with the question. In this process, I used the simple and convenient gpt-index library (since renamed LlamaIndex).
Extract the relevant parts from the file and feed them back to the prompt.
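The retrieve-then-prompt idea can be sketched with a naive word-overlap scorer. This is a toy stand-in for what LlamaIndex actually does (it uses vector similarity, not word overlap), and the passages below are invented for illustration:

```python
def score(query: str, passage: str) -> int:
    # Crude relevance score: how many query words appear in the passage
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list[str], top_k: int = 2) -> list[str]:
    # Keep only the top_k most relevant excerpts, so the prompt stays short
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_k]

passages = [
    "Users want a dark mode in the app.",
    "The quarterly revenue grew by 10 percent.",
    "Several users asked for offline support in the app.",
]
query = "What features do users want to see in the app?"

# Build a compact prompt from only the retrieved excerpts
context = "\n".join(retrieve(query, passages))
prompt = f"Answer based on the following context:\n{context}\nQuestion: {query}"
print(prompt)
```

The irrelevant revenue passage is dropped, so the prompt stays within the token limit regardless of how many documents you have.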
In the next section, a step-by-step tutorial will be given to build a Q&A chatbot on your own data using LlamaIndex and GPT.
In this section, we will use LlamaIndex and GPT (text-davinci-003) to build a Q&A chatbot on top of existing documents, so you can ask questions about the documents in natural language and get answers from the chatbot.
Before starting this tutorial, you need an OpenAI API key and the documents you want to query.
The workflow is very simple and only requires a few steps: install the libraries, load your documents, build an index over them, and query the index.
What LlamaIndex does is convert the raw document data into a vector index, which is very efficient for querying. It will use this index to find the most relevant parts based on the similarity of the query and data. It will then insert the retrieved content into the prompt it will send to GPT so that GPT has the context to answer the question.
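"Similarity of the query and data" here means comparing embedding vectors, typically with cosine similarity. The sketch below uses tiny hand-made vectors as stand-ins; real embeddings are produced by a model (e.g. OpenAI's embedding API) and have far more dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(a, b) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" for a query and two document chunks
query_vec = [1.0, 0.0, 1.0]
chunk_vecs = {
    "chunk about app features": [0.9, 0.1, 0.8],
    "chunk about office lunch": [0.0, 1.0, 0.1],
}

# Pick the chunk whose vector points in the most similar direction
best = max(chunk_vecs, key=lambda name: cosine_similarity(query_vec, chunk_vecs[name]))
print(best)
```

The index stores one such vector per chunk; at query time only the best-scoring chunks are inserted into the prompt sent to GPT.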
First, install the libraries. Just run the following commands in a terminal or a Google Colab notebook; they install both LlamaIndex and OpenAI.
!pip install llama-index
!pip install openai
The next step is to import these libraries in Python and set the OpenAI API key in a new .py file.
# Import the necessary libraries
from llama_index import GPTSimpleVectorIndex, Document, SimpleDirectoryReader
import os

os.environ['OPENAI_API_KEY'] = 'sk-YOUR-API-KEY'
After you have installed the required libraries and imported them, you will need to build an index of your documents.
To load documents, you can use the SimpleDirectoryReader method provided by LlamaIndex, or you can load them from strings.
# Load from a directory
documents = SimpleDirectoryReader('your_directory').load_data()

# Load from strings, assuming the data is saved as strings text1, text2, ...
text_list = [text1, text2, ...]
documents = [Document(t) for t in text_list]
LlamaIndex also provides a variety of data connectors, including Notion, Asana, Google Drive, Obsidian, and more. You can find the available data connectors at https://llamahub.ai/.
After loading the documents, you can build the index simply as follows:
# Build a simple vector index
index = GPTSimpleVectorIndex(documents)
If you want to save the index and load it later, you can use the following methods:
# Save the index to the `index.json` file
index.save_to_disk('index.json')

# Load the index from the saved `index.json` file
index = GPTSimpleVectorIndex.load_from_disk('index.json')
Querying the index is simple:
# Query the index
response = index.query("What features do users want to see in the app?")
print(response)
An example response.
Then you get your answer. Under the hood, LlamaIndex takes the prompt, searches the index for relevant chunks, and passes both the prompt and the relevant chunks to GPT.
The steps above show only a very simple introductory use of LlamaIndex and GPT for question answering, but you can do much more. In fact, you can configure LlamaIndex to use a different large language model (LLM), use different types of indexes for different tasks, update an existing index with a new one, and so on. If you are interested, read the documentation at https://gpt-index.readthedocs.io/en/latest/index.html.
This article showed how to combine GPT and LlamaIndex to build a document Q&A chatbot. While GPT (and other LLMs) is powerful on its own, its power is greatly amplified when combined with other tools, data, and processes.