What are the differences between LLaMA, Alpaca, Vicuna, and ChatGPT? An evaluation of seven large chat models
Large language models (LLMs) are becoming popular all over the world. One of their most important applications is chat, where they are used for question answering, customer service, and much more. However, chatbots are notoriously difficult to evaluate, and it is not yet clear exactly under which circumstances these models work best. Evaluating LLMs is therefore very important.
Previously, a Medium blogger named Marco Tulio Ribeiro tested Vicuna-13B, MPT-7B-Chat, and ChatGPT (3.5) on some complex tasks. The results showed that Vicuna is a viable alternative to ChatGPT (3.5) for many tasks, while MPT is not yet ready for real-world use.
Recently, CMU associate professor Graham Neubig conducted a detailed evaluation of seven existing chatbots, produced an open-source tool for automatic comparison, and compiled the results into an evaluation report.
In this report, the evaluators present preliminary evaluation and comparison results for several chatbots. The goal is to make it easier for people to understand the current state of recent open-source models and API-based models.
Specifically, the reviewers created a new open-source toolkit, Zeno Build, for evaluating LLMs. The toolkit combines: (1) a unified interface for using open-source LLMs via Hugging Face or online APIs; (2) an online interface for browsing and analyzing results using Zeno; and (3) state-of-the-art metrics for evaluating generated text using Critique.
Full results are available at: https://zeno-ml-chatbot-report.hf.space/
The following is a summary of the evaluation results:
Model Overview
The reviewers used the DSTC11 customer service dataset. DSTC11 is a dataset from the Dialogue Systems Technology Challenge that aims to support more informative and engaging task-oriented conversations by leveraging subjective knowledge in review posts.
The DSTC11 dataset contains multiple subtasks, such as multi-turn dialogue and multi-domain dialogue. For example, one subtask is a multi-turn dialogue based on movie reviews, where the dialogue between the user and the system is designed to help the user find movies that suit their taste. They tested the following 7 models:
For all models, the reviewers used the default parameter settings: a temperature of 0.3, a context window of 4 previous conversation turns, and the standard prompt: "You are a chatbot tasked with making small-talk with people."
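The context-window setting above can be sketched as follows. This is an illustrative reconstruction, not the report's actual code; the `build_prompt` helper and the message format are hypothetical.

```python
def build_prompt(history, system_prompt, window=4):
    """Keep only the last `window` conversation turns, mirroring the
    report's setting of 4 previous turns, then prepend the system prompt."""
    recent = history[-window:]
    lines = [system_prompt] + [f"{speaker}: {text}" for speaker, text in recent]
    return "\n".join(lines)

history = [("User", "Hi"), ("Bot", "Hello!"), ("User", "How are you?"),
           ("Bot", "Great, thanks."), ("User", "What's the weather like?")]
prompt = build_prompt(
    history, "You are a chatbot tasked with making small-talk with people.")
print(prompt)  # the first turn ("Hi") falls outside the 4-turn window
```

With a 5-turn history and a 4-turn window, the oldest turn is dropped before the model ever sees it, which is exactly what the context-window experiments later in the report vary.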
Evaluation Metrics
The evaluators scored these models on how closely their output resembles human customer service responses, using the metrics provided by the Critique toolbox:
They also measured a length ratio, dividing the length of the output by the length of the gold-standard human reply, to gauge how verbose each chatbot is.
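The length ratio is simple to compute; a minimal sketch (the function name is ours, not Critique's):

```python
def length_ratio(output, reference):
    """Length of the system output divided by the length of the
    gold-standard human reply; values well above 1.0 indicate verbosity."""
    return len(output) / len(reference)

# A verbose chatbot reply versus a terse human gold standard:
print(length_ratio("Thank you so much for contacting us today!", "Thanks!"))  # 6.0
```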
Further analysis
To dig deeper into the results, the reviewers used Zeno's analysis interface. Specifically, they used its report generator to segment examples by position in the conversation (beginning, early, middle, and late) and by the gold-standard length of the human response (short, medium, long), and used its exploration interface to view poorly scored examples and better understand where each model fails.
What is the overall performance of the model?
According to all these metrics, gpt-3.5-turbo is the clear winner; Vicuna is the open-source winner; GPT-2 and LLaMA perform poorly, indicating the importance of training directly on chat data.
These rankings also roughly match those of the lmsys chat arena, which uses human A/B testing to compare models, but Zeno Build's results were obtained without any human scoring.
Regarding output length, gpt-3.5-turbo's output is much more verbose than that of other models, and it seems that models tuned for chat generally give verbose output.
Accuracy by Gold-Standard Response Length
Next, the reviewers used the Zeno report UI to dig deeper. First, they measured accuracy separately by the length of the human response, classifying responses as short (≤35 characters), medium (36-70 characters), or long (≥71 characters) and evaluating accuracy for each category individually.
gpt-3.5-turbo and Vicuna maintain accuracy even in later dialogue turns, while the other models' accuracy declines.
The next question is: how important is the context window size? The reviewers ran experiments with Vicuna, varying the context window from 1 to 4 previous utterances. Performance increased as the context window grew, indicating that larger context windows are important.
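The sweep over window sizes can be sketched as follows; this only shows the context truncation being varied, with the model call and scoring omitted.

```python
def truncate_context(history, window):
    """Keep only the last `window` utterances, as in the 1-4 turn sweep."""
    return history[-window:]

history = ["turn 1", "turn 2", "turn 3", "turn 4", "turn 5"]
for window in range(1, 5):  # window sizes 1 through 4, as in the report
    print(window, truncate_context(history, window))
```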
The results show that longer context is especially important in the middle and later parts of the conversation, where replies are less templated and depend more on what was said before.
More context is especially important when trying to generate shorter gold-standard outputs (probably because there is more ambiguity).
How important is the prompt?
The reviewers tried 5 different prompts: 4 general-purpose ones and one specifically tailored for customer service chat in the insurance domain:
In general, the reviewers did not detect significant differences across prompts, though the "cynical" chatbot was slightly worse and the tailored "insurance" chatbot was slightly better overall.
The differences between prompts are especially obvious in the first turn of the dialogue, which shows that prompts matter most when there is little other context to exploit.
Finally, the reviewers used Zeno's exploration UI to look for possible errors in gpt-3.5-turbo. Specifically, they looked at all examples with low chrf scores.
Failure to Probe
Sometimes the model fails to probe for more information when it is actually needed. For example, the models are not yet perfect at handling numbers (the phone number must be 11 digits, but the length of the number given by the model does not match the answer). This can be alleviated by modifying the prompt to remind the model of the required length of certain information.
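A failure like the one above can be caught with a simple programmatic check; the following is a sketch, assuming the 11-digit requirement from the dataset example (the function name is ours):

```python
import re

def has_required_digits(reply, required=11):
    """Return True if the reply contains exactly `required` digits in total,
    e.g. an 11-digit phone number as in the DSTC11 example above."""
    digits = re.sub(r"\D", "", reply)
    return len(digits) == required

print(has_required_digits("You can reach us on 123-456-789-01"))  # True (11 digits)
print(has_required_digits("Call us on 12345"))                    # False
```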
Duplicate content
Sometimes the same content is repeated multiple times; for example, the chatbot said "thank you" twice here.
Answers that make sense, but not in the human way
Sometimes the response is reasonable, just different from how a human would react.
These are the evaluation results. Finally, the reviewers hope that this report is helpful to researchers! To try other models, datasets, prompts, or hyperparameter settings, jump to the chatbot example in the zeno-build repository.