
Is ChatGPT really a "generalist"? Diyi Yang and colleagues put it to a thorough test.

WBOY
2023-04-13 21:25:01


Paper link: https://arxiv.org/pdf/2302.06476.pdf

Large language models (LLMs) have been shown to solve a variety of natural language processing (NLP) tasks: for a given downstream task, they can perform the task without any training data, guided only by an appropriate prompt. This ability to carry out new tasks from instructions can be seen as an important step toward artificial general intelligence.

Although current LLMs achieve good performance in some cases, they are still prone to various errors in zero-shot learning. In addition, the format of the prompt can have a substantial impact: for example, simply adding "Let's think step by step" to the prompt can significantly improve performance. These limitations show that current LLMs are not yet truly universal language systems.

Recently, ChatGPT, the LLM released by OpenAI, has attracted great attention in the NLP community. ChatGPT was created by training a GPT-3.5-series model with Reinforcement Learning from Human Feedback (RLHF). RLHF consists of three main steps: training a language model with supervised learning; collecting comparison data and training a reward model based on human preferences; and optimizing the language model against the reward model with reinforcement learning. Through RLHF training, ChatGPT was observed to have impressive capabilities in several respects, including generating high-quality responses to human input, refusing inappropriate questions, and correcting its earlier errors based on subsequent conversation.

Although ChatGPT shows strong conversational capabilities, the NLP community is still unclear whether ChatGPT achieves better zero-shot generalization capabilities compared to existing LLMs. To fill this research gap, researchers systematically studied ChatGPT's zero-shot learning capabilities by evaluating it on a large number of NLP datasets covering 7 representative task categories. These tasks include reasoning, natural language inference, question answering (reading comprehension), dialogue, summarization, named entity recognition, and sentiment analysis. With the help of extensive experiments, the researchers aimed to answer the following questions:

  • Is ChatGPT a general-purpose solver for NLP tasks? What types of tasks does ChatGPT perform well on?
  • If ChatGPT lags behind other models on some tasks, why?

To answer these questions, the authors compared the performance of ChatGPT and the state-of-the-art GPT-3.5 model (text-davinci-003) based on experimental results. In addition, they report zero-shot, fine-tuned, or few-shot results from recent works such as FLAN, T0, and PaLM.

Main conclusions

The authors state that, as far as they know, this is the first systematic study of ChatGPT's zero-shot capabilities across a diverse range of NLP tasks, aiming to provide a preliminary profile of ChatGPT. Their main findings are as follows:

  • Although ChatGPT, as a generalist model, shows some ability to perform many tasks, it generally performs worse than models fine-tuned for a given task (see Figure 1 and Section 4.3).
  • ChatGPT's strong reasoning ability is experimentally confirmed on arithmetic reasoning tasks (Section 4.2.1). However, ChatGPT generally performs worse than GPT-3.5 on commonsense, symbolic, and logical reasoning tasks, for example by producing indeterminate responses (Section 4.2.2).
  • ChatGPT outperforms GPT-3.5 on natural language inference tasks (Section 4.2.3) and question-answering (reading comprehension) tasks (Section 4.2.4) that favor reasoning ability, such as determining logical relationships between text pairs. Specifically, ChatGPT is better at handling text consistent with the facts (i.e., better at classifying entailment than non-entailment).
  • ChatGPT outperforms GPT-3.5 on dialogue tasks (Section 4.2.5).
  • On summarization tasks, ChatGPT generates longer summaries and performs worse than GPT-3.5. However, explicitly limiting summary length in the zero-shot instruction hurts summary quality, leading to even lower performance (Section 4.2.6).
  • Despite showing promise as generalist models, both ChatGPT and GPT-3.5 struggle on certain tasks, such as sequence tagging (Section 4.2.7).
  • ChatGPT's sentiment analysis ability is close to that of GPT-3.5 (Section 4.2.8).


Method

As mentioned previously, this study mainly compares the zero-shot learning performance of ChatGPT and GPT-3.5 (text-davinci-003) on different tasks. Specifically, the task instruction P and test question X are taken as input, the model is denoted f, and the generated target text Y = f(P, X) answers the test question. The instructions and input formats for the different tasks are shown in Figures 2 and 3.


Figure 2: Instructions and input formats for six tasks (sentiment analysis, natural language inference, named entity recognition, question answering, dialogue, and summarization). Instructions are shown in blue.


Figure 3: Instructions for the reasoning tasks.

For example, for a sentiment analysis task, the task instruction P asks the model to label the sentiment of the text as positive or negative, so the answer is "positive" or "negative". When the model reads instruction P together with input X ("a stunning lyrical work of considerable power and authenticity"), it is expected to output Y = "positive".
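The single-stage format Y = f(P, X) can be sketched as simple prompt construction. The instruction wording and the structure below are illustrative assumptions paraphrased from the description above, not the paper's exact prompts:

```python
# Minimal sketch of single-stage zero-shot prompting: the prompt is just
# the task instruction P followed by the test question X.

def build_prompt(instruction: str, text: str) -> str:
    """Concatenate task instruction P and input X into one prompt."""
    return f"{instruction}\n{text}"

# Task instruction P for sentiment analysis (paraphrased, not verbatim).
P = "Determine whether the sentiment of the following review is positive or negative."
# Test question X from the example above.
X = "a stunning lyrical work of considerable power and authenticity"

prompt = build_prompt(P, X)
# The prompt would then be sent to the model f to obtain Y = f(P, X);
# here we only show the constructed input.
print(prompt)
```

The model's raw completion would still need light post-processing (e.g., lowercasing and matching against the label set) before computing accuracy.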

Unlike the single-stage prompting method above, this study uses two-stage prompting (proposed by Kojima et al.) to implement zero-shot chain-of-thought (zero-shot-CoT).

In the first stage, the instruction P_1 containing "Let's think step by step" induces the model to generate a rationale R.

In the second stage, the rationale R generated in the first stage, together with the original input X and instruction P_1, forms the new input, and a new instruction P_2 serves as the trigger to extract the final answer.

All task instructions were taken from or inspired by the work of Brown et al., Ouyang et al., and Zhang et al. One final note: each new query to ChatGPT requires clearing the conversation first, to avoid any influence from previous examples.
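The two stages above can be sketched as two stateless model calls. The `query_model` function below is a placeholder standing in for the actual API (with the conversation cleared between calls, as the paper requires); its canned response and the trigger phrases are illustrative assumptions:

```python
# Sketch of two-stage zero-shot-CoT prompting (Kojima et al.).

def query_model(prompt: str) -> str:
    """Placeholder for a fresh, stateless call to ChatGPT / text-davinci-003."""
    # Canned rationale for demonstration only.
    return "She starts with 43 lives, loses 8, then gains 39: 43 - 8 + 39 = 74."

def zero_shot_cot(instruction: str, x: str) -> str:
    # Stage 1: instruction P_1 with the CoT trigger elicits a rationale R.
    p1 = f"{instruction}\n{x}\nLet's think step by step."
    rationale = query_model(p1)
    # Stage 2: P_1, X, and the rationale R form the new input; a new
    # trigger instruction P_2 extracts the final answer.
    p2 = f"{p1}\n{rationale}\nTherefore, the answer is"
    return query_model(p2)

final = zero_shot_cot(
    "Answer the following question.",
    "Wendy has 43 lives, loses 8, and gains 39 more. How many lives does she have?",
)
```

With a real model behind `query_model`, the second call would return only the extracted answer (e.g., "74") rather than a full rationale.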

Experiment

The experiments use 20 different datasets to evaluate ChatGPT and GPT-3.5, covering 7 task categories.

Arithmetic Reasoning

The accuracy of ChatGPT and GPT-3.5 without or with CoT on six arithmetic reasoning datasets is shown in Table 2. In experiments without CoT, ChatGPT outperformed GPT-3.5 on 5 of the datasets, demonstrating its strong arithmetic reasoning capabilities.


Figure 4 shows a case where GPT-3.5 gives the wrong answer. The question on the left reads: "Wendy is playing a video game and has 43 lives. In the hard part of the game she lost 8 lives. If she gets 39 more lives in the next level, how many lives will she have?" ChatGPT gave the correct answer, while GPT-3.5 did not. Overall, ChatGPT performs much better than GPT-3.5 when using CoT.


Commonsense, Symbolic, and Logical Reasoning

Table 3 reports the accuracy of ChatGPT and popular LLMs on commonsense, symbolic, and logical reasoning datasets. Two observations can be made. First, using CoT does not always improve performance on commonsense reasoning tasks, which may require more fine-grained background knowledge. Second, unlike arithmetic reasoning, ChatGPT performs worse than GPT-3.5 in many cases, indicating that GPT-3.5 is stronger at these kinds of reasoning.


To analyze the reasons, the study shows several failure cases of ChatGPT in Figure 5. We can observe that ChatGPT readily produces indeterminate responses, which leads to poor performance.


Natural Language Inference

Table 4 shows the results of different models on two natural language inference datasets, RTE and CB. In the zero-shot setting, ChatGPT achieves better performance than GPT-3.5, FLAN, T0, and PaLM, suggesting that ChatGPT has stronger zero-shot performance on NLI tasks.


Question Answering

Table 6 reports the accuracy of different models on the BoolQ dataset. ChatGPT outperforms GPT-3.5, suggesting that ChatGPT handles reasoning-oriented QA tasks better.


Dialogue

Table 8 shows the accuracy of ChatGPT and GPT-3.5 on the MuTual dataset (multi-turn dialogue reasoning). As expected, ChatGPT significantly outperforms GPT-3.5.

Figure 6 gives a concrete example: ChatGPT reasons more effectively about the given context, once again confirming its strong reasoning capability.


Summarization

Table 9 reports the ROUGE scores of ChatGPT and GPT-3.5 on the SAMSum dataset. Surprisingly, ChatGPT is inferior to GPT-3.5 on all metrics.
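ROUGE is the overlap metric behind Table 9. As a rough illustration of what it measures, here is a toy ROUGE-1 F1 (unigram overlap between a reference summary and a candidate); real evaluations use the official ROUGE toolkit with stemming, so numbers differ, and the example sentences below are made up:

```python
# Toy ROUGE-1 F1: harmonic mean of unigram precision and recall.
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clipped unigram overlap between reference and candidate.
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1(
    "amanda baked cookies for jerry",
    "amanda baked cookies and will bring jerry some",
)
```

This also illustrates the length effect noted above: a longer candidate summary keeps recall high but dilutes precision, dragging down F1.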


Named Entity Recognition

Table 10 reports the zero-shot performance of ChatGPT and GPT-3.5 on CoNLL03. We can see that the overall performance of ChatGPT and GPT-3.5 is very similar.
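Sequence-tagging tasks like CoNLL03 NER are typically scored with span-level F1 rather than plain accuracy. A minimal sketch of that computation over (start, end, type) spans follows; the example entities are made up for illustration:

```python
# Micro F1 over exact-match entity spans, as used for NER evaluation.

def span_f1(gold: set, pred: set) -> float:
    """A predicted span counts only if boundaries AND type both match."""
    tp = len(gold & pred)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical spans: (token_start, token_end, entity_type).
gold = {(0, 2, "PER"), (5, 6, "LOC")}
pred = {(0, 2, "PER"), (5, 6, "ORG")}   # second span has the wrong type

f1 = span_f1(gold, pred)
```

The exact-match requirement is one reason both models struggle here: a free-text LLM answer must be mapped back onto precise token boundaries before scoring.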


Sentiment Analysis

Table 11 compares the accuracy of different models on the sentiment analysis data set SST2. Surprisingly, ChatGPT performs about 1% worse than GPT-3.5.


For more information, please refer to the original paper.


Statement: This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for deletion.