A little coaxing can increase GPT-3 accuracy by 61%! Google and University of Tokyo research is shocking

The machine learning community woke up to a shock.

Because the latest research has found that just saying "Let's think step by step" to GPT-3 will allow it to correctly answer questions that it could not answer before.

For example, the following example:

Half of the 16 balls are golf balls, and half of these golf balls are blue. How many blue golf balls are there in total?

(The problem is not hard, but note that this is a zero-shot setting: the model is given no worked examples of similar problems in the prompt.)

If GPT-3 is asked directly to write out the answer, it gives the wrong one: 8.

But after adding the "spell" "Let's think step by step", GPT-3 first writes out its reasoning steps and then gives the correct answer: 4!
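The chain of reasoning the model writes out amounts to two successive halvings. As a trivial sketch:

```python
# The two reasoning steps GPT-3 writes out for the golf-ball question.
total_balls = 16
golf_balls = total_balls // 2       # half of the balls are golf balls -> 8
blue_golf_balls = golf_balls // 2   # half of those golf balls are blue -> 4
```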

And this is no coincidence; the research team verified it thoroughly in the paper.

The above question comes from the classic MultiArith dataset, which specifically tests a language model's ability to solve math word problems. GPT-3's zero-shot accuracy on it was originally only 17%.

The paper compares 9 candidate prompts. The top-ranked ones, which get GPT-3 to reason step by step, lift accuracy to more than 70%.

Even the simplest, "Let's think", raises accuracy to 57.5%.

It feels like a kindergarten teacher coaxing a child...

This technique requires no modification to GPT-3 itself. Someone has already reproduced it on OpenAI's official demo, and it even works when the prompt is switched to Chinese.

Given an English question with a Chinese hint, GPT-3 returns the correct answer in Chinese.

The Google researcher who first shared the paper on social media quipped that a new "all you need" has arrived.

Seeing this, onlookers let their imaginations run wild and started making jokes.

What would happen if you encouraged the AI with "You can do it, I believe in you"?

What about threatening the AI with "Time is running out" or "There's a gun to your head"?

Will telling the AI to "drive more carefully" become a self-driving solution?

Some also pointed out that this is much like the plot of The Hitchhiker's Guide to the Galaxy: the key to general artificial intelligence is knowing how to ask the AI the right question.

So, what is going on with this magical phenomenon?

Large language models discovered to be zero-shot reasoners

This is joint research by Google Brain and the University of Tokyo exploring how large language models perform in zero-shot settings.

The paper's title, "Large Language Models are Zero-Shot Reasoners", is a nod to the GPT-3 paper "Language Models are Few-Shot Learners".

The method builds on chain-of-thought prompting (CoT), which the Google Brain team proposed just this January.

CoT was first applied to few-shot learning: alongside the question, the prompt includes an example answered step by step to guide the AI.

This latest work proposes zero-shot CoT, whose main change is to do away with the worked examples.

  • The first step rewrites the question into the form "Q: xxx. A: Let's think step by step.", where the trigger sentence after "A:" elicits the language model's reasoning process.
  • The second step, an additional experiment, appends the generated reasoning followed by the prompt "Therefore, the answer is...", which gets the model to state the final answer.
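As a sketch, the two-stage prompting can be written as follows. Here `call_model` is a hypothetical stand-in for a real GPT-3 API call, stubbed with canned replies so the example runs on its own:

```python
REASONING_TRIGGER = "Let's think step by step."
ANSWER_TRIGGER = "Therefore, the answer is"

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns canned text for the
    # golf-ball example so the sketch is self-contained.
    if prompt.endswith(REASONING_TRIGGER):
        return (" Half of the 16 balls are golf balls, so there are 8 golf"
                " balls. Half of the golf balls are blue, so 4 are blue.")
    return " 4"

def zero_shot_cot(question: str) -> str:
    # Stage 1: the trigger sentence after "A:" elicits the reasoning chain.
    prompt1 = f"Q: {question}\nA: {REASONING_TRIGGER}"
    reasoning = call_model(prompt1)
    # Stage 2: append the reasoning, then ask for the final answer.
    prompt2 = f"{prompt1}{reasoning}\n{ANSWER_TRIGGER}"
    return call_model(prompt2).strip()

answer = zero_shot_cot("Half of the 16 balls are golf balls, and half of "
                       "these golf balls are blue. How many are blue?")
```

Swapping in a real model call is the only change needed to run this against GPT-3; the wrapper itself is the same for every question type.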

The biggest advantage of this is generality: there is no need to provide dedicated examples for different problem types.

The paper runs thorough experiments across a variety of problems, covering 12 test sets:

  • 6 math word-problem test sets: SingleEq, AddSub, SVAMP, and the more challenging MultiArith, AQUA-RAT, and GSM8K.
  • 2 commonsense reasoning test sets: CommonsenseQA and StrategyQA.
  • 2 symbolic reasoning test sets: Last Letter Concatenation and Coin Flip.
  • Plus two BIG-bench tasks: date understanding and tracking shuffled objects.

Compared with ordinary zero-shot learning, zero-shot CoT achieves better results in 10 of them.

(In the paper's results table, the value to the right of △ is the result of the additional experiment.)

On the harder MultiArith and GSM8K math tests, the team ran deeper experiments with the latest version of GPT-3, text-davinci-002 (175B).

Allowing 8 attempts and taking the best result pushes accuracy further, to 93%.

In their error analysis, the researchers also found that on many questions the AI's reasoning process is actually correct; the answers simply fail to converge to a single value, and multiple alternatives are produced.

At the end of the paper, the team proposes that this work not only serves as a baseline for zero-shot CoT, but also hopes the academic community will recognize the importance of fully exploring the zero-shot capabilities of large language models before constructing fine-tuning datasets and few-shot prompt templates.

The research team comes from the Matsuo Laboratory of the University of Tokyo.

The lab's head, Professor Yutaka Matsuo, is also the first AI expert to serve on SoftBank's board of directors.

Team member Shixiang Gu, a visiting professor, comes from the Google Brain team; he studied under Hinton, one of the three deep learning giants, as an undergraduate, and earned his doctorate from the University of Cambridge.

Adding a little "magic" has become a new trend in the AI circle

Why zero-shot CoT works remains to be explored.

However, one user's experiments suggested the method only works well on this version of GPT-3 (text-davinci-002); he tried the 001 version and saw little effect.

He listed an example of what he did.

Question: concatenate the last letters of each word in "machine learning".

With the prompt, GPT-3's answer concatenated all the letters of the two words instead.

In response, co-author Shixiang Gu replied that the "spell" in fact works on both the original and improved versions of GPT-3, and these results are reported in the paper.

Others questioned whether deep learning has turned into a game of hunting for "magic spells".

Meanwhile, Gary Marcus showed up among the critics, as usual.

He listed a failure case: even with the blessing of the "spell", GPT-3 could not work out whether Sally's cow would come back to life...

That said, examples where adding a little "magic" gives AI an immediate boost are not uncommon.

Some netizens shared that adding a few intermediate commands when using GPT-3 can indeed get more satisfactory results.

Previously, researchers from Google and MIT found that, without changing the underlying architecture, simply training the language model to "set breakpoints" the way programmers do when debugging rapidly improved its ability to read code and do arithmetic.

The principle is simple: for a program with many computation steps, the model encodes each step as text and records it in a temporary "scratchpad".

As a result, the calculation process of the model becomes clearer and more orderly, and the performance is naturally greatly improved.
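As an illustration only (not the actual training setup of that work), a scratchpad-style trace for long addition might look like this:

```python
def add_with_scratchpad(a: str, b: str):
    """Add two decimal numbers digit by digit, logging each intermediate
    step as text -- mimicking the scratchpad traces described above."""
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
    scratchpad, digits, carry = [], [], 0
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
        scratchpad.append(f"{da} + {db} -> digit {d}, carry {carry}")
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits)), scratchpad

result, trace = add_with_scratchpad("57", "68")  # result is "125"
```

The final answer is the same either way; what changes is that every intermediate step is made explicit, which is exactly what the scratchpad training exploits.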

The InstructGPT model used as the test model in this very experiment is another typical example.

Simply by letting GPT-3 learn from human feedback via reinforcement learning, its rate of incorrect answers drops significantly.

Specifically, human demonstration answers are first used to fine-tune the model; then several different outputs are collected for a given question, ranked by humans, and a reward model (RM) is trained on this dataset.

Finally, using the RM as the reward function, the GPT-3 policy is fine-tuned with the Proximal Policy Optimization (PPO) algorithm to maximize reward.
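The reward model in this recipe is typically trained with a pairwise ranking loss over the human-ranked answers. A minimal numeric sketch, with made-up scalar scores standing in for real RM outputs (a real RM is a fine-tuned transformer):

```python
import math

def pairwise_ranking_loss(r_chosen: float, r_rejected: float) -> float:
    # -log(sigmoid(r_chosen - r_rejected)): small when the human-preferred
    # completion already scores higher than the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# RM already prefers the chosen answer -> small loss.
good = pairwise_ranking_loss(2.0, -1.0)
# RM prefers the rejected answer -> large loss, pushing scores apart.
bad = pairwise_ranking_loss(-1.0, 2.0)
```

Minimizing this loss over many ranked pairs teaches the RM to score answers the way human raters do, which is what makes it usable as PPO's reward function.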

Aran Komatsuzaki, the Twitter blogger who ignited this topic, is himself the one who originally discovered that adding "Unreal Engine" to a prompt makes the quality of AI-generated images soar.

Eric Jang, formerly of Google's robotics team, also previously found that reinforcement learning can use similar thinking to improve computational efficiency.

Some also remarked: isn't this trick exactly what we do with our own brains?

In fact, Bengio has previously argued, drawing on brain science, that AI should operate more the way the human brain does.

Human cognitive tasks can be divided into system 1 cognition and system 2 cognition.

System 1 cognitive tasks refer to those tasks that are completed unconsciously. For example, you can immediately identify what you are holding in your hand, but you cannot explain to others how you completed this process.

System 2 cognitive tasks refer to cognitions that the human brain needs to complete according to certain steps. For example, if you do an addition and subtraction calculation, you can clearly explain how you arrived at the final answer.

The "spell" added this time is to allow AI to go one step further and learn to think in steps.

Faced with this trend, some scholars believe that "prompt engineering is replacing feature engineering."

So "cue word hunter" will become the nickname of the next generation of NLP researchers?

Paper address: https://www.php.cn/link/cc9109aa1f048c36d154d902612982e2

Reference links:

[1] https://twitter.com/arankomatsuzaki/status/1529278580189908993

[2] https://evjang.com/2021/10/23/generalization.html
