


A little coaxing can boost GPT-3's accuracy by 61%! Stunning research from Google and the University of Tokyo
Overnight, the machine learning community found itself in a state of shock.
The latest research has found that simply saying "Let's think step by step" to GPT-3 lets it correctly answer questions it previously got wrong.
Take the following example:
Half of the 16 balls are golf balls, and half of these golf balls are blue. How many blue golf balls are there in total?
(The problem is not hard, but note that this is zero-shot learning: the model is given no worked examples of this kind of problem.)
If GPT-3 is asked directly to write the answer, it gives the wrong one: 8.
But after adding the "spell" that tells it to think step by step, GPT-3 first writes out its reasoning (16 ÷ 2 = 8 golf balls, and 8 ÷ 2 = 4 of them are blue) and then gives the correct answer: 4!
And this is no coincidence; the research team verified it thoroughly in the paper.
The question above comes from the classic MultiArith dataset, which specifically tests a language model's ability to solve math word problems; GPT-3's zero-shot accuracy on it was originally only 17%.
The paper compares the 9 most effective trigger prompts. The top 6, all of which nudge GPT-3 to think step by step, push accuracy above 70%.
Even the simplest variant, "Let's think", lifts accuracy to 57.5%.
It feels like a kindergarten teacher coaxing a child...
The technique does not appear to require any special modification of GPT-3: people have reproduced it on OpenAI's official demo, and it even works when the prompt is switched to Chinese.
Given an English question with a Chinese hint, GPT-3 produces the correct answer in Chinese.
The Google researcher who first shared the paper on social media quipped that a new "all you need" had just been added to the list.
Seeing this, people across the field let their imaginations run wild and started making jokes.
What would happen if you encouraged the AI with "You can do it, I believe in you"?
What if you threatened it with "Time is running out" or "There's a gun to your head"?
Would telling the AI to "drive more carefully" count as a self-driving solution?
Some also pointed out that this is practically the plot of "The Hitchhiker's Guide to the Galaxy": the key to artificial general intelligence is knowing how to ask the AI the right question.
So, what is going on with this magical phenomenon?
Large language models turn out to be zero-shot reasoners
This is joint research by Google Brain and the University of Tokyo that explores how large language models perform in zero-shot settings.
The paper's title, "Large Language Models are Zero-Shot Reasoners", is also a nod to the GPT-3 paper "Language Models are Few-Shot Learners".
The method belongs to the family of chain-of-thought prompting (CoT), proposed by the Google Brain team in January this year.
The earliest CoT was used for few-shot learning: alongside the question, the prompt includes a worked, step-by-step example answer to guide the model, as in the sketch below.
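For illustration, a few-shot CoT prompt of that kind might look like the following; the worked example is in the spirit of the original CoT paper, and the exact wording here is our own paraphrase.

```python
# Few-shot CoT: the prompt itself contains a worked, step-by-step example,
# so the model imitates the reasoning style on the new question.
few_shot_cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: Half of the 16 balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
A:"""
```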
This latest work proposes zero-shot CoT, and the main change is that the worked examples are no longer needed.
- Step one rewrites the question into the form "Q: xxx. A: ...", where a trigger sentence after "A:" (such as "Let's think step by step") elicits the model's reasoning process.
- Step two appends the reasoning generated in step one, followed by an answer trigger ("The answer is ..."), prompting the model to state the final answer.
The biggest advantage of this is that it is universal: there is no need to craft dedicated examples for each problem type.
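Below is a minimal sketch of this two-stage prompting. The helper name `complete` is our own placeholder for whatever call sends a prompt to the language model and returns its completion; it is not an API from the paper.

```python
def zero_shot_cot(question: str, complete) -> str:
    """Two-stage zero-shot CoT prompting (sketch).

    `complete(prompt)` is assumed to be any function that queries a large
    language model and returns the generated text.
    """
    # Stage 1: trigger the reasoning with "Let's think step by step."
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = complete(reasoning_prompt)

    # Stage 2: feed the reasoning back and extract the final answer.
    answer_prompt = f"{reasoning_prompt}{reasoning}\nThe answer is"
    return complete(answer_prompt)
```

On the golf-ball question above, stage 1 should produce the intermediate arithmetic and stage 2 should return "4".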
The paper runs thorough experiments across a wide range of problems, covering 12 test sets:
- 6 mathematical problem test sets, SingleEq, AddSub, SVAMP and the more challenging MultiArith, AQUA-RAT, GSM8K.
- 2 common sense reasoning test sets, CommonsenseQA and StrategyQA.
- 2 symbolic reasoning test sets, Last Letter Concatenation and Coin Flip.
- Plus two BIG-bench tasks: Date Understanding and Tracking Shuffled Objects.
Compared with ordinary zero-shot prompting, zero-shot CoT achieves better results on 10 of them.
(In the paper's results table, values to the right of the △ are from the follow-up experiments.)
On the harder MultiArith and GSM8K math tests, the team ran more in-depth experiments with the latest version of GPT-3, text-davinci-002 (175B).
If the model is given 8 attempts and the best result is kept, accuracy improves further to 93%.
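The article does not say exactly how the 8 attempts are combined; one common scheme is self-consistency, i.e. sampling several reasoning paths and keeping the most frequent answer. A rough sketch, reusing the hypothetical `zero_shot_cot` helper from above:

```python
from collections import Counter

def best_of_n(question: str, complete, n: int = 8) -> str:
    """Sample n zero-shot CoT answers and return the most common one.

    Majority voting is only one plausible reading of "8 attempts";
    the paper's exact aggregation may differ.
    """
    answers = [zero_shot_cot(question, complete).strip() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```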
In the error analysis, the researchers also found that for many questions the model's reasoning is actually correct; the problem is that when the reasoning does not converge to a single value, the model outputs several alternative answers.
At the end of the paper, the team proposes that this work not only serves as a baseline for zero-shot CoT, but also hopes to make the community realize that the zero-shot capabilities of large language models deserve to be fully explored before constructing fine-tuning datasets or few-shot prompt templates.
The research team comes from the Matsuo Laboratory of the University of Tokyo.
The lab's head, Professor Yutaka Matsuo, is also the first AI expert to sit on SoftBank's board of directors.
Among the team members, visiting professor Shane Gu (Gu Shixiang) comes from the Google Brain team; he studied under deep learning pioneer Geoffrey Hinton as an undergraduate and received his PhD from the University of Cambridge.
Adding a little "magic" has become a new trend in the AI community
Why zero-shot CoT works remains to be explored.
However, one person's experiments suggested the method only works for GPT-3's text-davinci-002; he tried the 001 version and saw little effect.
He posted an example of what he tried.
Question: Please connect the last letters of each word in machine and learning.
Even with the prompt, GPT-3's answer concatenated all the letters of the two words rather than just the last ones.
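For reference, the task only asks for the last letter of each word; a one-line check makes the expected output explicit:

```python
words = ["machine", "learning"]
print("".join(w[-1] for w in words))  # expected answer: "eg", not all of the letters
```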
In response, co-author Shane Gu replied that the "spell" actually works on both the original and the improved versions of GPT-3, and that these results are reported in the paper.
Others questioned whether deep learning has turned into a game of hunting for "magic spells".
Meanwhile, Gary Marcus once again appeared among the critics.
He too posted a failure case: even with the "spell", GPT-3 could not work out whether Sally's cow would come back to life...
Still, it is worth noting that cases like this, where a little added "magic" gives the AI an immediate boost, are not uncommon.
Some netizens shared that adding a few intermediate instructions when using GPT-3 does indeed yield more satisfying results.
Earlier, researchers from Google and MIT found that, without changing the underlying architecture, simply training the language model to set "breakpoints" the way programmers do when debugging quickly improved its ability to read code and do arithmetic.
The principle is simple: for a computation with many steps, the model writes each step out as text and records it in a temporary buffer called a "scratchpad".
As a result, the model's computation becomes clearer and more orderly, and its performance naturally improves a great deal.
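As a rough illustration (our own toy example, not taken from that paper), a scratchpad target for multi-digit addition spells out every intermediate step and carry before the final answer:

```python
# Illustrative scratchpad-style training target: intermediate computation is
# written out as text inside the scratch region, then the answer follows.
scratchpad_example = """\
Input: 29 + 57
<scratch>
9 + 7 = 16, write down 6, carry 1
2 + 5 + 1 = 8, write down 8
</scratch>
Answer: 86
"""
print(scratchpad_example)
```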
The InstructGPT version of GPT-3 used in this experiment is another typical example.
Simply by letting GPT-3 learn from human feedback through reinforcement learning, its habit of answering questions incorrectly improves significantly.
Specifically, the model is first fine-tuned on human demonstration answers; then several different outputs are collected for a given question, humans rank them, and a reward model (RM) is trained on this ranking data.
Finally, using the RM as the reward function, the Proximal Policy Optimization (PPO) algorithm fine-tunes the GPT-3 policy with reinforcement learning to maximize reward.
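As a sketch of the reward-model step (the standard pairwise ranking objective, illustrated with our own numbers rather than OpenAI's code), the reward model is trained so that the answer humans ranked higher receives a larger score:

```python
import numpy as np

def pairwise_ranking_loss(r_preferred: float, r_rejected: float) -> float:
    """-log sigmoid(r_preferred - r_rejected): the loss shrinks as the
    preferred answer's score moves above the rejected answer's score."""
    return float(-np.log(1.0 / (1.0 + np.exp(-(r_preferred - r_rejected)))))

# Example: preferred answer scored 1.2, rejected 0.3 -> loss around 0.34
print(pairwise_ranking_loss(1.2, 0.3))
```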
Aran, the Twitter blogger who set off this whole discussion, was also the person who originally discovered that adding "Unreal Engine" to a prompt could make the quality of AI-generated images soar.
Former Google robotics lead Eric Jang had likewise found earlier that reinforcement learning can use similar thinking to improve compute efficiency.
Some even remarked that this kind of trick is exactly what we do with our own brains.
In fact, Bengio has previously argued, drawing on brain science, that AI should operate more the way the human brain does.
Human cognitive tasks can be divided into system 1 cognition and system 2 cognition.
System 1 cognitive tasks refer to those tasks that are completed unconsciously. For example, you can immediately identify what you are holding in your hand, but you cannot explain to others how you completed this process.
System 2 cognitive tasks refer to cognitions that the human brain needs to complete according to certain steps. For example, if you do an addition and subtraction calculation, you can clearly explain how you arrived at the final answer.
The "spell" added this time is to allow AI to go one step further and learn to think in steps.
Faced with this trend, some scholars believe that "prompt engineering is replacing feature engineering".
So will "prompt hunter" become the nickname of the next generation of NLP researchers?
Paper address: https://www.php.cn/link/cc9109aa1f048c36d154d902612982e2
Reference links:
[1] https://twitter.com/arankomatsuzaki/status/1529278580189908993
[2] https://evjang.com/2021/10/23/generalization.html