Microsoft turned GPT-4 into a medical expert with prompt engineering alone! Beating a dozen heavily fine-tuned models, its professional-exam accuracy exceeded 90% for the first time
Microsoft's latest research once again demonstrates the power of prompt engineering: without additional fine-tuning or expert curation, GPT-4 can become an "expert" through prompts alone.
Using Medprompt, the prompting strategy they propose, GPT-4 achieved the best results on all nine test sets of the MultiMedQA medical benchmark.
On the MedQA dataset (US Medical Licensing Examination questions), Medprompt pushed GPT-4's accuracy past 90% for the first time, surpassing fine-tuned models such as BioGPT and Med-PaLM.
The researchers also note that Medprompt is general-purpose: it applies not only to medicine but extends to fields such as electrical engineering, machine learning, and law.
As soon as the study was shared on X (formerly Twitter), it drew wide attention.
Wharton School professor Ethan Mollick and Artificial Intuition author Carlos E. Perez, among others, reposted it.
Carlos E. Perez commented that "an excellent prompting strategy can substitute for a lot of fine-tuning".
One netizen said they had sensed this coming for a long time, and that it was great to finally see the results land.
Others called it truly "radical": GPT-4 is a technology that can change the industry, yet we are still far from hitting the limits of prompting, let alone the limits of fine-tuning.
Combining prompt strategies to "transform" GPT-4 into an expert
Medprompt combines multiple prompting strategies, built on three techniques:
- Dynamic few-shot selection
- Self-generated chain of thought
- Choice shuffling ensemble
Let's walk through them one by one.
Dynamic few-shot selection
Few-shot learning is an effective way for a model to learn in context quickly: supply a handful of examples, and the model rapidly adapts to a specific domain and learns to follow the task's format.
The few-shot examples used in task-specific prompts are usually fixed, which places high demands on how representative and broad they are.
Previously, domain experts would hand-craft these examples, but even then there is no guarantee that a fixed, expert-curated set is representative for every task input.
Microsoft's researchers instead proposed dynamic few-shot examples.
The idea is to use the task's training set as the source of few-shot examples: if the training set is large enough, different few-shot examples can be selected for different task inputs.
Concretely, the researchers first used the text-embedding-ada-002 model to generate a vector representation for every training and test sample. Then, for each test sample, the k most similar training samples are retrieved by comparing vector similarity, as sketched below.
Compared with fine-tuning, dynamic few-shot selection exploits the training data without requiring extensive updates to the model's parameters.
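To make the retrieval step concrete, here is a minimal Python sketch of embedding-based nearest-neighbor example selection, assuming the OpenAI Python SDK and its embeddings endpoint; the helper names (embed, select_few_shot) and the cosine-similarity details are illustrative, not the paper's code.

```python
# A minimal sketch of dynamic few-shot selection, assuming the OpenAI Python
# SDK (v1) and its embeddings endpoint; helper names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Embed a list of strings with text-embedding-ada-002."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

def select_few_shot(test_question, train_questions, train_embeddings, k=5):
    """Return the k training questions most similar to the test question."""
    q = embed([test_question])[0]
    # Cosine similarity between the test vector and every training vector.
    sims = (train_embeddings @ q) / (
        np.linalg.norm(train_embeddings, axis=1) * np.linalg.norm(q)
    )
    top = np.argsort(sims)[-k:][::-1]  # indices of the k nearest neighbors
    return [train_questions[i] for i in top]

# Embed the training set once up front, then reuse it for every test sample:
# train_embeddings = embed(train_questions)
```

Precomputing the training embeddings once keeps the per-question cost to a single embedding call plus a vector comparison.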
Self-generated chain of thought
Chain-of-thought (CoT) prompting lets the model think step by step, generating a series of intermediate reasoning steps.
Previous approaches relied on experts manually writing a few prompt examples with thought chains.
Here, the researchers found that GPT-4 can simply be asked to generate thought chains for the training examples itself.
They also pointed out that these automatically generated chains may contain flawed reasoning steps, so they added a verification label as a filter, which effectively reduces errors.
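Below is a hedged sketch of both steps: prompting GPT-4 to produce a chain of thought for a training question, then filtering out chains whose final answer disagrees with the known label. The template wording and helper names are assumptions, not the paper's exact prompt.

```python
# A hedged sketch of self-generated chain of thought plus the answer-check
# filter; the template wording is illustrative, not the paper's exact prompt.
from openai import OpenAI

client = OpenAI()

COT_TEMPLATE = """## Question
{question}
{options}

## Answer
Let's think step by step, and end with a line of the form
"Therefore, the answer is <letter>"."""

def generate_cot(question, options, gold_letter, model="gpt-4"):
    """Ask the model for a reasoning chain, keeping it only when the final
    answer matches the ground-truth label (the verification filter)."""
    prompt = COT_TEMPLATE.format(question=question, options=options)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    cot = resp.choices[0].message.content
    # A chain whose conclusion disagrees with the known answer is strong
    # evidence of a flawed intermediate step, so discard it.
    if f"the answer is {gold_letter.lower()}" not in cot.lower():
        return None
    return cot
```

The filter exploits the fact that the training labels are known: a wrong conclusion is a cheap, reliable proxy for flawed intermediate reasoning.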
Compared with the expert-crafted chain-of-thought examples in the Med-PaLM 2 work, the rationales GPT-4 generates for itself are longer, and their step-by-step reasoning is finer-grained.
Choice shuffling ensemble
When answering multiple-choice questions, GPT-4 can show a bias: it tends to always choose A, or always choose B, regardless of what the options say. This is position bias.
To counter it, the researchers shuffle the order of the original options. For example, an original ordering of ABCD can become BCDA, CDAB, and so on.
GPT-4 then makes multiple rounds of predictions, with a different option ordering each round, which "forces" it to consider the options' content.
Finally, the predictions from all rounds are put to a vote, and the most consistently chosen option wins.
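A minimal sketch of that voting loop follows, assuming a hypothetical ask_model helper that shows the model a question under one option ordering and returns the letter it picked.

```python
# A minimal sketch of choice-shuffling ensembling. `ask_model` is a
# hypothetical callable that presents the question with one option ordering
# and returns the letter the model picked for that ordering.
import random
from collections import Counter

def shuffled_ensemble(question, options, ask_model, rounds=5):
    """options: dict mapping original labels to text, e.g. {'A': ..., 'D': ...}."""
    letters = list(options)          # original labels, e.g. ['A','B','C','D']
    votes = []
    for _ in range(rounds):
        order = letters[:]
        random.shuffle(order)        # e.g. ABCD -> CADB
        # Re-letter the shuffled contents so each round presents a fresh A-D.
        shuffled = {new: options[old] for new, old in zip(letters, order)}
        picked = ask_model(question, shuffled)      # letter under the shuffle
        votes.append(order[letters.index(picked)])  # map back to original label
    # Majority vote across rounds picks the most consistent option.
    return Counter(votes).most_common(1)[0][0]
```

Mapping each pick back to the original label before voting is the key step: it separates what the model chose by content from where that choice happened to sit.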
The combination of the above prompt strategies is Medprompt; a rough end-to-end composition follows, and then the test results.
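Under the same assumptions as the sketches above, the three pieces might compose like this (a rough illustration, not the paper's implementation; cot_cache and ask_model are assumed helpers):

```python
# A rough end-to-end composition of the three sketches above; `cot_cache`
# (verified chains keyed by training question) and `ask_model` are assumed.
def medprompt_answer(test_question, options, train_questions,
                     train_embeddings, cot_cache, ask_model, k=5, rounds=5):
    # 1. Dynamic few-shot: retrieve the k nearest training questions.
    shots = select_few_shot(test_question, train_questions, train_embeddings, k)
    # 2. Self-generated CoT: pair each shot with its pre-verified chain.
    examples = [(q, cot_cache[q]) for q in shots if q in cot_cache]
    # 3. Choice shuffling: vote over several option orderings.
    prompted = lambda q, opts: ask_model(q, opts, examples)
    return shuffled_ensemble(test_question, options, prompted, rounds)
```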
Best results across multiple tests
For evaluation, the researchers used the MultiMedQA benchmark.
With the Medprompt strategy, GPT-4 achieved the highest scores on all nine benchmark datasets in MultiMedQA, beating Flan-PaLM 540B and Med-PaLM 2.
The researchers also examined Medprompt's performance on "eyes-off" data, meaning data the model never saw during training or optimization, which tests whether the model is overfitting the training data.
GPT-4 with Medprompt performed well across these held-out medical benchmarks, with an average accuracy of 91.3%.
The researchers ran ablation experiments on the MedQA dataset to explore each component's relative contribution to overall performance.
Of the three, automatically generated chain of thought delivered the largest gain.
The chains GPT-4 generated for itself scored higher than the expert-curated chains used in Med-PaLM 2, and they require no manual intervention.
Finally, the researchers explored Medprompt's cross-domain generalization using six datasets from the MMLU benchmark, covering electrical engineering, machine learning, philosophy, professional accounting, professional law, and professional psychology.
They also added two datasets of NCLEX (the US nursing licensure examination) questions.
The results show that Medprompt's gains on these datasets mirror its improvements on the MultiMedQA medical datasets, raising average accuracy by 7.3%.
Paper: https://arxiv.org/pdf/2311.16452.pdf