Does fine-tuning large models have to rely on human data? DeepMind: Self-training with feedback is better

In response to the common practice of fine-tuning large models primarily on human-generated data, Google DeepMind has explored a more efficient way to reduce this dependence.


Large language models (LLMs) are reshaping the deep learning landscape, generating human-quality text and solving a wide variety of language tasks. While supervised fine-tuning on human-collected data further improves performance on specific tasks, obtaining high-quality human data is a significant bottleneck. This is especially true for tasks that involve solving complex problems, which require substantial resources and expertise.

How to solve this? Synthetic data generated by models is a promising alternative: it is scalable and cost-effective, as long as its quality is maintained.

While LLMs can self-evaluate the data they generate, in this paper Google DeepMind explores a simpler setup that uses an external scalar feedback signal as a quality indicator for each generated sample.


Paper address: https://arxiv.org/pdf/2312.06585.pdf

To study training on model-generated data, the researchers considered a simple but powerful self-training setup for language models that requires only two capabilities: generating samples from the model, and evaluating those samples with a scoring mechanism.

To ensure clarity and consistency, the researchers adopted the reinforced self-training method ReST^EM and showed that it can be cast as reinforcement learning via expectation-maximization (EM). Specifically, ReST^EM alternates between expectation and maximization steps.

  1. Generation (E-step): The language model generates multiple output samples for each input context, and then filters these samples using binary rewards to collect a training dataset.
  2. Improvement (M-step): The original language model is supervised fine-tuned on the training data set from the previous E-step and then used in the next E-step.
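The two steps above can be sketched as a loop on a toy task. The following is a minimal, self-contained illustration, not the paper's implementation: the "policy" is just a success probability on a toy sum-prediction task, and the M-step "fine-tuning" is replaced by nudging that probability upward in proportion to the filtered data collected. All names here (`binary_reward`, `rest_em`) are illustrative.

```python
import random

def binary_reward(x, y):
    """Scorer: 1 if the sampled output solves the toy task, else 0."""
    return 1 if y == x[0] + x[1] else 0

def rest_em(contexts, p=0.2, iterations=3, samples_per_context=16, seed=0):
    rng = random.Random(seed)
    filtered = []
    for _ in range(iterations):
        # Generation (E-step): sample outputs, keep only reward-1 samples.
        filtered = []
        for x in contexts:
            for _ in range(samples_per_context):
                correct = rng.random() < p
                y = x[0] + x[1] if correct else x[0] + x[1] + rng.randint(1, 9)
                if binary_reward(x, y):
                    filtered.append((x, y))
        # Improvement (M-step): stand-in for supervised fine-tuning on the
        # filtered dataset -- here we simply raise the success probability
        # in proportion to how much reward-1 data was collected.
        total = len(contexts) * samples_per_context
        p = min(1.0, p + 0.5 * len(filtered) / total)
    return p, filtered

final_p, data = rest_em([(1, 2), (3, 4), (5, 6)])
```

In a real run, the E-step would sample from an LLM and the M-step would supervised-fine-tune the base model on the filtered dataset; the structure of the loop is the same.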

The researchers note that ReST^EM and its variants have succeeded in enhancing language models across various domains, including machine translation, semantic parsing, preference alignment, and elementary reasoning.

In addition, prior work applied ReST^EM mainly to relatively small models (up to 7 billion parameters), with limited scalability to larger models. This paper therefore examines the effectiveness and scalability of model-generated synthetic data versus human-generated data in two challenging but understudied domains: competition-level mathematical problem solving (MATH) and code generation (APPS).

Empirical results show that applying ReST^EM to PaLM 2 models of different sizes yields significant performance improvements on mathematical reasoning and code generation tasks. Models fine-tuned on model-generated synthetic data achieve larger gains than models trained on human-written data. Interestingly, performance degrades beyond a certain number of ReST^EM iterations, indicating potential overfitting on the small number of training problems.

In addition, models fine-tuned with ReST^EM improve the pass@k metric and majority-voting performance. These fine-tuned models also show gains on related but held-out benchmarks, including math (GSM8K and the Hungarian high-school finals), coding (HumanEval), and Big-Bench Hard tasks.

In summary, the results of this paper show that self-training with feedback is a promising method to reduce reliance on human data.

Expectation-Maximization (EM) for Reinforced Self-Training

First, building on prior work by Dayan and Hinton, the study casts EM-based reinforcement learning in terms of a language model. Specifically, it first defines a binary optimality variable O such that p(O = 1 | x, y) ∝ f(r(x, y)) for some non-decreasing function f: ℝ → ℝ⁺. Maximizing the log-likelihood of observing O = 1 (i.e., obtaining high reward) then gives:

$$\log p(O = 1; \theta) = \log \sum_{y} \pi_\theta(y \mid x)\, p(O = 1 \mid x, y) \quad (1)$$

However, the sum over output sequences y in the above equation is intractable. Therefore, instead of maximizing log p(O = 1; θ) directly, the paper maximizes its ELBO L(π_θ, q) with respect to the parameters θ and a variational distribution q(y | x). Specifically:

$$\log p(O = 1; \theta) \ge \mathcal{L}(\pi_\theta, q) = \mathbb{E}_{q(y \mid x)}\big[\log p(O = 1 \mid x, y)\big] - \mathrm{KL}\big[q(y \mid x) \,\|\, \pi_\theta(y \mid x)\big] \quad (2)$$

The EM algorithm in formula (2) alternates between E-step (Expectation) and M-step (Maximization).
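Concretely, the alternating updates in formula (2) can be written as follows (a sketch of the standard EM-for-RL updates in the Dayan–Hinton framing; the paper's exact parameterization may differ):

$$\text{E-step:}\quad q^{t+1}(y \mid x) \propto \pi_{\theta^t}(y \mid x)\, p(O = 1 \mid x, y)$$

$$\text{M-step:}\quad \theta^{t+1} = \arg\max_{\theta}\; \mathbb{E}_{q^{t+1}(y \mid x)}\big[\log \pi_{\theta}(y \mid x)\big]$$

The E-step tilts the current policy toward high-reward outputs, and the M-step fits the policy to that tilted distribution; with a binary reward, this reduces to supervised fine-tuning on the filtered samples.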

ReST^EM: Inspired by the EM framework, the paper next discusses a simplified version of the ReST method proposed by Gulcehre et al. For clarity, the paper calls this approach ReST^EM; it decouples data collection (E-step) from policy optimization (M-step) in the RL pipeline, as shown in Algorithm 1.


Generation (E-step): In this step, a dataset D_i is created by sampling output sequences y from the current policy π_θ. The inputs x are resampled from the original dataset D. The output sequences in D_i are then scored with the binary reward function r(x, y).

Improvement (M-step): In the i-th iteration, the new dataset D_i from the E-step is used to fine-tune the policy π_θ. Unlike Gulcehre et al., the researchers always fine-tune the base pretrained language model, to minimize task-specific overfitting and drift from the base model. For fine-tuning, they minimize the reward-weighted negative log-likelihood loss −E_{(x,y)∼D_i}[r(x, y) log π_θ(y | x)]. Once the policy is improved, a new dataset with better-quality samples can be generated again.
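The reward-weighted negative log-likelihood can be sketched numerically. The following is an illustrative toy, not the paper's training code: each sample is reduced to a (reward, model-probability) pair, and `reward_weighted_nll` is a hypothetical helper name. With a binary reward, zero-reward samples contribute nothing, so the objective reduces to ordinary maximum likelihood on the reward-1 samples.

```python
import math

def reward_weighted_nll(samples):
    """samples: list of (reward, model_prob_of_output) pairs.

    Returns the average of -r(x, y) * log pi_theta(y | x) over the batch.
    """
    losses = [-r * math.log(p) for r, p in samples]
    return sum(losses) / len(losses)

# Toy batch: the middle sample has reward 0 and drops out of the loss.
batch = [(1, 0.5), (0, 0.1), (1, 0.25)]
loss = reward_weighted_nll(batch)
```

Minimizing this loss with respect to the model parameters is exactly supervised fine-tuning weighted by the reward.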

Experiments and Analysis

The main goal of conducting experiments in this paper is to answer the following questions:

  1. How effective is ReST^EM compared to fine-tuning on human-generated data?
  2. How many iterations are needed for the best performance? How quickly does ReST^EM overfit the training set?
  3. How does ReST^EM affect pass@k and majority-voting performance?
  4. If model-generated data is used for fine-tuning on a specific task, does the improvement transfer to other tasks? When the fine-tuned model is evaluated on a broad range of tasks, does performance degrade compared to the base model?
  5. Approximately how much input data is needed to obtain most of the performance gains from ReST^EM? Is one iteration of ReST^EM enough?

The experiments used PaLM 2 models accessed through public APIs on Google Cloud, including PaLM 2-S (Bison), PaLM 2-S* (Codey), and PaLM 2-L (Unicorn). Training data came from the MATH and APPS datasets.

Figures 2 and 3 show the performance of ReST^EM trained on the MATH and APPS datasets, respectively. MATH benefits from multiple iterations of ReST^EM, both in performance on the MATH test set and in transfer to GSM8K. For APPS, by contrast, most of the gains come from the first iteration, and further iterations degrade performance on both APPS and HumanEval.


The gap between training and test performance: Figure 4 shows that while training-set performance increases linearly with the number of ReST^EM iterations, test-set performance does not. For MATH, little improvement in test performance is observed after the first iteration, whereas for APPS performance regresses in the second iteration. The study speculates that this regression may be due to overfitting; since the APPS dataset is about one-third the size of the MATH dataset, it is more susceptible to this problem.


Figure 5 shows the performance of the PaLM 2-L model on the pass@k metric. The results show that the model fine-tuned with ReST^EM is stronger for all values of k, with the performance gap generally largest at k = 1.
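For context, pass@k is the probability that at least one of k sampled solutions is correct, and it is commonly estimated with the standard unbiased estimator from the Codex evaluation methodology (given n samples per problem, of which c pass); whether the paper uses exactly this estimator is an assumption here.

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased estimate of P(at least 1 of k samples is correct),
    given that c of the n drawn samples are correct."""
    if n - c < k:
        # Every size-k subset must contain a correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples with 2 correct: pass@1 is exactly c / n = 0.2
result = pass_at_k(10, 2, 1)
```

Computing the estimator per problem and averaging avoids the high variance of naively checking only k samples.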
