It only takes a few demonstrations to align large models. The DITTO proposed by Yang Diyi's team is so efficient.

Human education methods are also suitable for large models.

When raising children, people have long emphasized an important method: leading by example. That is, be a model for children to imitate and learn from, rather than simply telling them what to do. When training a large language model (LLM), we may be able to use the same method: demonstrate to the model.

Recently, Yang Diyi's team at Stanford University proposed DITTO, a new framework that can align an LLM with a specific setting using only a small number of demonstrations (user-provided examples of desired behavior). These examples can come from a user's existing interaction logs or from directly editing the LLM's outputs, allowing the model to efficiently understand and align with the preferences of different users and tasks.


  • Paper title: Show, Don't Tell: Aligning Language Models with Demonstrated Feedback
  • Paper address: https://arxiv.org/pdf/2406.00888
Given a small number of demonstrations (fewer than 10), DITTO automatically constructs a dataset containing many preference comparisons (a process the authors call scaffolding) by treating the user's demonstrations as preferred over outputs from the original LLM and from its earlier iterations. The demonstrations and model outputs are combined into data pairs to obtain an augmented dataset, and the language model can then be updated with an alignment algorithm such as DPO.

Additionally, the team found that DITTO can be viewed as an online imitation learning algorithm, in which data sampled from the LLM is used to distinguish expert behavior from the policy's own. From this perspective, the team showed that DITTO can extrapolate beyond the expert's performance.
The team also verified DITTO's effectiveness through experiments.

DITTO Framework

To align an LLM, previous methods often require thousands of comparison pairs, whereas DITTO can modify the model's behavior with only a few demonstrations. This low-cost, rapid adaptation rests on the team's core insight: online comparison data is readily available through demonstrations.


Symbols and background
The language model can be viewed as a policy π(y|x) that defines a distribution over completions y given a prompt x. The goal of RLHF is to train the LLM to maximize a reward function r(x, y) that scores the quality of a prompt-completion pair (x, y). Typically, a KL-divergence penalty is added to keep the updated model from drifting too far from the base language model π_ref. Overall, the RLHF optimization objective is:

\max_{\pi} \; \mathbb{E}_{x \sim p,\, y \sim \pi(\cdot \mid x)}\big[ r(x, y) \big] \;-\; \alpha\, \mathbb{D}_{\mathrm{KL}}\big( \pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big) \qquad (1)

This maximizes the expected reward over the prompt distribution p, subject to a KL constraint weighted by α. Typically, this objective is optimized with a comparison dataset of the form {(x, y^w, y^l)}, where the "winning" completion y^w is preferred to the "losing" completion y^l, written y^w ⪰ y^l.
In addition, denote the small expert demonstration dataset as D_E, and assume these demonstrations are generated by an expert policy π_E that maximizes the expected reward. DITTO generates comparison data directly from language model outputs and expert demonstrations. That is, unlike synthetic-data generation paradigms, DITTO does not require a model that already performs well on the given task.

Key Idea
DITTO's key insight is that the language model itself, together with expert demonstrations, can yield a comparison dataset for alignment, eliminating the need to collect large amounts of pairwise preference data. This produces a contrastive-style objective in which expert demonstrations serve as positive examples.
Generating comparisons. Suppose we sample a completion y^E ∼ π_E(·|x) from the expert policy. Samples drawn from any other policy π can then be assumed to have reward less than or equal to that of samples from π_E. Based on this observation, the team constructs comparison data (x, y^E, y^π) with y^E ⪰ y^π. Although such comparisons are derived from policies rather than from individual samples, prior work has demonstrated the effectiveness of this approach. A natural approach for DITTO is to use this dataset with a readily available RLHF algorithm to optimize (1). Doing so raises the probability of expert responses while lowering the probability of current model samples, unlike standard fine-tuning, which only does the former. Crucially, by using samples from π, an unbounded preference dataset can be built from a small number of demonstrations. However, the team found it can be done even better by taking the temporal aspect of the learning process into account.
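The comparison-generation step just described can be sketched in a few lines of Python. This is a toy illustration under assumed names (`sample_policy` and `make_comparisons` are invented for this sketch, not taken from the paper's code); in the real method, negatives are completions sampled from the LLM policy itself.

```python
import random

def sample_policy(prompt, rng):
    # Stand-in for sampling a completion y ~ pi(.|x) from the current LLM.
    return f"model completion for {prompt!r} ({rng.random():.3f})"

def make_comparisons(expert_demos, rng, m=3):
    """expert_demos: list of (prompt, expert_completion) pairs, i.e. D_E.
    Returns triples (x, y_win, y_lose): the demonstration always wins."""
    comparisons = []
    for prompt, y_expert in expert_demos:
        for _ in range(m):  # draw M negative samples per demonstration
            y_policy = sample_policy(prompt, rng)
            comparisons.append((prompt, y_expert, y_policy))
    return comparisons

rng = random.Random(0)
demos = [("write a greeting", "Hello there!"), ("sign off", "Best, A.")]
pairs = make_comparisons(demos, rng)
print(len(pairs))  # 2 demonstrations x 3 negatives = 6 comparisons
```

Even two demonstrations yield six comparison pairs here, which is the scaffolding effect the paper describes: a handful of demonstrations fans out into a much larger preference dataset.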
From comparisons to rankings. Using only comparisons between the expert and a single policy π may not suffice for good performance. Doing so only lowers the likelihood of that particular π, leading to overfitting, the same problem that plagues SFT on little data. The team proposes also considering the data generated by all policies learned over time during RLHF, analogous to replay in reinforcement learning.
Let the initial policy in the first iteration be π_0, and obtain a dataset D_0 by sampling from it. A comparison dataset for RLHF can then be generated from it, denoted D_E ⪰ D_0. Using these derived comparisons, π_0 is updated to obtain π_1. By definition, D_E ⪰ D_1 also holds. After that, π_1 is used to generate more comparison data, and so on, continually producing increasingly diverse comparison data from all previous policies. The team calls these "replay comparisons."

Although this method makes sense in theory, overfitting can still occur when D_E is small. However, if we assume the policy improves after each iteration, comparisons between the policies themselves can also be considered during training. Unlike comparisons with the expert, there is no guarantee that the policy is better after each iteration, but the team found that the model does keep improving overall, perhaps because both reward modeling and objective (1) are convex. In this way, comparison data can be sampled according to the following ranking:

D_E ⪰ D_t ⪰ D_{t-1} ⪰ ⋯ ⪰ D_1 ⪰ D_0 \qquad (2)

By adding these "inter-policy" and "replay" comparisons, the likelihood of earlier samples (such as those in D_1) is pushed down further than that of later ones (such as those in D_t), which smooths the implicit reward landscape. In practice, the team not only uses comparisons against the expert but also aggregates some comparisons between these policy iterates.
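The full ranking, expert over later iterates over earlier ones, can likewise be sketched. The function below is an illustrative assumption of how such ranked pairs might be enumerated (it is not the authors' implementation, and it pairs datasets elementwise only for simplicity):

```python
def ranked_comparisons(expert_demos, policy_datasets):
    """Enumerate comparison pairs implied by D_E >= D_t >= ... >= D_0.

    expert_demos: list of (prompt, completion) pairs (D_E).
    policy_datasets: [D_0, ..., D_t]; D_i holds (prompt, completion)
    pairs sampled from policy iterate pi_i, aligned with expert_demos.
    """
    comparisons = []
    # Expert beats every policy iterate ("online" + "replay" comparisons).
    for dataset in policy_datasets:
        for (x, y_e), (_, y_pi) in zip(expert_demos, dataset):
            comparisons.append((x, y_e, y_pi))
    # Later iterates beat earlier ones ("inter-policy" comparisons).
    for i in range(len(policy_datasets)):
        for j in range(i):
            for (x, y_i), (_, y_j) in zip(policy_datasets[i],
                                          policy_datasets[j]):
                comparisons.append((x, y_i, y_j))
    return comparisons

demos = [("p1", "expert1"), ("p2", "expert2")]
iterates = [[("p1", f"iter{t}_a"), ("p2", f"iter{t}_b")] for t in range(3)]
ranked = ranked_comparisons(demos, iterates)
print(len(ranked))  # 3 expert-vs-iterate sets + 3 inter-policy sets = 12 pairs
```

Note how the number of comparisons grows with every iteration even though the demonstration set stays fixed, which is exactly the replay effect described above.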
A practical algorithm. In practice, DITTO is an iterative process consisting of three simple components, as shown in Algorithm 1.


First, run supervised fine-tuning on the expert demonstration set for a limited number of gradient steps; call the result the initial policy π_0. Second, sample comparison data: during training, for each of the N demonstrations in D_E, a new dataset D_t is constructed by sampling M completions from π_t, and these are added to the ranking according to (2). When sampling comparisons from (2), each batch B is composed of 70% "online" comparisons D_E ⪰ D_t, 20% "replay" comparisons D_E ⪰ D_{i<t}, and the remainder "inter-policy" comparisons D_i ⪰ D_{j<i}. Third, the policy is updated with DPO, minimizing the loss:

\mathcal{L}(\pi_t; \pi_{\mathrm{ref}}) \;=\; -\,\mathbb{E}_{(x,\, y^w,\, y^l) \sim B}\Big[ \log \sigma\Big( \alpha \log \frac{\pi_t(y^w \mid x)}{\pi_{\mathrm{ref}}(y^w \mid x)} \;-\; \alpha \log \frac{\pi_t(y^l \mid x)}{\pi_{\mathrm{ref}}(y^l \mid x)} \Big) \Big]

where σ is the logistic function from the Bradley-Terry preference model. During each update, the reference model is kept fixed at the SFT policy to avoid deviating too far from the initialization.
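The DPO-style update DITTO performs can be sanity-checked numerically. The sketch below uses plain floats standing in for the model's log-probabilities (the values are made up for illustration, not taken from the paper):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, alpha=0.1):
    """-log sigma of the implicit-reward margin alpha * log(pi / pi_ref)."""
    margin = alpha * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigma(margin)

# When the policy and reference agree, the margin is 0 and the loss is log 2.
print(round(dpo_loss(-2.0, -2.0, -2.0, -2.0), 4))  # 0.6931
# When the policy prefers the winner more than pi_ref does, the loss shrinks.
print(round(dpo_loss(-1.0, -5.0, -3.0, -3.0), 4))
```

Minimizing this loss pushes the margin positive, i.e. it increases the policy's preference for the winning (expert) completion relative to the reference model.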
Deriving DITTO as Online Imitation Learning
DITTO can be derived from an online imitation learning perspective, in which expert demonstrations and online data are combined to learn the reward function and the policy simultaneously. Specifically, the policy player maximizes the expected reward J(π, r), while the reward player minimizes the loss min_r L(D^π, r) on the online dataset D^π. More concretely, the team instantiates this optimization problem using the policy objective in (1) and the standard reward-modeling loss:

\min_{r}\; \mathcal{L}(\mathcal{D}^{\pi}, r) \quad \text{s.t.} \quad \pi = \arg\max_{\pi'}\; \mathbb{E}_{x \sim p,\, y \sim \pi'}\big[r(x, y)\big] - \alpha\, \mathbb{D}_{\mathrm{KL}}\big(\pi'(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x)\big) \qquad (3)

Deriving DITTO. The first step in simplifying (3) is to solve the inner policy maximization. Fortunately, based on prior work, the team notes that the KL-constrained policy objective J_KL has a closed-form solution of the form π*(y|x) = (1/Z(x)) π_ref(y|x) exp(r(x, y)/α), where Z(x) is the partition function that normalizes the distribution. Notably, this establishes a bijection between the policy and the reward function, which can be used to eliminate the inner optimization. Rearranging this solution, the reward function can be written as: r(x, y) = α log(π(y|x)/π_ref(y|x)) + α log Z(x).

In addition, prior work has shown that this reparameterization can represent arbitrary reward functions. Substituting it into (3) therefore changes the variable of optimization from r to π, yielding the DITTO objective: \mathcal{L}_{\mathrm{DITTO}}(\pi) = -\,\mathbb{E}_{(x,\, y^w,\, y^l) \sim \mathcal{D}^{\pi}}\big[ \log \sigma\big( \alpha \log \frac{\pi(y^w \mid x)}{\pi_{\mathrm{ref}}(y^w \mid x)} - \alpha \log \frac{\pi(y^l \mid x)}{\pi_{\mathrm{ref}}(y^l \mid x)} \big) \big].
Note that, as in DPO, the reward function is estimated implicitly here. The difference from DPO is that DITTO relies on an online preference dataset D^π.
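A step the text glosses over: when the reparameterized reward is plugged into the pairwise Bradley-Terry loss, the partition function Z(x) cancels, which is why it never needs to be computed:

```latex
r(x, y^w) - r(x, y^l)
= \left[\alpha \log \frac{\pi(y^w \mid x)}{\pi_{\mathrm{ref}}(y^w \mid x)} + \alpha \log Z(x)\right]
- \left[\alpha \log \frac{\pi(y^l \mid x)}{\pi_{\mathrm{ref}}(y^l \mid x)} + \alpha \log Z(x)\right]
= \alpha \log \frac{\pi(y^w \mid x)}{\pi_{\mathrm{ref}}(y^w \mid x)}
- \alpha \log \frac{\pi(y^l \mid x)}{\pi_{\mathrm{ref}}(y^l \mid x)}
```

Because both completions share the same prompt x, the two α log Z(x) terms are identical and drop out of the difference.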
Why is DITTO better than SFT alone?
One reason DITTO performs better is that, by generating comparison data, it uses far more data than SFT. Another is that online imitation learning methods can in some cases outperform the demonstrator, whereas SFT can only imitate the demonstrations.
Experimental results
The team also conducted empirical studies to validate DITTO's effectiveness. Please refer to the original paper for the experimental setup; here we focus on the results.
Results on static benchmarks
The evaluation on static benchmarks used GPT-4 as the judge, and the results are shown in Table 1.


On average, DITTO outperforms all other methods: a 71.67% average win rate on CMCC and an 82.50% average win rate on CCAT50, for a 77.09% overall average win rate. On CCAT50, DITTO fails to achieve an overall win for only one of the authors. On CMCC, DITTO beats every baseline for half of the authors, followed by few-shot prompting at 30%. Although SFT performs well, DITTO improves on its average win rate by 11.7%.
User study: testing generalization to natural tasks
Overall, the user-study results are consistent with those on the static benchmarks. DITTO outperforms the other methods in the rate at which its outputs are preferred as aligned with the demonstrations, as shown in Table 2: DITTO (72.1% win rate) > SFT (60.1%) > few-shot prompting (48.1%) > self-prompting (44.2%) > zero-shot prompting (25.0%).


When is DITTO useful?
Before using DITTO, users must weigh several prerequisites, from how many demonstrations they have to how many negative samples must be drawn from the language model. The team explored the impact of these decisions, focusing on CMCC because it covers more tasks than CCAT. They also analyzed the sample efficiency of demonstrations versus pairwise feedback.
Algorithm ablations
The team ran ablation studies on DITTO's components.
As shown in Figure 2 (left), increasing the number of DITTO iterations generally improves performance.


When the number of iterations is raised from 1 to 4, the GPT-4-judged win rate increases by 31.5%. The improvement is not monotonic: at iteration 2, performance dips slightly (-3.4%), likely because early iterations can produce noisier samples, hurting performance. In contrast, as shown in Figure 2 (middle), increasing the number of negative samples monotonically improves DITTO's performance. Moreover, as more negative samples are drawn, the variance of DITTO's performance shrinks.


In addition, as shown in Table 3, the ablation study found that removing any of DITTO's components degrades performance.

For example, dropping online iterative sampling lowers the win rate from 70.1% to 57.3% relative to full DITTO. And continuously updating π_ref during the online process causes a sharp drop in performance, from 70.1% to 45.8%; the team speculates that updating π_ref induces overfitting. Finally, Table 3 also shows the importance of the replay and inter-policy comparison data.
Sample efficiency
One of DITTO's main advantages is its sample efficiency. The team evaluated this, and the results are shown in Figure 2 (right; normalized win rates are again reported here).
First, DITTO's win rate climbs quickly at the start: as the number of demonstrations goes from 1 to 3, normalized performance improves noticeably with each increment (0% → 5% → 11.9%).
However, as demonstrations continue to accumulate, the gains diminish (11.9% → 15.39% going from 4 to 7 demonstrations), suggesting that DITTO's performance saturates as the number of demonstrations grows.
In addition, the team speculates that not only the number of demonstrations but also their quality affects DITTO's performance, though this is left for future work.
How do pairwise preferences compare with demonstrations?
A core assumption of DITTO is that its sample efficiency comes from demonstrations. In theory, if a user had a perfect set of demonstrations in mind, the same effect could be achieved by annotating many preference pairs.
The team ran a controlled experiment: using output samples from an instruction-tuned Mistral 7B, one of the authors who provided demonstrations for the user study also annotated 500 preference pairs.
In short, they built a pairwise preference dataset D_pref = {(x, y^i, y^j)}, where y^i ≻ y^j. They then computed win rates over 20 pairs of sampled outputs from two models: one trained with DITTO on 4 demonstrations, the other trained with DPO alone on {0...500} preference pairs.


When pairwise preference data is sampled only from π_ref, the generated pairs lie outside the demonstrated distribution: the pairwise preferences do not cover the behavior the user demonstrated (results for the base policy in Figure 3, blue). Even when π_ref is first fine-tuned on the user demonstrations, more than 500 preference pairs are still needed to match DITTO's performance (results for the demonstration-fine-tuned policy in Figure 3, orange).
