Another revolution in reinforcement learning! DeepMind proposes 'algorithm distillation': an explorable pre-trained reinforcement learning Transformer

In current sequence modeling tasks, the Transformer is arguably the most powerful neural network architecture, and a pre-trained Transformer can adapt to different downstream tasks by conditioning on prompts or through in-context learning.

The generalization ability of large-scale pre-trained Transformer models has been verified in multiple fields, such as text completion, language understanding, and image generation.


Since last year, related work has shown that by treating offline reinforcement learning (offline RL) as a sequence prediction problem, a model can learn policies from offline data.

But current methods either learn a policy from data that contains no learning progress (for example, by distilling a fixed expert policy), or learn from data that does contain learning (such as an agent's replay buffer) but with a context too small to capture policy improvement.


DeepMind researchers observed that, in principle, the sequential nature of learning in reinforcement learning training means the learning process itself can be modeled as a "causal sequence prediction problem".

Specifically, if a Transformer's context is long enough to include the policy improvements produced by learning updates, it should be able to represent not only a fixed policy but also a policy improvement operator, by attending to the states, actions, and rewards of previous episodes.

This also makes it technically feasible to distill any RL algorithm into a sufficiently powerful sequence model via imitation learning, turning it into an in-context RL algorithm.

Based on this, DeepMind proposed Algorithm Distillation (AD), which distills reinforcement learning algorithms into neural networks by modeling them with a causal sequence model.


Paper link: https://arxiv.org/pdf/2210.14215.pdf

Algorithm Distillation treats learning to reinforcement-learn as a cross-episode sequence prediction problem: a learning-history dataset is generated by a source RL algorithm, and a causal Transformer is then trained to autoregressively predict actions with the learning history as its context.
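As a rough illustration, the training objective could look like the minimal sketch below. It assumes each learning history is stored as flat, time-ordered observation, action, and reward arrays, and `model` stands in for any decoder-only causal Transformer; its input interface here is an assumption, not the paper's implementation.

```python
# Minimal sketch of the AD training objective (an assumption-laden illustration,
# not the official implementation). The loss is plain imitation: predict the
# source agent's action at every step, conditioned on everything that came
# before it in the cross-episodic learning history.
import torch.nn.functional as F

def ad_training_step(model, optimizer, batch):
    # batch["obs"]:     (B, T, obs_dim) observations spanning many episodes
    # batch["actions"]: (B, T)          discrete actions taken by the source RL agent
    # batch["rewards"]: (B, T)          rewards received along the way
    logits = model(batch["obs"], batch["actions"], batch["rewards"])  # (B, T, num_actions)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # autoregressive next-action prediction
        batch["actions"].reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that the loss itself is ordinary behavior cloning; what distinguishes AD is that the context spans a learning history rather than episodes from a single fixed policy.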

Unlike sequential policy prediction architectures that distill post-learning or expert sequences, AD is able to improve its policy entirely in context, without updating its network parameters.

  • The Transformer collects its own data and maximizes rewards on new tasks;
  • No prompting or fine-tuning is required;
  • With its weights frozen, the Transformer can explore, exploit, and maximize returns in context, whereas expert distillation methods such as Gato can neither explore nor maximize returns.

The experimental results show that AD can perform reinforcement learning in a variety of environments with sparse rewards, combinatorial task structure, and pixel-based observations, and that AD learns more data-efficiently than the RL algorithm that generated the source data.

AD is also the first method to demonstrate in-context reinforcement learning through sequence modeling of offline data with an imitation loss.

Algorithm Distillation

In 2021, researchers first showed that a Transformer can learn single-task policies from offline RL data through imitation learning; this was subsequently extended to extracting multi-task policies in both same-domain and cross-domain settings.

These works suggest a promising paradigm for extracting general multi-task policies: first collect a large and diverse dataset of environment interactions, and then extract a policy from that data via sequence modeling.

Learning policies from offline RL data through imitation learning is also called offline policy distillation, or simply Policy Distillation (PD).

Although the idea behind PD is simple and easy to scale, it has a major flaw: the resulting policy does not improve from additional interaction with the environment.

For example, the Multi-Game Decision Transformer (MGDT) learned a return-conditioned policy that can play a large number of Atari games, and Gato learned a policy for solving tasks across different environments, but neither approach can improve its policy through trial and error.

MGDT adapts the Transformer to new tasks by fine-tuning the model's weights, while Gato requires expert demonstration prompts to adapt to new tasks.

In short, Policy Distillation methods learn policies rather than reinforcement learning algorithms.

The researchers hypothesized that the reason Policy Distillation cannot improve through trial and error is that it is trained on data that does not show learning progress.

Algorithm Distillation (AD) is a method for learning an in-context policy improvement operator by optimizing a causal sequence prediction loss over the learning histories of an RL algorithm.


AD consists of two components:

1. Generate a large multi-task dataset by saving the training histories of an RL algorithm on many separate tasks (sketched in code after this list);

2. A Transformer causally models actions, using the preceding learning history as its context.
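A rough sketch of the first component follows. `sample_task` and `run_source_rl_algorithm` are hypothetical helpers standing in for whatever task distribution and source RL algorithm one actually uses; the key point is that the entire training run is saved, not just the final policy.

```python
# Rough sketch of component 1: building the multi-task learning-history dataset.
# `sample_task` and `run_source_rl_algorithm` are hypothetical placeholders; the
# latter trains a fresh agent from scratch on one task and returns every
# (obs, action, reward) transition it generated, in training order.
def collect_learning_histories(num_tasks, train_steps):
    dataset = []
    for _ in range(num_tasks):
        task = sample_task()                                # one distinct task per history
        history = run_source_rl_algorithm(task, train_steps)
        dataset.append(history)                             # keep the whole run, not just the end
    return dataset
```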

Because the policy keeps improving throughout the training of the source RL algorithm, AD has to learn an improvement operator in order to accurately model the actions at any given point in the training history.

Most importantly, the Transformer's context must be large enough (i.e., span multiple episodes) to capture the improvement present in the training data.
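One way to picture this requirement, assuming fixed-length episodes, is that each training example is a slice of a learning history long enough to contain several whole episodes; the helper below is illustrative only.

```python
# Illustrative only: sample a training context that spans several whole episodes,
# so the policy improvement between episodes is visible inside one context window.
import random

def sample_cross_episodic_context(history, episode_length, episodes_per_context):
    context_len = episode_length * episodes_per_context   # must cover multiple episodes
    start = random.randint(0, len(history) - context_len)
    return history[start:start + context_len]
```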


In the experiments, to probe AD's in-context RL capabilities, the researchers focused on environments that cannot be solved by zero-shot generalization after pre-training: each environment must support many tasks, the model cannot easily infer the task's solution from the observations alone, and episodes must be short enough that a causal Transformer can be trained across episodes.


The experimental results in four environments, Adversarial Bandit, Dark Room, Dark Key-to-Door, and DMLab Watermaze, show that by imitating gradient-based RL algorithms with a causal Transformer whose context is large enough, AD can reinforcement-learn new tasks entirely in context.
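To make the task setup concrete, here is a simplified toy version of a Dark Room-style environment; the grid size, start position, and episode length are illustrative assumptions rather than the paper's exact settings.

```python
# Toy Dark Room-style environment (illustrative; not the paper's exact setup).
# The observation is only the agent's (x, y) position, the goal location is hidden
# and defines the task, and the reward is sparse: 1 only on the goal square.
import random

class DarkRoom:
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay

    def __init__(self, size=9, episode_len=20):
        self.size, self.episode_len = size, episode_len
        self.goal = (random.randrange(size), random.randrange(size))  # the hidden task

    def reset(self):
        self.pos, self.t = (self.size // 2, self.size // 2), 0
        return self.pos                                     # reveals nothing about the goal

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos, self.t = (x, y), self.t + 1
        reward = 1.0 if self.pos == self.goal else 0.0      # sparse reward
        done = self.t >= self.episode_len
        return self.pos, reward, done
```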


AD can perform in-context exploration, temporal credit assignment, and generalization, and the algorithm AD learns is more data-efficient than the source algorithm that generated the Transformer's training data.

PPT explanation

To make the paper easier to understand, Michael Laskin, one of its authors, posted a slide-by-slide explanation on Twitter.


Experiments on Algorithm Distillation show that a Transformer can improve itself through trial and error without weight updates, prompting, or fine-tuning. A single Transformer can collect its own data and maximize rewards on new tasks.

Although many successful models have shown that Transformers can learn in context, it had not yet been demonstrated that a Transformer can reinforcement-learn in context.

To adapt to new tasks, developers either need to manually specify a prompt or need to fine-tune the model.

Wouldn't it be great if a Transformer could adapt to new tasks out of the box, by doing reinforcement learning itself?

But Decision Transformers and Gato can only learn policies from offline data and cannot automatically improve through trial and error.


A Transformer pre-trained with the Algorithm Distillation (AD) method can perform reinforcement learning in context.


First, train multiple copies of a reinforcement learning algorithm to solve different tasks and save the learning histories.


Once the learning-history dataset has been collected, a Transformer can be trained to predict actions given the preceding learning history.

Since the policy improves over the course of each history, accurately predicting actions forces the Transformer to model policy improvement.


The whole process is that simple: the Transformer is trained only by imitating actions. There are no Q-values as in common reinforcement learning models, no long observation-action-reward sequences beyond the history itself, and no return conditioning as in DTs.

In-context reinforcement learning adds no extra overhead; the model is then evaluated by checking whether AD can maximize rewards on new tasks.

While the Transformer explores, exploits, and maximizes returns in context, its weights stay frozen!
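A hedged sketch of what this evaluation loop could look like: the weights never change, and the only thing that evolves across episodes is the history held in the model's context. `model.act` is an assumed interface, and the optional `context` argument lets a demonstration be used as a prompt, as discussed further below.

```python
# Hedged sketch of in-context evaluation: weights are frozen; only the rolling
# (obs, action, reward) history in the context changes across episodes.
# `model.act` is an assumed interface, not the paper's actual API.
import torch

@torch.no_grad()
def evaluate_in_context(model, env, num_episodes, max_context, context=None):
    context = list(context or [])               # optionally seed with a prompt/demo
    returns = []
    for _ in range(num_episodes):
        obs, done, ep_return = env.reset(), False, 0.0
        while not done:
            action = model.act(context, obs)    # condition on everything seen so far
            next_obs, reward, done = env.step(action)
            context.append((obs, action, reward))
            context = context[-max_context:]    # keep only the most recent transitions
            obs, ep_return = next_obs, ep_return + reward
        returns.append(ep_return)               # returns should rise despite frozen weights
    return returns
```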

Expert Distillation (the baseline most similar to Gato), by contrast, can neither explore nor maximize returns.


AD can distill any RL algorithm; the researchers tried UCB, DQN, and A2C. An interesting finding is that the in-context RL algorithm AD learns is more data-efficient than the source algorithm.


You can also prompt the model with suboptimal demos, and it will automatically improve the policy until it reaches the optimal solution!
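In code, prompting amounts to pre-filling the frozen model's context with the demonstration before rolling out, reusing the evaluation sketch above; `load_suboptimal_demo` is a hypothetical helper that returns (obs, action, reward) transitions from a mediocre policy.

```python
# Illustrative usage, reusing evaluate_in_context from the sketch above.
# `load_suboptimal_demo` is a hypothetical helper returning (obs, action, reward)
# transitions produced by a mediocre policy on the evaluation task.
demo = load_suboptimal_demo(env)
returns = evaluate_in_context(model, env, num_episodes=100,
                              max_context=1_000, context=demo)
# AD is reported to keep improving beyond the demo's performance, whereas expert
# distillation (ED) stays roughly at the demo's level.
```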

Expert Distillation (ED), by contrast, can only maintain the performance of the suboptimal demo.


In-context RL only emerges when the Transformer's context is long enough to span multiple episodes.

AD needs a history long enough to identify the task and perform effective policy improvement.


Through experiments, the researchers came to the following conclusions:

  • Transformers can perform RL in context
  • The in-context RL algorithm learned by AD is more data-efficient than the gradient-based source RL algorithm
  • AD can improve suboptimal policies
  • In-context reinforcement learning emerges from long-context imitation learning
