Google DeepMind: Combining large models with reinforcement learning to create an intelligent brain for robots to perceive the world

When developing robot learning methods, if large and diverse data sets can be integrated and combined with powerful expressive models (such as Transformer), then it is expected to develop generalization capabilities and widely applicable strategies so that robots can learn to handle a variety of different tasks well. For example, these strategies allow robots to follow natural language instructions, perform multi-stage behaviors, adapt to various environments and goals, and even apply to different robot forms.

However, the powerful models that have recently appeared in the field of robot learning are all trained with supervised learning. As a result, the performance of the learned policy is limited by the quality of the demonstrations that human operators can provide. This limitation is undesirable for two reasons.

  • First, we want robotic systems to be more proficient than human teleoperators, leveraging the full potential of the hardware to complete tasks quickly, smoothly, and reliably.
  • Second, we hope that the robot system will be better at automatically accumulating experience, rather than relying entirely on high-quality demonstrations.

In principle, reinforcement learning can provide these two abilities at the same time.

There have been some promising recent developments showing that large-scale robotic reinforcement learning can succeed in a variety of settings, such as grasping and stacking, learning heterogeneous tasks with human-specified rewards, learning multi-task policies, learning goal-conditioned policies, and robot navigation. However, research shows that effectively instantiating reinforcement learning at scale with powerful models such as Transformers is considerably more difficult.

Google DeepMind recently proposed Q-Transformer, which aims to combine large-scale robot learning on diverse real-world datasets with a modern policy architecture built on powerful Transformers.


  • Paper: https://q-transformer.github.io/assets/q-transformer.pdf
  • Project: https://q-transformer.github.io/

Although using a Transformer as a drop-in replacement for existing architectures (such as ResNets or smaller convolutional neural networks) is conceptually simple, designing a scheme that effectively exploits this architecture is very difficult. Large models are only effective when they can draw on large and diverse datasets; small, narrow models neither require nor benefit from this capacity.

Although there have been previous studies using simulated data to create such datasets, the most representative data comes from the real world.

Therefore, DeepMind stated that the focus of this research is to utilize Transformers through offline reinforcement learning and integrate previously collected large datasets.

Offline reinforcement learning methods are trained on previously collected data, with the goal of deriving the most effective possible policy from a given dataset. The dataset can also be augmented with additional automatically collected data, but training remains separate from data collection, which provides an additional workflow advantage for large-scale robotic applications.

In terms of using the Transformer model for reinforcement learning, another big problem is designing a reinforcement learning system that can effectively train this model. Effective offline reinforcement learning methods usually perform Q-function estimation through temporal-difference updates. Since Transformers model discrete token sequences, the Q-function estimation problem can be converted into a discrete-token sequence modeling problem, with an appropriate loss function designed for each token in the sequence.

The method adopted by DeepMind is a per-dimension discretization scheme, which avoids an exponential blow-up in the number of actions. Specifically, each dimension of the action space is treated as a separate time step in reinforcement learning, with different discretization bins corresponding to different actions. This per-dimension scheme allows the use of a simple discrete-action Q-learning method with a conservative regularizer to handle distribution shift.
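The per-dimension discretization described above can be sketched as follows. This is an illustrative helper, not the paper's implementation: the function names, bin count of 256, and the bin-center decoding are assumptions.

```python
import numpy as np

def discretize_action(action, low, high, num_bins=256):
    """Map each continuous action dimension to an integer bin index,
    producing one discrete token per dimension (hypothetical helper)."""
    action = np.asarray(action, dtype=np.float64)
    # Normalize each dimension to [0, 1], then scale to bin indices.
    frac = (action - low) / (high - low)
    return np.clip((frac * num_bins).astype(int), 0, num_bins - 1)

def undiscretize_action(bins, low, high, num_bins=256):
    """Recover the continuous action at each bin's center."""
    return low + (bins + 0.5) / num_bins * (high - low)

# Example: a 3-dimensional action becomes a sequence of 3 tokens,
# so the Transformer never has to enumerate the joint action space.
low, high = np.array([-1.0, -1.0, 0.0]), np.array([1.0, 1.0, 1.0])
tokens = discretize_action([0.0, 0.5, 0.25], low, high)  # one token per dimension
```

With 256 bins and a 3-dimensional action, the model predicts over 3 × 256 values instead of 256³ joint actions, which is the exponential blow-up the scheme avoids.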

DeepMind proposes a specialized regularizer that minimizes the values of untaken actions. The research shows that this method can learn from narrow, demonstration-like data as well as from broader data containing exploration noise.
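A minimal sketch of such a conservative loss for one action dimension: the temporal-difference error is computed on the bin actually taken in the dataset, while the Q-values of all other (unseen) bins are pushed toward zero. The function name, the squared-error form, and the weight `alpha` are assumptions for illustration.

```python
import numpy as np

def conservative_q_loss(q_all, dataset_action, td_target, alpha=1.0):
    """TD loss on the dataset action plus a conservative term that
    regularizes the Q-values of all *unseen* bins toward zero.

    q_all          -- (batch, num_bins) predicted Q-values for one dimension
    dataset_action -- (batch,) bin index taken in the dataset
    td_target      -- (batch,) Bellman target for that bin
    """
    batch = np.arange(q_all.shape[0])
    q_taken = q_all[batch, dataset_action]
    td_loss = np.mean((q_taken - td_target) ** 2)

    # Conservative regularizer: mean squared Q-value over unseen bins,
    # i.e. their values are driven toward the minimal return of 0.
    mask = np.ones_like(q_all, dtype=bool)
    mask[batch, dataset_action] = False
    reg = np.mean(q_all[mask] ** 2)
    return td_loss + alpha * reg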

Finally, they also used a hybrid update mechanism that combines Monte Carlo and n-step returns with temporal-difference backups. The results show that this approach improves the performance of Transformer-based offline reinforcement learning on large-scale robot learning problems.

The main contribution of this research is Q-Transformer, a method for offline robotic reinforcement learning based on the Transformer architecture. Q-Transformer tokenizes Q-values by dimension and has been successfully applied to large-scale and diverse robotics datasets, including real-world data. Figure 1 shows the components of Q-Transformer.

[Figure 1: The components of Q-Transformer]

DeepMind conducted experimental evaluations, including both simulation experiments and large-scale real-world experiments, aiming at rigorous comparison and practical validation. They used large-scale text-conditioned multi-task policy learning to verify the effectiveness of Q-Transformer.

In real-world experiments, the dataset they used contained 38,000 successful demonstrations and 20,000 failed, automatically collected episodes, gathered by 13 robots on more than 700 tasks. Q-Transformer outperforms previously proposed architectures for large-scale robotic reinforcement learning, as well as Transformer-based models such as the previously proposed Decision Transformer.

Method overview

In order to use a Transformer for Q-learning, DeepMind discretizes the action space and processes it autoregressively.

To learn a Q-function with TD learning, the classic method is based on the Bellman update rule:

Q(s_t, a_t) ← R(s_t, a_t) + γ · max_{a_{t+1}} Q(s_{t+1}, a_{t+1})

The researchers modified the Bellman update so that it can be performed per action dimension, by converting the problem's original MDP into an MDP in which each action dimension is treated as a separate Q-learning step.

Specifically, for an action space with d_A dimensions, the new Bellman update rule can be expressed as:

Q(s_t, a_t^{1:i-1}, a_t^i) ← max_{a_t^{i+1}} Q(s_t, a_t^{1:i}, a_t^{i+1})   for intermediate dimensions i < d_A

Q(s_t, a_t^{1:d_A-1}, a_t^{d_A}) ← R(s_t, a_t) + γ · max_{a_{t+1}^1} Q(s_{t+1}, a_{t+1}^1)   for the last dimension

This means that for each intermediate action dimension, we maximize over the next action dimension given the same state, while for the last action dimension, we use the first action dimension of the next state. This decomposition keeps the maximization in the Bellman update tractable, while also ensuring that the original MDP problem can still be solved.
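The per-dimension backup above can be sketched for a single transition. This is an illustrative reading of the decomposition, not the paper's training code; the function and argument names are assumptions.

```python
import numpy as np

def per_dimension_targets(q_next_dims, q_first_dim_next_state, reward, gamma=0.98):
    """Bellman targets for one transition in the per-dimension MDP.

    q_next_dims            -- one array per intermediate dimension: Q-values
                              over the bins of the *next* action dimension,
                              conditioned on the same state and the dataset's
                              earlier action dimensions
    q_first_dim_next_state -- Q-values over the bins of the first action
                              dimension at the next state s'
    """
    targets = []
    # Intermediate dimensions: maximize over the next action dimension
    # within the same time step (no reward, no discount yet).
    for q_next in q_next_dims:
        targets.append(np.max(q_next))
    # Last dimension: the usual Bellman backup into the next state's
    # first action dimension.
    targets.append(reward + gamma * np.max(q_first_dim_next_state))
    return targets
```

Only the last dimension receives the reward and the discount, so discounting is applied once per environment step rather than once per action dimension.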


To account for distribution shift during offline learning, DeepMind also introduced a simple regularization technique: minimizing the values of unseen actions.

To speed up learning, they also used Monte Carlo returns. This approach uses not only the return-to-go of a given episode but also n-step returns that can skip the per-dimension maximization.
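A minimal sketch of how a Monte Carlo return can be combined with an n-step bootstrapped target: the episode's observed return-to-go serves as a lower bound on the target when rewards are non-negative (as with sparse success rewards). The function and argument names are assumptions, and the paper's exact estimator operates per action dimension.

```python
import numpy as np

def hybrid_target(rewards, q_bootstrap, t, n, gamma=0.98):
    """Target for step t: the max of an n-step bootstrapped return and
    the Monte Carlo return-to-go (sketch; assumes rewards >= 0).

    rewards     -- full list of rewards for one episode
    q_bootstrap -- max_a Q(s_{t+n}, a), the bootstrapped value n steps ahead
    """
    # n-step return: n observed rewards, then bootstrap from the Q-function.
    n_step = sum(gamma ** k * rewards[t + k] for k in range(n))
    n_step += gamma ** n * q_bootstrap
    # Monte Carlo return-to-go over the remainder of the episode.
    mc = sum(gamma ** k * r for k, r in enumerate(rewards[t:]))
    return max(n_step, mc)
```

When the Q-function underestimates (common early in training), the Monte Carlo term supplies a better target immediately, which is one intuition for why the hybrid update speeds up learning.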

Experimental results

In experiments, DeepMind evaluated Q-Transformer on a range of real-world tasks, while limiting the data to only 100 human demonstrations per task.

In addition to the demonstrations, they added automatically collected failure episodes to build the dataset. The resulting dataset contains 38,000 positive examples from demonstrations and 20,000 automatically collected negative examples.


Compared with baseline methods such as RT-1, IQL, and Decision Transformer (DT), Q-Transformer can effectively leverage the automatically collected episodes to significantly improve skills such as picking objects up from a drawer and placing them, moving objects near a target, and opening and closing drawers.

The researchers also tested the newly proposed method on a difficult simulated object-retrieval task, in which only about 8% of the data were positive examples; the rest were noisy negative examples.

On this task, Q-learning methods such as QT-Opt, IQL, AW-Opt, and Q-Transformer generally perform better, because they can leverage dynamic programming to learn the policy and exploit the negative examples.


Based on this object-retrieval task, the researchers ran ablation experiments and found that both the conservative regularizer and Monte Carlo returns are important for maintaining performance. Performance degrades significantly when switching to a softmax regularizer, because it restricts the policy too tightly to the data distribution. This shows that the regularizer DeepMind selected is better suited to this task.


Their ablation of n-step returns found that although they may introduce bias, they achieve comparably high performance in significantly fewer gradient steps.


The researchers also ran Q-Transformer on larger datasets: they expanded the number of positive examples to 115,000 and the number of negative examples to 185,000, for a dataset of 300,000 episodes. Even with this large dataset, Q-Transformer is still able to learn, and it performs even better than the RT-1 behavior-cloning baseline.


Finally, they used the Q-function trained by Q-Transformer as an affordance model, combined with a language planner, similar to SayCan.
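The SayCan-style combination can be sketched as follows: the language planner scores how useful each skill would be for the instruction, the Q-function's affordance estimate scores how likely the skill is to succeed in the current scene, and the system executes the skill with the best product. The function name and example skill names are assumptions for illustration.

```python
def rank_skills(llm_scores, affordances):
    """SayCan-style skill selection (sketch): multiply the language
    planner's usefulness score for each skill by the Q-function's
    affordance estimate, then pick the highest-scoring skill."""
    combined = {skill: llm_scores[skill] * affordances[skill] for skill in llm_scores}
    return max(combined, key=combined.get)
```

For example, a skill the language model favors can still be rejected if the affordance model reports it is infeasible in the current scene, which is exactly the role the Q-Transformer Q-function plays here.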


Q-Transformer's affordance estimation outperforms the previous Q-function trained with QT-Opt, and relabeling unsampled tasks as negative examples of the current task during training improves it further. Since Q-Transformer does not require the sim-to-real training that QT-Opt used, it is easier to apply when a suitable simulation is unavailable.

To test the complete planning-and-execution system, they experimented with using Q-Transformer for both affordance estimation and actual policy execution, and the results show that it outperforms the previous combination of QT-Opt and RT-1.


From the examples of task affordance values for given images, it can be seen that Q-Transformer provides high-quality affordance values in the downstream planning-and-execution framework.

See the original paper for more details.


Statement: This article is reproduced from 51CTO.COM. If there is any infringement, please contact admin@php.cn for deletion.