Why does robotics lag so far behind natural language processing (NLP), vision, and other fields of artificial intelligence? Among other difficulties, the shortage of data is the main reason. To address this problem, Google DeepMind and other institutions released the open-source Open X-Embodiment dataset and used it to train a more capable RT-X model.
In 2023, as large models continued to make breakthroughs, research on embodied intelligent robots that use large models as their "brains" has also advanced rapidly. A little over two months ago, Google DeepMind launched RT-2, the first vision-language-action (VLA) model for controlling robots. The model allows a robot not only to interpret complex human instructions, but also to understand the object in front of it (even one it has never seen before) and act accordingly. For example, ask the robot to pick up the "extinct animal" on the table, and it will grab the dinosaur toy in front of it.
At the time, a Google executive called RT-2 a major leap forward in how robots are built and programmed: "Because of this change, we had to rethink our entire research plan." Even more surprising, only a little over two months later, DeepMind's robot model has improved again, and this time the gain is twofold.

Robots are usually highly specialized: they do one thing well but generalize poorly. Typically, you have to train a separate model for each task, robot, and environment, and changing a single variable often means starting from scratch. But what if we could combine knowledge across robotics disciplines to train truly general-purpose robots?

This is what DeepMind has been working toward. They pooled data from 22 different robot types to create the Open X-Embodiment dataset, then used it to train a more capable RT-X (in two variants, RT-1-X and RT-2-X). They tested the RT-1-X model at five different research labs, and the new method outperformed the methods each lab had developed independently for its own robot: the average success rate rose by 50% across five commonly used robots. They also show that RT-2-X trained on this dataset doubles performance on real-world robotic skills, and that by learning from the new data, RT-2-X masters many new skills. Overall, a single model trained on data from multiple robot types performs significantly better across robots than a model trained on data from a single robot type.
It is worth noting that this research was not done by DeepMind alone, but in collaboration with 33 academic laboratories, with a commitment to developing the technology in an open and responsible manner. The Open X-Embodiment dataset and RT-1-X model checkpoints are now available to the broader research community. Jim Fan, senior AI scientist at Nvidia, said this could be the ImageNet moment for robotics.
Google researcher Karol Hausman echoed the sentiment: the ImageNet moment for robotics has finally arrived.
Openness played a key role here. Just as ImageNet advanced computer vision research, Open X-Embodiment advances robotics research. Building diverse datasets has always been key to training general-purpose models: such models can control many different types of robots, follow varied instructions, perform basic reasoning about complex tasks, and generalize effectively. However, collecting such a dataset is too resource-intensive for any single laboratory.
To this end, DeepMind collaborated with academic research laboratories at 33 institutions to build the Open X-Embodiment dataset. They collected data from 22 robot embodiments, spanning more than 1 million episodes that demonstrate more than 500 skills and 150,000 tasks. It is the most comprehensive robotics dataset of its kind to date.
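For readers who want to poke at the data, here is a minimal sketch of how one might load a slice of it, assuming the release is hosted as RLDS-format TensorFlow Datasets builders on a public bucket (as described on the project page). The bucket path, dataset name, and field names below are illustrative examples taken from the RT-1 "fractal" slice, not something defined in this article.

```python
import tensorflow_datasets as tfds

# Illustrative path and dataset name; actual locations are listed on the
# Open X-Embodiment project page. Field names vary per robot embodiment.
builder = tfds.builder_from_directory(
    builder_dir="gs://gresearch/robotics/fractal20220817_data/0.1.0")
ds = builder.as_dataset(split="train[:5]")  # a handful of episodes

for episode in ds:
    # Each RLDS episode holds a nested "steps" dataset of per-timestep records.
    for step in episode["steps"]:
        image = step["observation"]["image"]                        # camera frame
        instr = step["observation"]["natural_language_instruction"]  # task string
        action = step["action"]                                      # robot action
        print(image.shape, instr.numpy())
        break
    break
```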
## RT-1-X: Success rate increased by 50%
RT-X is built on two Robotics Transformer (RT) models.
Specifically, RT-1-X was trained from RT-1, a 35M-parameter network built on the Transformer architecture and designed for robot control (its design is shown in Figure 3 of the paper). RT-2-X was trained from RT-2, a family of large vision-language-action (VLA) models trained on Internet-scale vision and language data as well as robot control data.
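Neither model's code is reproduced here, but the toy PyTorch sketch below illustrates the general RT-1-style recipe described above: tokenize the camera image, condition on a language embedding, run a Transformer, and predict each action dimension as a discrete token. All sizes and layer choices are made up and much smaller than the real 35M-parameter model; this is not DeepMind's implementation.

```python
import torch
import torch.nn as nn

NUM_BINS = 256      # each action dimension discretized into 256 bins (illustrative)
ACTION_DIMS = 7     # e.g. arm pose + gripper; real action spaces vary per robot
D_MODEL = 256

class ToyRTPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in image tokenizer: one conv that turns a 224x224 frame into a 14x14 token grid.
        self.image_tokenizer = nn.Conv2d(3, D_MODEL, kernel_size=16, stride=16)
        # Assumes a precomputed 512-d sentence embedding for the instruction.
        self.text_proj = nn.Linear(512, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # One classification head per action dimension over the discrete bins.
        self.action_head = nn.Linear(D_MODEL, ACTION_DIMS * NUM_BINS)

    def forward(self, image, text_embedding):
        # image: (B, 3, 224, 224); text_embedding: (B, 512)
        tokens = self.image_tokenizer(image).flatten(2).transpose(1, 2)  # (B, 196, D)
        text_token = self.text_proj(text_embedding).unsqueeze(1)         # (B, 1, D)
        encoded = self.encoder(torch.cat([text_token, tokens], dim=1))
        pooled = encoded.mean(dim=1)
        logits = self.action_head(pooled)
        return logits.view(-1, ACTION_DIMS, NUM_BINS)  # per-dimension bin logits

policy = ToyRTPolicy()
logits = policy(torch.randn(2, 3, 224, 224), torch.randn(2, 512))
print(logits.shape)  # torch.Size([2, 7, 256])
```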
To evaluate RT-1-X, DeepMind compared it against models developed independently for specific tasks, such as opening doors, on each collaborating lab's robot. The results show that RT-1-X trained on the Open X-Embodiment dataset outperforms the original models by 50% on average.
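To make "50% higher on average" concrete, the snippet below works through the relative-improvement arithmetic with made-up per-lab success rates; the real per-lab numbers are in the paper, not this article.

```python
# Illustrative numbers only: five labs, baseline model vs. RT-1-X on each setup.
baseline = [0.40, 0.55, 0.30, 0.62, 0.48]
rt1x     = [0.66, 0.80, 0.45, 0.90, 0.72]

avg_base = sum(baseline) / len(baseline)   # 0.47
avg_rt1x = sum(rt1x) / len(rt1x)           # 0.71
relative_gain = (avg_rt1x - avg_base) / avg_base
print(f"baseline {avg_base:.2f}, RT-1-X {avg_rt1x:.2f}, +{relative_gain:.0%}")  # ~+50%
```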
The average success rate of RT-1-X is 50% higher than that of the original methods across robots from the different collaborating institutions.

## RT-2-X: Unlocking new skills without barriers
To study RT-X's ability to transfer knowledge, DeepMind ran further experiments involving objects and skills that were absent from the RT-2 dataset but present in another robot's dataset. The results showed that RT-2-X was three times more successful at these new skills than its predecessor RT-2. This shows that joint training with data from other platforms gives RT-2-X skills that were not present in its original dataset, allowing it to perform novel tasks. A series of results shows that RT-2-X achieves skills RT-2 could not, including better spatial understanding. For example, asking the robot to "move the apple near the cloth" versus "move the apple onto the cloth" leads to completely different trajectories: simply changing the preposition from "near" to "onto" changes the actions the robot takes.
RT-2-X shows that incorporating data from other robots into training can broaden the range of tasks a robot can perform, but only when a sufficiently high-capacity architecture is used.
## Research inspiration: robots need to learn from each other, and so do researchers
Robotics research is at an exciting early stage. This new work from DeepMind shows that scaling learning with more diverse data and better models may make it possible to develop more useful assistive robots. Collaborating and sharing resources with labs around the world is critical to advancing robotics research in an open and responsible way, and DeepMind hopes to lower barriers and accelerate research by opening up data sources and providing safe but limited models. The future of robotics depends on robots learning from each other, and above all on researchers learning from each other. This work demonstrates that a single model can generalize across environments: its performance improves significantly whether it runs on robots at Google DeepMind or on robots at universities around the world. Future research could explore how to combine these advances with RoboCat's self-improvement capability, allowing models to keep improving from their own experience. Another direction is to further explore how mixing different datasets affects cross-embodiment generalization, and how that generalization arises.
If you want to learn more about RT-X, see the paper published by DeepMind.
Paper: https://robotics-transformer-x.github.io/paper.pdf
Project: https://robotics-transformer-x.github.io/
Reference link: https://www.deepmind.com/blog/scaling-up-learning-across-many-different-robot-types