Robots are a technology with enormous potential, especially when paired with intelligent technology. Recently, large models with revolutionary applications have been proposed as possible intelligent brains for robots, helping them perceive and understand the world and make decisions and plans. A joint team led by Yonatan Bisk of CMU and Fei Xia of Google DeepMind released a review report on the application and development of foundation models in robotics.
Humans have long dreamed of developing robots that can adapt autonomously to different environments. However, realizing this dream is a long and challenging road.
In the past, robot perception systems usually relied on traditional deep learning methods, which require large amounts of labeled data to train supervised models; labeling large datasets through crowdsourcing is very costly.
In addition, classic supervised learning methods have limited generalization ability. Applying a trained model to a specific scenario or task usually requires carefully designed domain-adaptation techniques, which in turn demand further data collection and annotation. Likewise, traditional robot planning and control methods require accurate models of the dynamics of the environment, the agent itself, and other agents. These models are built for a specific environment or task and must be rebuilt when conditions change, so the transfer performance of classical methods is likewise limited.
In fact, for many use cases, building accurate models is either too expensive or outright impossible. Although motion planning and control methods based on deep (reinforcement) learning alleviate these problems, they still suffer from distribution shift and reduced generalization ability.
Although developing general-purpose robotic systems faces many challenges, natural language processing (NLP) and computer vision (CV) have recently made rapid progress, including large language models (LLMs) for NLP, diffusion models for high-fidelity image generation, and powerful vision models and vision-language models for zero-shot/few-shot CV tasks.
The so-called "foundation models" are large pre-trained models (LPTMs) with powerful vision and language capabilities. Recently, these models have also been applied in robotics and are expected to give robotic systems open-world perception, task planning, and even motion control capabilities. Besides applying existing vision and/or language foundation models to robotics, some research teams are developing foundation models specifically for robot tasks, such as action models for manipulation or motion planning models for navigation. These robot foundation models demonstrate strong generalization and can adapt to different tasks and even different embodiments.
Other researchers use vision/language foundation models directly for robot tasks, suggesting the possibility of integrating different robot modules into a single unified model.
Although vision and language foundation models show promise in robotics, and new robot foundation models are being developed, many challenges in robotics remain hard to solve.
From the perspective of practical deployment, models are often not reproducible, fail to generalize across robot embodiments (multi-embodiment generalization), or struggle to determine precisely which behaviors in an environment are feasible (or acceptable). In addition, most research uses Transformer-based architectures and focuses on semantic perception of objects and scenes, task-level planning, and control. Other parts of the robot system are less studied, such as foundation models for world dynamics or foundation models that can perform symbolic reasoning; these require cross-domain generalization capabilities.
Finally, we also need more large-scale real-world data and high-fidelity simulators that support diverse robotic tasks.
This review paper summarizes the foundation models used in robotics, with the goal of understanding how foundation models can help solve or alleviate the core challenges of the field.
Figure 1 shows the main components of this review report.
Figure 2 gives the overall structure of this review.
Preliminaries
To help readers better understand this review, the team first provides a section of background knowledge.
They begin with the basics of robotics and the current state of the art, focusing on methods used in robotics before the era of foundation models. Only a brief summary is given here; please refer to the original paper for details.
- The main components of a robot can be divided into three parts: perception, decision-making and planning, and action generation.
- The team divides robot perception into passive perception, active perception and state estimation.
- In the robot decision-making and planning section, the researchers introduced classic planning methods and learning-based planning methods.
- For robot action generation, there are likewise classic control methods and learning-based control methods (a minimal classic-control sketch follows this list).
- Next, the team introduces foundation models, focusing mainly on NLP and CV. The models involved include LLMs, VLMs, vision foundation models, and text-conditioned image generation models.
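As a concrete illustration of the classic control methods mentioned in the list above, here is a minimal proportional-integral-derivative (PID) controller sketch in Python. The 1-D point-mass plant and the gain values are hypothetical choices for illustration, not taken from the paper.

```python
# Minimal PID controller sketch: a classic, model-light control method.
# The gains (kp, ki, kd) and the 1-D point-mass plant are hypothetical.

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive a unit point mass toward a target position.
pid = PID(kp=2.0, ki=0.1, kd=0.5)
position, velocity, target, dt = 0.0, 0.0, 1.0, 0.01
for _ in range(1000):
    force = pid.update(target - position, dt)
    velocity += force * dt  # unit mass: acceleration == force
    position += velocity * dt
print(f"final position: {position:.3f}")
```

Learning-based control, by contrast, replaces hand-tuned gains like these with a policy trained from data.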
Challenges in Robotics
This section summarizes the five core challenges faced by different modules of a typical robotic system. Figure 3 shows the classification of these five challenges.
1. Generalization
Robot systems often struggle to accurately perceive and understand their environments. They also lack the ability to generalize what is learned on one task to another, which further limits their usefulness in the real world. In addition, because robot hardware varies, it is difficult to transfer models across robot embodiments. The generalization problem can be partially addressed by applying foundation models to robots.
The further question of generalizing across robot embodiments remains open.
2. Data Scarcity
To develop reliable robot models, large-scale high-quality data is crucial. Efforts are already underway to collect large-scale datasets from the real world, including autonomous driving data, robot manipulation trajectories, and more. But collecting robot data from human demonstrations is expensive, and the diversity of tasks and environments makes gathering sufficiently broad real-world data even more complicated. There are also safety concerns around collecting data in the real world.
To address these challenges, many research efforts have attempted to generate synthetic data in simulated environments. These simulators can provide highly realistic virtual worlds, allowing robots to learn and practice skills in near-real scenarios. However, simulated environments also have limitations, particularly in the diversity of objects, which makes learned skills difficult to transfer directly to real-world settings.
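One common technique for narrowing the sim-to-real gap just described is domain randomization: resampling simulator parameters each episode so a policy cannot overfit to one fixed virtual world. The sketch below is a minimal illustration; the parameter names, value ranges, and commented-out environment factory are assumptions, not from the paper.

```python
import random

# Domain-randomization sketch: draw new simulator parameters every episode.
# All parameter names and ranges are illustrative assumptions.

def sample_sim_params() -> dict:
    return {
        "friction": random.uniform(0.4, 1.2),      # surface friction coefficient
        "object_mass": random.uniform(0.05, 0.5),  # kg
        "light_level": random.uniform(0.3, 1.0),   # rendering brightness
        "camera_jitter": random.gauss(0.0, 0.01),  # extrinsics noise, meters
    }

def train(num_episodes: int) -> None:
    for episode in range(num_episodes):
        params = sample_sim_params()
        # env = make_sim_env(**params)     # hypothetical simulator factory
        # rollout_and_update_policy(env)   # hypothetical learning step
        print(f"episode {episode}: {params}")

train(num_episodes=3)
```

Randomizing appearance and physics in this way is one standard answer to the limited object diversity mentioned above, though it does not fully close the gap.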
Moreover, collecting data at scale in the real world is very difficult, and matching the Internet-scale image/text data used to train foundation models is harder still.
One promising approach is collaborative data collection, which pools data from different laboratory environments and robot types, as shown in Figure 4a. However, the team took an in-depth look at the Open X-Embodiment dataset and found that it still has some limitations in the availability of certain data types.
4. Task Specification
To achieve a general-purpose agent, a key challenge is understanding task specifications and grounding them in the robot's current understanding of the world. Typically, these specifications are provided by users who have only a limited understanding of the robot's cognitive and physical limitations. This raises many questions, including not only what best practices exist for providing task specifications, but also whether drafting them is natural and simple enough. Understanding and resolving ambiguities in a task specification, based on the robot's understanding of its own capabilities, is also challenging.
5. Uncertainty and Safety
To deploy robots in the real world, a key challenge is handling the uncertainty inherent in environments and task specifications. Depending on its source, uncertainty can be divided into epistemic uncertainty (caused by lack of knowledge) and aleatoric uncertainty (noise inherent in the environment).
The cost of uncertainty quantification (UQ) can be so high that it makes research and applications unsustainable, and it can also prevent downstream tasks from being solved optimally. Given the massively over-parameterized nature of foundation models, providing UQ methods that preserve the training scheme while changing the underlying architecture as little as possible is crucial for achieving scalability without sacrificing generalization. Designing robots that can provide reliable confidence estimates of their own behavior and, in turn, intelligently request clarifying feedback remains an unsolved challenge.
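To make this concrete, below is a minimal ensemble-based sketch of epistemic uncertainty estimation, one family of UQ methods that leaves the training scheme largely untouched. The toy linear models and 1-D data are illustrative assumptions.

```python
import numpy as np

# Ensemble-based UQ sketch: fit K models on bootstrap resamples and read
# epistemic uncertainty from their disagreement. Toy 1-D data for illustration.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * x + rng.normal(0.0, 0.2, size=x.shape)  # aleatoric noise: sigma = 0.2

def fit_member(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    # One ensemble member: least-squares line on a bootstrap resample.
    idx = rng.integers(0, len(xs), size=len(xs))
    X = np.hstack([xs[idx], np.ones_like(xs[idx])])
    w, *_ = np.linalg.lstsq(X, ys[idx], rcond=None)
    return w.ravel()  # [slope, intercept]

ensemble = [fit_member(x, y) for _ in range(10)]

x_query = 2.0  # far outside the training range [-1, 1]
preds = np.array([w[0] * x_query + w[1] for w in ensemble])
print(f"mean prediction: {preds.mean():.3f}")
print(f"epistemic std (ensemble disagreement): {preds.std():.3f}")
```

Disagreement among members grows for queries far from the training data; that is exactly the epistemic signal a robot could use to decide when to ask for clarifying feedback.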
Despite recent progress, ensuring that robots can learn from experience to fine-tune their policies and stay safe in new environments remains challenging.
Overview of Current Research Methods
This section summarizes current research on foundation models for robots. The team divides the foundation models used in robotics into two major categories: foundation models applied to robotics, and robot foundation models (RFMs).
Foundation models applied to robotics mainly means using existing vision and language foundation models in a zero-shot manner, without additional fine-tuning or training. Robot foundation models, by contrast, may be warm-started from vision-language pre-training and/or trained directly on robot datasets.
Figure 5 gives the details of this classification.
1. Foundation models applied to robotics
This section focuses on zero-shot applications of vision and language foundation models in robotics. This mainly includes deploying VLMs zero-shot for robot perception, and using the in-context learning capabilities of LLMs for task-level and motion-level planning and action generation. Figure 6 shows some representative works.
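The sketch below illustrates this in-context planning pattern: an LLM is prompted with a few example decompositions and asked to map a new instruction onto robot skills, with no additional training. The prompt format, skill names, and the `query_llm` placeholder are hypothetical, not a specific paper's API.

```python
# In-context task planning sketch: few-shot prompt an LLM to decompose an
# instruction into skill calls. Prompt format and skills are hypothetical.

FEW_SHOT_PROMPT = """\
Decompose the instruction into robot skills: goto(loc), pick(obj), place(obj, loc).

Instruction: put the apple in the bowl
Plan: goto(table); pick(apple); place(apple, bowl)

Instruction: bring the cup to the sink
Plan: goto(counter); pick(cup); goto(sink); place(cup, sink)

Instruction: {instruction}
Plan:"""

def query_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (API or local model). A canned
    # completion is returned here so the sketch runs standalone.
    return "goto(shelf); pick(book); place(book, desk)"

def plan(instruction: str) -> list[str]:
    completion = query_llm(FEW_SHOT_PROMPT.format(instruction=instruction))
    return [step.strip() for step in completion.split(";")]

print(plan("put the book on the desk"))
# ['goto(shelf)', 'pick(book)', 'place(book, desk)']
```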
2. Robot foundation models (RFMs)
As robotics datasets containing state-action pairs from real robots grow, the robot foundation model (RFM) category becomes increasingly likely to succeed. These models are characterized by training on robot data to solve robot tasks.
This section summarizes and discusses the different types of RFM. The first is an RFM that performs one type of task within a single robot module, also called a single-purpose robot foundation model: for example, an RFM that generates low-level actions to control the robot, or one that produces higher-level motion plans.
It then covers RFMs that can perform tasks across multiple robot modules: general-purpose models that handle perception, control, and even non-robot tasks (a hypothetical interface sketch follows).
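As a concrete sketch of what such a model's interface might look like, the hypothetical class below maps a camera image and a language instruction to a discretized low-level action, in the spirit of action-tokenizing models like RT-1. The class, token scheme, and action layout are illustrative assumptions, not a published API.

```python
import numpy as np

# Hypothetical RFM interface: (image, instruction) -> low-level action.
# A random stub stands in for the transformer over image and text tokens.

class RobotFoundationModel:
    def __init__(self, num_bins: int = 256):
        self.num_bins = num_bins  # each action dimension is discretized

    def __call__(self, image: np.ndarray, instruction: str) -> dict:
        # Stand-in for model inference: one token per action dimension.
        tokens = np.random.randint(0, self.num_bins, size=7)
        # Detokenize into continuous commands: 6-DoF arm delta + gripper.
        arm_delta = tokens[:6] / (self.num_bins - 1) * 2.0 - 1.0  # map to [-1, 1]
        gripper_closed = bool(tokens[6] > self.num_bins // 2)
        return {"arm_delta": arm_delta, "gripper_closed": gripper_closed}

rfm = RobotFoundationModel()
frame = np.zeros((224, 224, 3), dtype=np.uint8)  # camera observation
print(rfm(frame, "pick up the sponge"))
```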
3. How can foundation models help solve robotics challenges?
The five major challenges facing robotics are listed above. This section describes how foundation models can help address them.
All foundation models that process visual information (such as VFMs, VLMs, and VGMs) can be used in the robot's perception module. LLMs are more versatile and can be used for planning and control. Robot foundation models (RFMs) are typically used in the planning and action-generation modules. Table 1 summarizes the foundation models used to address the different robotics challenges.
As the table shows, all foundation models are good at generalization across the tasks of the various robot modules. LLMs are particularly good at task specification. RFMs, in turn, are good at handling the dynamics-model challenge, since most RFMs are model-free approaches. For robot perception, the generalization and model challenges are coupled: if the perception model already generalizes well, there is no need to gather more data for domain adaptation or further fine-tuning.
In addition, research on the safety challenge is scarce, and this will be an important future research direction.
Overview of Current Experiments and Evaluations
This section summarizes the current research results on datasets, benchmarks, and experiments.
1. Datasets and Benchmarks
Relying solely on knowledge learned from language and vision datasets has limitations. As some research shows, concepts such as friction and weight cannot be easily learned through these modalities alone.
Therefore, to help robotic agents better understand the world, the research community is not only adapting foundation models from the language and vision domains, but also building large and diverse multimodal robot datasets for training and fine-tuning these models.
Currently these efforts fall into two major directions: collecting data in the real world, and collecting data in simulation and then transferring it to the real world. Each direction has its pros and cons. Datasets collected from the real world include RoboNet, Bridge Dataset V1, Bridge-V2, Language-Table, RT-1, and others. Commonly used simulators include Habitat, AI2-THOR, MuJoCo, AirSim, the Arrival Autonomous Racing Simulator, Isaac Gym, and others.
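For readers unfamiliar with these simulators, most of them expose a Gym-style reset/step interface. The rollout loop below uses `gymnasium` with a MuJoCo task as one assumed concrete example; the environment ID and the random policy are placeholders (requires `pip install "gymnasium[mujoco]"`).

```python
import gymnasium as gym

# Standard Gym-style rollout loop shared by many robot simulators.

env = gym.make("Reacher-v4")  # illustrative MuJoCo environment ID
obs, info = env.reset(seed=0)
total_reward = 0.0

for step in range(200):
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print(f"total reward over 200 random steps: {total_reward:.2f}")
```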
2. Evaluation and analysis of current methods
Another major contribution of the team is a meta-analysis of the experiments in the papers covered by this review, which helped the authors clarify the following questions:
- What tasks are people working to solve?
- What datasets or simulators were used to train the models? What robot platforms were used for testing?
- Which foundation models does the research community use, and how effective are they at solving the tasks?
- Which foundation models are most commonly used across these methods?
Tables 2-7 and Figure 11 give the analysis results.
The team identified some key trends:
- The research community's attention is unbalanced, skewed toward robot manipulation tasks
- Generalization ability and robustness need to be improved
- Exploration of low-level actions is very limited
- Control frequencies are too low for deployment on real robots
- There is a lack of unified testing benchmarks for robots
Discussion and Future Directions
The team summarized some open challenges and research directions worth discussing:
- Setting a standard grounding for robot embodiment
- Safety and uncertainty
- Are the end-to-end approach and the modular approach incompatible?
- Adaptability to changes in physical embodiment
- World model approach or model-agnostic approach?
- New robotic platforms and multi-sensory information
- Continual learning
- Standardization and reproducibility