
Reinforcement learning guru Sergey Levine’s new work: Three large models teach robots to recognize their way

WBOY | 2023-04-12 23:55:09

A robot equipped with large pre-trained models has learned to follow language instructions to reach a destination, without ever looking at a map. The result comes from new work by reinforcement learning expert Sergey Levine.

Given only a destination, how hard is it to reach it smoothly without a navigation trajectory?


This task is challenging even for humans with a poor sense of direction. But in a recent study, the researchers "taught" a robot to do it using only three pre-trained models.

A core challenge of robot learning is enabling robots to perform a variety of tasks from high-level human instructions. This requires robots that can understand those instructions and that command a large repertoire of actions to carry them out in the real world.

For instruction-following in navigation, prior work has mainly focused on learning from trajectories annotated with textual instructions. This enables understanding of textual instructions, but the cost of data annotation has hindered wide adoption of the technique. Separately, recent work has shown that self-supervised training of goal-conditioned policies can learn robust navigation: these methods train vision-based controllers on large, unlabeled datasets with post hoc relabeling. They are scalable, general, and robust, but usually require cumbersome location- or image-based mechanisms for specifying the goal.

In a recent paper, researchers from UC Berkeley, Google, and other institutions combine the strengths of both approaches: a self-supervised robot navigation system that trains on navigation data without any user annotations, while leveraging pre-trained models to execute natural-language instructions. The researchers use these models to build an "interface" that communicates tasks to the robot. The system exploits the generalization abilities of pre-trained language and vision-language models, allowing the robot to accept complex high-level instructions.


  • Paper link: https://arxiv.org/pdf/2207.04429.pdf
  • Code link: https://github.com/blazejosinski/lm_nav

The researchers observed that off-the-shelf pre-trained models trained on large visual and language corpora (which are widely available and exhibit zero-shot generalization) can be used to create interfaces for instruction following. To achieve this, they combine the strengths of robot-agnostic pre-trained vision and language models with a pre-trained navigation model. Specifically, a visual navigation model (VNM: ViNG) turns the robot's visual observations into a topological "mental map" of the environment. Given a free-form text instruction, a pre-trained large language model (LLM: GPT-3) decodes the instruction into a sequence of textual landmarks (feature points). A vision-language model (VLM: CLIP) then grounds these textual landmarks in the topological map by inferring a joint likelihood over landmarks and graph nodes. A novel search algorithm maximizes this probabilistic objective to find an instruction path for the robot, which the VNM then executes.

The main contribution is Large Model Navigation (LM-Nav), a concrete instruction-following system. It combines three large, independently pre-trained models: a self-supervised robot control model that uses visual observations and physical actions (VNM); a vision-language model that grounds images in text but has no embodiment (VLM); and a large language model that parses and translates text but has no visual grounding or embodiment (LLM). Together they enable long-horizon instruction following in complex real-world environments. For the first time, the researchers instantiate the idea of combining pre-trained vision and language models with a goal-conditioned controller to derive actionable instruction paths in the target environment without any fine-tuning.

Notably, all three models are trained on large-scale datasets with self-supervised objectives and are used out of the box without fine-tuning: training LM-Nav requires no human annotation of robot navigation data.
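To make the division of labor concrete, the following is a minimal sketch of how the three models could be chained. The function names and interfaces are illustrative assumptions, not the released LM-Nav API; trivial stand-ins are supplied so the control flow runs end to end.

```python
def lm_nav(instruction, observations, llm, vlm, vnm, search):
    # 1. VNM: build a topological graph over past observations.
    graph = vnm["build_graph"](observations)
    # 2. LLM: decode the instruction into an ordered landmark list.
    landmarks = llm(instruction)
    # 3. VLM: score each landmark against each graph node.
    scores = vlm(landmarks, observations)
    # 4. Search: find the path that best matches the landmarks.
    path = search(graph, scores)
    # 5. VNM: the goal-conditioned policy drives along the path.
    vnm["execute"](path)
    return path

# Trivial stand-ins, just to exercise the plumbing.
executed = []
vnm = {"build_graph": lambda obs: {i: [] for i in range(len(obs))},
       "execute": executed.extend}
llm = lambda text: text.split(", ")
vlm = lambda lms, obs: [[1.0] * len(obs) for _ in lms]
search = lambda graph, scores: list(range(len(scores)))

path = lm_nav("a stop sign, a blue truck", ["img0", "img1", "img2"],
              llm, vlm, vnm, search)
```

Each stand-in is replaced by a real pre-trained model in the actual system; the point is only that the models interact through narrow, text- and graph-shaped interfaces.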

Experiments show that LM-Nav successfully follows natural-language instructions in new environments, and that fine-grained commands can be used to disambiguate paths during complex suburban navigation covering up to 100 meters.


LM-Nav model overview

So, how do researchers use pre-trained image and language models to provide text interfaces for visual navigation models?


1. Given a set of observations in the target environment, the goal-conditioned distance function (part of the visual navigation model, VNM) infers the connectivity between them and builds a topological graph of the environment.
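A minimal sketch of this graph-building step: pairs of observations whose predicted traversal distance falls below a threshold become edges. The distance function here is a toy stand-in for the VNM's learned distance head, and the threshold value is an assumption.

```python
import itertools

def build_topological_graph(observations, distance_fn, max_edge_dist=3.0):
    # Connect ordered pairs whose predicted traversal distance
    # (in the real system, the VNM's goal-conditioned distance
    # function) is below a threshold.
    graph = {i: {} for i in range(len(observations))}
    for i, j in itertools.permutations(range(len(observations)), 2):
        d = distance_fn(observations[i], observations[j])
        if d <= max_edge_dist:
            graph[i][j] = d
    return graph

# Toy stand-in: observations are 1-D positions and the "predicted
# distance" is simply their separation.
obs = [0.0, 1.0, 2.5, 10.0]
graph = build_topological_graph(obs, lambda a, b: abs(a - b))
```

With this toy distance, the far-away observation at 10.0 ends up as an isolated node, mirroring how physically distant places stay unconnected in the topological map.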


2. The large language model (LLM) parses the natural-language instruction into a sequence of landmarks, which serve as intermediate sub-goals for navigation.
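Since an actual GPT-3 call cannot be shown here, the sketch below assumes a prompt format and a completion shape (one numbered landmark per line); both are illustrative guesses, with a mock string standing in for the API response.

```python
PROMPT = """List, in order, the landmarks the robot should visit.

Instruction: {instruction}
Landmarks:"""

def parse_landmarks(completion):
    # Assumes the LLM answers with one numbered landmark per line.
    landmarks = []
    for line in completion.strip().splitlines():
        text = line.split(".", 1)[-1].strip()
        if text:
            landmarks.append(text)
    return landmarks

# Mock completion standing in for an actual GPT-3 call.
mock = "1. a stop sign\n2. a white building\n3. a fire hydrant"
landmarks = parse_landmarks(mock)
```

The resulting ordered list is what the downstream grounding and search stages consume.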


3. The vision-language model (VLM) grounds the visual observations in the landmark phrases: it infers a joint probability distribution over the landmark descriptions and the images that form the nodes of the graph.
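CLIP-style grounding scores a node image against each landmark phrase by cosine similarity in a shared embedding space, normalized with a softmax. The sketch below uses hand-picked 2-D vectors as stand-ins for real CLIP embeddings; only the scoring arithmetic is faithful.

```python
import math

def landmark_probs(image_emb, text_embs):
    # Softmax over cosine similarities: a stand-in for CLIP's
    # probability of each landmark phrase given one node image.
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    sims = [cosine(image_emb, t) for t in text_embs]
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 2-D embeddings: the node image is closest to the first phrase.
img = [1.0, 0.0]
phrases = [[0.9, 0.1], [0.0, 1.0]]
probs = landmark_probs(img, phrases)
```

Running this over every (landmark, node) pair yields the probability table that the search stage optimizes over.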


4. Using the VLM's probability distribution and the graph connectivity inferred by the VNM, a novel search algorithm retrieves an optimal instruction path in the environment, one that (i) satisfies the original instruction and (ii) is the shortest path in the graph that achieves the goal.
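One way to realize such a search is a Viterbi-style dynamic program over (landmark, node) pairs that trades landmark likelihood against travel cost. This is a minimal sketch, not the paper's exact algorithm: it assumes precomputed all-pairs shortest-path distances `dist`, per-node landmark log-likelihoods from the VLM, and an illustrative trade-off weight `alpha`.

```python
import math

def plan_instruction_path(dist, log_probs, alpha=0.05):
    # score[k][v]: best value of grounding landmarks 0..k with
    # landmark k placed at node v (likelihood minus travel cost).
    n = len(dist)
    k_max = len(log_probs)
    score = [[-math.inf] * n for _ in range(k_max)]
    back = [[None] * n for _ in range(k_max)]
    score[0] = list(log_probs[0])
    for k in range(1, k_max):
        for v in range(n):
            for u in range(n):
                cand = score[k - 1][u] - alpha * dist[u][v] + log_probs[k][v]
                if cand > score[k][v]:
                    score[k][v], back[k][v] = cand, u
    # Backtrack from the best node for the final landmark.
    v = max(range(n), key=lambda x: score[-1][x])
    path = [v]
    for k in range(k_max - 1, 0, -1):
        v = back[k][v]
        path.append(v)
    return path[::-1]

# Toy problem: 3 nodes on a line, 2 landmarks.
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
log_probs = [[0.0, -2.0, -2.0],   # landmark 0 almost surely at node 0
             [-2.0, -2.0, 0.0]]   # landmark 1 almost surely at node 2
path = plan_instruction_path(dist, log_probs)
```

Raising `alpha` makes the planner increasingly willing to sacrifice landmark likelihood for shorter travel, which is exactly the tension the probabilistic objective resolves.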


5. The instruction path is then executed by the goal-conditioned policy, which is the other part of the VNM.
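The execution loop can be sketched as visiting each waypoint in turn with a low-level goal-conditioned controller. The 1-D "robot" below is a toy stand-in for the VNM policy, which in reality consumes images and emits velocity commands.

```python
def execute_path(waypoints, step_toward, position,
                 tolerance=0.1, max_steps=100):
    # Visit each waypoint in turn with a low-level goal-conditioned
    # controller (a stand-in for the VNM's policy).
    for goal in waypoints:
        for _ in range(max_steps):
            if abs(position() - goal) <= tolerance:
                break
            step_toward(goal)
    return position()

# Toy 1-D robot that moves 0.5 units toward the goal per step.
state = {"x": 0.0}
position = lambda: state["x"]
def step_toward(goal):
    state["x"] += 0.5 if goal > state["x"] else -0.5

final = execute_path([2.0, 5.0], step_toward, position)
```

The `max_steps` cap is a safety assumption so a waypoint the controller cannot reach does not stall the whole traversal.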


Experimental results

Qualitative evaluation

Figure 4 shows examples of paths taken by the robot. (Note that the robot has access to neither the overhead image nor the spatial locations of the landmarks; these are shown only for visualization.)


In Figure 4(a), LM-Nav successfully locates simple landmarks from its prior traversal and finds a short path to the goal. Although the environment contains several parking-lot landmarks, the objective function in Equation 3 lets the robot select the contextually correct one, minimizing overall travel distance.

Figure 4(b) highlights LM-Nav's ability to parse routes specified with multiple landmarks: even though going directly to the last landmark would be the shortest route if the instruction were ignored, the robot still finds a path that visits all landmarks in the correct order.

Using instructions to disambiguate. Since LM-Nav's goal is to follow instructions, not merely reach the final destination, different instructions can produce different traversals. Figure 5 shows an example where modifying the instruction disambiguates among multiple paths to the goal. For the shorter prompt (blue), LM-Nav prefers the more direct path; when a more fine-grained route is specified (magenta), LM-Nav takes an alternative path through a different set of landmarks.


Missing landmarks. Although LM-Nav can effectively parse landmarks from instructions, locate them on the graph, and find a path to the goal, this relies on the assumptions that the landmarks (i) exist in the real environment and (ii) can be recognized by the VLM. Figure 4(c) shows a case where the executed path fails to visit one of the landmarks, a fire hydrant, and goes around the top of the building instead of the bottom. The failure occurred because the VLM could not detect the fire hydrant in the robot's observations.

In independently evaluating the VLM's efficacy at retrieving landmarks, the researchers found that although CLIP is the best off-the-shelf model for this kind of task, it fails to retrieve a small number of "hard" landmarks, including fire hydrants and cement mixers. In many real-world cases, though, the robot can still find a path that visits the remaining landmarks.

Quantitative Evaluation

Table 1 summarizes the system's quantitative performance on 20 instructions. In 85% of the experiments, LM-Nav consistently followed instructions without collisions or disengagements (an average of one intervention per 6.4 km of travel). Compared with a baseline lacking the navigation model, LM-Nav consistently executed more efficient, collision-free paths to the goal. In all unsuccessful experiments, the failure was attributable to the planning phase: the search algorithm could not locate certain "hard" landmarks in the graph, leaving the instruction only partially executed. Investigating these failure modes showed that the most critical component of the system is the VLM's ability to detect unfamiliar landmarks, such as fire hydrants, and to handle scenes under challenging lighting conditions, such as underexposed images.

