
A new chain of three-dimensional perception for embodied agents: TeleAI & Shanghai AI Lab propose the multi-view fusion embodied model "SAM-E"

王林 · Original · 2024-06-05

When we pick up a mechanical watch, we see the dial and hands from the front, the crown and strap from the side, and, opening the case back, the intricate gears and movement inside. Each viewpoint provides different information, and together they build an overall three-dimensional understanding of the object being handled.

If we want robots to learn to perform complex real-world tasks, they must first understand the properties of the objects they manipulate and the corresponding three-dimensional workspace, including object positions, shapes, occlusion relationships among objects, and the relationship between objects and the environment.

Second, the robot must understand natural language instructions, plan over long horizons, and execute future actions efficiently. Equipping robots with this full chain of capabilities, from environment perception to action prediction, is challenging.

Recently, the team of Professor Li Xuelong at the China Telecom Artificial Intelligence Research Institute (TeleAI), together with the Shanghai Artificial Intelligence Laboratory, Tsinghua University, and other institutions, modeled the human cognitive process of "perception-memory-thinking-imagination" and proposed a general embodied manipulation algorithm driven by multi-view fusion, providing a feasible route for robots to learn complex manipulation. The paper has been accepted at the International Conference on Machine Learning (ICML 2024), laying a foundation for building general three-dimensional embodied policies.
In recent years, visual foundation models have made rapid progress in image understanding, yet understanding three-dimensional space remains challenging. Can large vision models help embodied agents understand three-dimensional manipulation scenes and complete manipulation tasks in 3D space? Inspired by the cognitive process of "perception-memory-thinking-imagination", the paper proposes SAM-E, a new embodied foundation model built on the visual segmentation model Segment Anything (SAM).

First, SAM-E has a powerful promptable "perception" capability: SAM's prompt-conditioned segmentation structure is applied to language-instructed embodied tasks, so that the model attends to the objects to be manipulated in the scene by parsing the text instruction.

Next, a multi-view Transformer is designed to fuse and align depth features, image features, and instruction features, realizing object "memory" and manipulation "thinking" to understand the three-dimensional workspace of the robot arm.

Finally, a new action-sequence prediction network is proposed to model actions over multiple time steps and "imagine" the instructed motion, achieving end-to-end output from three-dimensional scene perception to embodied action.
  • Paper title: SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation
  • Paper link: https://sam-embodied.github.io/static/SAM-E.pdf
  • Project address: https://sam-embodied.github.io/

From two-dimensional perception to three-dimensional perception

In the tide of the digital era, with the rapid development of artificial intelligence, we are gradually entering a new stage: the era of embodied intelligence. Giving an agent a body and the ability to interact directly with the real world has become one of the key directions of current research.

To achieve this goal, an agent must have strong three-dimensional perception so that it can accurately understand its surroundings.

Traditional two-dimensional perception methods fall short in complex three-dimensional space. How to let embodied agents learn an accurate model of three-dimensional space has become a key problem in urgent need of a solution.

Existing work restores and reconstructs three-dimensional space from multiple viewpoints such as front, top, and side views, but the required computation is relatively heavy and generalization across scenes is limited.

To address this, the present work explores a new approach: applying the strong generalization ability of large vision models to the three-dimensional perception of embodied agents.

SAM-E proposes to use SAM, a general-purpose large vision model with strong generalization, for visual perception. Through efficient fine-tuning on embodied scenes, SAM's generalizable, promptable feature extraction, instance segmentation, and complex-scene understanding transfer effectively to the embodied setting.
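As a concrete (if simplified) picture of this recipe, the sketch below loads SAM's pretrained ViT image encoder through Meta's segment-anything package and extracts a dense feature map from a single camera view. The checkpoint filename and the random stand-in image are placeholder assumptions; this illustrates the general idea of reusing SAM features, not the authors' released code.

```python
# pip install segment-anything  (Meta's reference SAM implementation)
import torch
from segment_anything import sam_model_registry

# Placeholder checkpoint path; the ViT-B weights come from the official SAM release.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
encoder = sam.image_encoder  # the ViT backbone; SAM's promptable decoder is a separate module

# SAM's encoder expects 1024x1024 RGB input; a random tensor stands in for a
# preprocessed camera view here.
image = torch.randn(1, 3, 1024, 1024)

with torch.no_grad():
    features = encoder(image)  # dense feature map of shape (1, 256, 64, 64)

print(features.shape)
```

Because the package keeps the image encoder, prompt encoder, and mask decoder as separate modules, the encoder's dense features can be fed to a downstream policy head of one's own design.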

To further adapt the SAM backbone, the notion of an action-sequence network is introduced: rather than predicting a single action, it captures the internal connections between consecutive actions, fully exploiting the temporal information in action sequences and thereby improving the base model's understanding of, and adaptation to, embodied scenes.

Figure 1. SAM-E overall framework

SAM-E method

The SAM-E method rests on two core ideas:

  • Built on SAM's prompt-driven structure, a powerful base model provides strong generalization under task language instructions; LoRA fine-tuning then adapts it to specific tasks, further improving performance.
  • Sequential action modeling captures the temporal information in action sequences and better tracks the dynamic changes of a task, so the robot's policy and execution can be adjusted in time and execution efficiency stays high.

Promptable perception and fine-tuning

The core of SAM-E is a network structure driven by task-instruction prompts, consisting of a powerful visual encoder and a lightweight decoder. In the embodied setting, the task "prompt" is given in natural language: serving as the task description, it lets the visual encoder exercise its promptable perception to extract task-relevant features. The policy network acts as the decoder, outputting actions from the fused visual embedding and the language instruction.

In the training phase, SAM-E uses LoRA for efficient fine-tuning, which greatly reduces the number of trainable parameters and lets the base vision model adapt quickly to specific tasks.
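As a minimal sketch of what LoRA fine-tuning means here, the code below hand-rolls a low-rank adapter around a frozen linear layer of the kind found in a ViT attention block. The rank, scaling factor, and choice of which layers to wrap are illustrative assumptions, not the paper's reported settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # the low-rank update starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Hypothetical example: wrap the qkv projection of one ViT attention block.
qkv = LoRALinear(nn.Linear(768, 768 * 3), r=8, alpha=16)
x = torch.randn(2, 196, 768)                 # (batch, tokens, dim)
print(qkv(x).shape)                          # torch.Size([2, 196, 2304])
# Only lora_a and lora_b receive gradients, so very few parameters are trained.
```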

Multi-view three-dimensional fusion

SAM-E introduces a multi-view Transformer network to fuse visual inputs from multiple viewpoints and build a deep understanding of three-dimensional space. It works in two stages: view-wise attention and cross-view attention.

First, intra-view attention is applied to each view's features; then the views are combined with the language description for cross-view attention, achieving multi-view information fusion and image-language alignment.
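The following PyTorch sketch illustrates this two-stage pattern: self-attention within each view, then attention across the concatenated view tokens and language tokens. Dimensions, the number of views, and the single-layer depth are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    """Stage 1: view-wise self-attention. Stage 2: cross-view attention with language tokens."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.view_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, views: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # views: (B, V, N, D) feature tokens per view; lang: (B, L, D) instruction tokens
        B, V, N, D = views.shape
        per_view = views.reshape(B * V, N, D)
        per_view, _ = self.view_attn(per_view, per_view, per_view)   # within each view
        tokens = per_view.reshape(B, V * N, D)
        tokens = torch.cat([tokens, lang], dim=1)                    # join views and language
        fused, _ = self.cross_attn(tokens, tokens, tokens)           # across views + language
        return fused

fusion = MultiViewFusion()
views = torch.randn(2, 3, 64, 256)   # e.g. 3 camera views, 64 tokens each
lang = torch.randn(2, 16, 256)       # encoded language instruction
print(fusion(views, lang).shape)     # torch.Size([2, 208, 256])
```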

Action sequence modeling

During robot-arm execution, the position and rotation of the end effector usually change continuously and smoothly, so adjacent actions are closely connected. Based on this observation, a novel temporal-smoothness hypothesis is proposed, aiming to fully exploit the intrinsic correlation between adjacent actions for effective imitation learning over action sequences.

Specifically, the SAM-E framework captures the patterns and relations within action sequences through sequence modeling, providing implicit prior knowledge for action prediction and constraining the continuity of actions, which significantly improves the accuracy and consistency of predicted actions.

In practice, SAM-E allows several subsequent steps to be executed from a single action prediction, greatly improving execution efficiency.
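A minimal sketch of this idea, under assumed interfaces: a head predicts a chunk of H future end-effector actions from the fused features, and the controller replays the whole chunk before the model is queried again. The horizon, the action layout, and the `env.step` call are placeholders, not the paper's exact design.

```python
import torch
import torch.nn as nn

HORIZON = 8   # assumed number of future steps predicted per inference
ACT_DIM = 7   # placeholder layout: 3 position + 3 rotation + 1 gripper

class ActionSequenceHead(nn.Module):
    """Predict a whole chunk of future actions from pooled fused features."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, HORIZON * ACT_DIM),
        )

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        pooled = fused.mean(dim=1)                       # (B, D), pool over tokens
        return self.mlp(pooled).view(-1, HORIZON, ACT_DIM)

head = ActionSequenceHead()
fused = torch.randn(1, 208, 256)      # e.g. the output of the fusion module above
actions = head(fused)[0]              # (HORIZON, ACT_DIM) chunk from one inference

# One model call, several control steps: replay the chunk open-loop.
# `env.step` stands in for whatever controller interface is in use.
# for a in actions:
#     env.step(a)
```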

Figure 4. Action sequence prediction network

Experiments

The experiments use RLBench, a challenging suite of robot-arm tasks, to comprehensively evaluate 3D manipulation under multi-view observation. SAM-E significantly outperforms the baseline methods on multiple fronts.
  • In multi-task settings, SAM-E significantly improves the task success rate.
  • When transferring to new tasks from a small number of samples, SAM-E's strong generalization and efficient execution markedly improve performance on the new tasks.
Figure 6. Examples of 3D manipulation tasks

In addition, action-sequence prediction significantly improves SAM-E's execution efficiency: in the policy-execution phase, executing an action sequence greatly reduces the number of model inferences compared with predicting one action at a time. In testing, some tasks could even be completed with a single model inference.
SAM-E is equally effective in real robot-arm control: using two third-person cameras to capture multi-view visual input, it runs with real-time inference on five real-world tasks.
Summary
This work pioneers a general embodied manipulation algorithm based on multi-view fusion, using a large visual segmentation model together with multi-view fusion to give embodied agents perception of three-dimensional physical space.
Through efficient parameter fine-tuning, the pretrained vision model is transferred to specific scenes, where it can solve complex 3D robot-arm manipulation tasks specified by natural-language instructions. Moreover, the model generalizes quickly to new tasks from a small number of expert demonstrations, showing superior training efficiency and action-execution efficiency.
More importantly, SAM-E follows the cognitive chain of "perception-memory-thinking-imagination" to realize end-to-end mapping from data to action. Its significance lies not only in its application to embodied intelligence, but also in the inspiration it offers for improving the cognitive abilities of agents.

By emulating human perception and decision-making, agents can better understand and adapt to complex environments, and thus play a greater role in a wider range of fields.

About the team leader:

Li Xuelong is the CTO and Chief Scientist of China Telecom and the Director of the China Telecom Artificial Intelligence Research Institute (TeleAI). His research focuses on artificial intelligence, local security, image processing, and embodied intelligence.

