A New 3D Perception Chain for Embodied Agents: TeleAI and Shanghai AI Lab Propose the Multi-View Fusion Embodied Model "SAM-E"

The AIxiv column is a section where this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies worldwide, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

When we pick up a mechanical watch, we see the dial and hands from the front, the crown and bracelet from the side, and, on opening the case back, the intricate gears and movement. Each perspective provides different information, and together they build an overall three-dimensional understanding of the object being handled.

If we want a robot to learn to perform complex real-world tasks, it must first understand the properties of the manipulating and manipulated objects and the corresponding three-dimensional workspace: object positions, shapes, occlusion relationships between objects, the relationship between objects and the environment, and so on.

Second, the robot needs to understand natural-language instructions and carry out long-horizon planning and efficient execution of future actions. Equipping robots with capabilities spanning environment perception through action prediction is challenging.

Recently, the team of Professor Li Xuelong at the China Telecom Artificial Intelligence Research Institute (TeleAI), together with the Shanghai Artificial Intelligence Laboratory, Tsinghua University, and other institutions, simulated the human cognitive process of "perception-memory-thinking-imagination" and proposed a general embodied manipulation algorithm driven by multi-view fusion, offering a feasible route for robots to learn complex manipulation. The paper has been accepted at the International Conference on Machine Learning (ICML 2024), laying a foundation for building general three-dimensional embodied policies. A video introduction to SAM-E is available.
In recent years, the image-understanding ability of visual foundation models has developed rapidly, yet understanding three-dimensional space still poses many challenges. Can large vision models help embodied agents understand three-dimensional manipulation scenes and complete various complex manipulation tasks in 3D space? Inspired by the cognitive process of "perception-memory-thinking-imagination", the paper proposes SAM-E, a new embodied foundation model built on the visual segmentation model Segment Anything (SAM).

First, SAM-E has a powerful promptable "perception" capability: it applies SAM's segmentation structure to language-instructed embodied tasks, attending to the objects to be manipulated in the scene by parsing the text instruction.

Next, a multi-view Transformer is designed to fuse and align depth features, image features, and instruction features, realizing object "memory" and manipulation "thinking" to understand the robotic arm's three-dimensional workspace.

Finally, a new action-sequence prediction network is proposed to model actions over multiple time steps and "imagine" them from the instruction, realizing end-to-end output from 3D scene perception to embodied action.
  • Paper title: SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation
  • Paper link: https://sam-embodied.github.io/static/SAM-E.pdf
  • Project address: https://sam-embodied.github.io/

From two-dimensional perception to three-dimensional perception

In the digital tide of our times, with the rapid development of artificial intelligence, we are entering a new era: the era of embodied intelligence. Giving an agent a body and the ability to interact directly with the real world has become a key direction of current research.

To achieve this goal, an agent must have strong three-dimensional perception so that it can accurately understand its surroundings.

Traditional two-dimensional perception methods fall short in complex three-dimensional space. How to let embodied agents learn accurate modeling of 3D space has become a key problem in urgent need of a solution.

Existing work restores and reconstructs 3D space from multiple viewpoints such as front, top, and side views, but the computation required is substantial, and generalization across different scenes is limited.

To address this, the present work explores a new approach: applying the strong generalization ability of large vision models to the three-dimensional perception of embodied agents.

SAM-E proposes using SAM, a general-purpose large vision model with strong generalization, for visual perception. Through efficient fine-tuning on embodied scenes, its generalizable and promptable feature extraction, instance segmentation, and complex-scene understanding transfer effectively to embodied settings.

To further improve the base model, the notion of an action-sequence network is introduced: beyond predicting a single action, it models the internal connections between consecutive actions, fully exploiting the temporal information between them and further improving the base model's understanding of and adaptation to embodied scenes.


Figure 1. SAM-E overall framework

SAM-E method

The SAM-E method rests on two core ideas:

  • Using SAM's prompt-driven structure, a powerful base model is built with excellent generalization under task language instructions; LoRA fine-tuning then adapts the model to specific tasks, further improving performance.
  • Sequential action modeling captures the temporal information in action sequences, better tracking the dynamic changes of a task so that the robot's policy and execution can be adjusted in time, keeping execution efficiency high.

Promptable perception and fine-tuning
The core of SAM-E lies in a network structure driven by task-instruction prompts: a powerful visual encoder paired with a lightweight decoder. In the embodied setting, the task "prompt" is given in natural language as a task-description instruction, and the visual encoder exercises its promptable perception to extract task-relevant features. The policy network acts as a decoder, outputting actions from the fused visual embedding and language instruction.

During training, SAM-E uses LoRA for efficient fine-tuning, greatly reducing the number of trainable parameters and letting the base vision model adapt quickly to specific tasks.
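As a minimal numpy sketch of the LoRA idea (not SAM-E's actual implementation; the layer, dimensions, and rank below are illustrative), a trainable low-rank update B·A is added beside a frozen pretrained weight, so only 2·r·d parameters are trained instead of d·d:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 2  # feature dim and low rank, r << d (illustrative values)

# Frozen pretrained weight (stand-in for one projection in the encoder).
W = rng.normal(size=(d, d))

# LoRA adapters: only A and B are trained. B starts at zero, so the
# adapted layer initially matches the frozen pretrained layer exactly.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))

def adapted_forward(x):
    # y = W x + B (A x): the low-rank term is the only trained part.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d)
assert np.allclose(adapted_forward(x), W @ x)  # no drift at init

print(d * d, 2 * r * d)  # trainable params: full 256 vs LoRA 64
```

Because W stays frozen, the pretrained perception ability is preserved while the small adapters absorb the embodied-task adaptation.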

Multi-perspective three-dimensional fusion

SAM-E introduces a multi-view Transformer network to fuse visual inputs from multiple viewpoints and build an in-depth understanding of 3D space. It operates in two stages: view-wise attention and cross-view attention.

First, intra-view attention is applied to each view's features separately; then the views are combined with the language description for cross-view attention, achieving multi-view information fusion and image-language alignment.
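The two-stage flow can be illustrated with a toy numpy sketch. The single-head attention, the token counts, and the simple concatenation of view and language tokens are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Plain scaled dot-product attention over the token axis.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

V, T, L, d = 3, 8, 4, 16  # views, tokens per view, language tokens, dim
views = rng.normal(size=(V, T, d))  # per-view visual features
lang = rng.normal(size=(L, d))      # language-instruction tokens

# Stage 1: view-wise attention -- each view attends only to itself.
stage1 = np.stack([attention(v, v, v) for v in views])

# Stage 2: cross-view attention -- all view tokens plus the language
# tokens attend jointly, fusing views and aligning image with language.
joint = np.concatenate([stage1.reshape(V * T, d), lang])
fused = attention(joint, joint, joint)

print(fused.shape)  # (28, 16): 3*8 visual tokens + 4 language tokens
```

Restricting stage 1 to within-view tokens keeps each view's geometry intact before stage 2 mixes information across views and modalities.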

Action sequence modeling

During robotic-arm execution, the position and rotation of the end effector usually change continuously and smoothly, so adjacent actions are closely connected and continuous. Based on this observation, a novel temporal-smoothness hypothesis is proposed, aiming to fully exploit the intrinsic correlation between adjacent actions for effective imitation learning over action sequences.

Specifically, the SAM-E framework captures patterns and relationships in action sequences through sequence modeling, providing implicit prior knowledge for action prediction and constraining the continuity of actions, which markedly improves the accuracy and consistency of predicted actions.

In practice, SAM-E lets a single prediction yield multiple subsequent actions for execution, greatly improving execution efficiency.
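A toy sketch of why sequence prediction cuts inference cost: with a horizon of H actions per prediction, an episode of N control steps needs only about N/H model calls instead of N. The policy below is a dummy stand-in, not SAM-E's network:

```python
calls = 0

def policy(obs, horizon):
    # Hypothetical stand-in for an action-sequence head: one inference
    # returns `horizon` future actions (here just dummy smooth values).
    global calls
    calls += 1
    return [obs + i for i in range(horizon)]

def run_episode(steps, horizon):
    # Execute each predicted chunk open-loop, then re-query the policy.
    global calls
    calls = 0
    obs, executed = 0, []
    while len(executed) < steps:
        for a in policy(obs, horizon):
            executed.append(a)
        obs = executed[-1]
    return calls

# 20 control steps: single-step prediction needs 20 inferences,
# a horizon-4 sequence head needs only 5.
print(run_episode(20, 1), run_episode(20, 4))  # 20 5
```

This matches the article's observation that a short task can sometimes finish within a single model inference when the predicted sequence covers it.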


Figure 4. Action sequence prediction network


Experiments

The experiments use RLBench, a challenging suite of robotic-arm tasks, for a comprehensive evaluation of 3D manipulation under multi-view observation. SAM-E significantly outperforms other methods in many respects.
In multi-task scenarios, SAM-E significantly improves task success rates.

When transferring to new tasks from only a few samples, SAM-E's strong generalization and efficient execution effectively improve performance on the new tasks.
Figure 6. Examples of 3D manipulation tasks

In addition, the action-sequence design markedly improves SAM-E's execution efficiency: during policy execution, predicting action sequences rather than single actions sharply reduces the number of model inferences. In tests, a task could even be completed with a single model inference.
SAM-E is equally effective in real robotic-arm control: using two third-person cameras to capture multi-view visual input, it achieves real-time inference on five real-world tasks.

Summary
This work pioneers a general embodied manipulation algorithm based on multi-view fusion, using a large visual segmentation model together with multi-view fusion to achieve three-dimensional physical-space perception for embodied agents.
Through parameter-efficient fine-tuning, the pretrained vision model is transferred to the target setting and can solve complex 3D robotic-arm manipulation tasks given natural-language instructions. Moreover, the model generalizes quickly to new tasks from a small number of expert demonstrations, showing superior training efficiency and action-execution efficiency.
More importantly, SAM-E realizes end-to-end mapping from data to action through the cognitive chain of "perception-memory-thinking-imagination". Its significance lies not only in its application to embodied intelligence, but also in what it suggests for improving agents' cognitive abilities.

By emulating human perception and decision-making, agents can better understand and adapt to complex environments, and thus play a greater role across a wider range of fields.

Team leader introduction:

Li Xuelong, CTO and Chief Scientist of China Telecom, and Director of the China Telecom Artificial Intelligence Research Institute (TeleAI). His research focuses on artificial intelligence, local security, image processing, and embodied intelligence.
