Chinese teams at UCSD, MIT, and other institutions teach a robot dog to perceive the 3D world! With an M1 chip, it can climb stairs and cross obstacles.
Recently, researchers from UCSD, MIT, and IAIFI used a new neural volumetric memory (NVM) architecture to teach a robot dog to perceive the three-dimensional world.
Using this technology, the robot dog can climb stairs, cross gaps, clamber over obstacles, and more with a single neural network - completely autonomously, with no remote control needed.
I wonder if you’ve noticed the white box on the dog’s back?
It houses an Apple M1 chip, which runs the robot dog's visual processing. Notably, the team salvaged the chip from a Mac.
As you can see, this robot dog climbs over the branches in front of it with (almost) no effort at all.
As is well known, it is very difficult for robot dogs and other legged robots to traverse uneven terrain.
The more complex the terrain, the more obstacles lie hidden from view.
To cope with this "partially observable environment", current state-of-the-art visuomotor approaches simply stack consecutive frames along the image channels, as sketched below.
However, this naive treatment lags far behind modern computer vision techniques, which can explicitly model optical flow and 3D geometry.
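For reference, frame-stacking amounts to nothing more than concatenating recent images along the channel axis before feeding them to the policy. Here is a minimal sketch of that baseline; the `FrameStack` helper is hypothetical, not the authors' code:

```python
import numpy as np
from collections import deque

# Hypothetical frame-stacking baseline: consecutive egocentric images
# are simply concatenated along the channel axis. The stack carries no
# notion of where the camera was when each frame was captured.
class FrameStack:
    def __init__(self, num_frames: int):
        self.frames = deque(maxlen=num_frames)

    def push(self, frame: np.ndarray) -> np.ndarray:
        """frame: (H, W, C) image; returns (H, W, C * num_frames)."""
        if not self.frames:
            # On the first call, fill the buffer with copies of the frame.
            self.frames.extend([frame] * self.frames.maxlen)
        self.frames.append(frame)
        return np.concatenate(list(self.frames), axis=-1)
```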
Inspired by this, the team proposed the neural volumetric memory (NVM) architecture, which explicitly accounts for the SE(3) equivariance of the three-dimensional world.
Project address: https://rchalyang.github.io/NVM/
Unlike previous methods, NVM is volumetric: it aggregates feature volumes from multiple camera views into the robot's egocentric frame, giving the robot a better understanding of its surroundings (see the sketch below).
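Conceptually, the aggregation step warps each past feature volume into the current egocentric frame using the camera's estimated SE(3) motion and then fuses the aligned volumes. The following is an illustration of that idea, not the authors' implementation; the `aggregate_volumes` function and its simple averaging fusion are assumptions:

```python
import torch

def aggregate_volumes(volumes, transforms):
    """Warp past feature volumes into the current egocentric frame and fuse.

    volumes:    list of (C, D, H, W) feature volumes from past frames.
    transforms: list of (4, 4) SE(3) matrices mapping each past frame
                into the current egocentric frame (assumed given here).
    """
    warped = []
    for vol, T in zip(volumes, transforms):
        # Build a 3D sampling grid for the current frame and pull
        # features from the past volume via trilinear interpolation.
        affine = T[:3].unsqueeze(0)                        # (1, 3, 4)
        grid = torch.nn.functional.affine_grid(
            affine, [1, *vol.shape], align_corners=False)  # (1, D, H, W, 3)
        warped.append(torch.nn.functional.grid_sample(
            vol.unsqueeze(0), grid, align_corners=False))
    # Fuse the aligned volumes (here: a simple average) into the memory.
    return torch.stack(warped).mean(dim=0).squeeze(0)
```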
Test results show that after training legged locomotion with neural volumetric memory (NVM), the robot performs significantly better on complex terrain than with previous techniques.
In addition, the results of the ablation experiments show that the content stored in the neural volumetric memory captures enough geometric information to reconstruct the 3D scene.
To validate the approach in the real world beyond simulation, the team conducted experiments in both indoor and outdoor scenarios.
When an obstacle suddenly appears in front of it, the robot dog simply chooses to step around it.
Walking on rocky ground poses no problem either, although it is somewhat more laborious than on flat ground.
Obstacles that are large relative to its own body take more effort, but with some struggle it can still get over them.
With the previous vision-based control technique, the robot dog's hind legs clearly misjudged the distance: it stepped into the ditch and flipped over. Failure.
After adopting the team's NVM, the robot dog crossed the ditch steadily. Success!
With the previous vision-based control technique, the robot dog misjudged its very first step, planted its head into the ground, and failed.
After adopting the team's NVM, the robot dog walked smoothly through the obstacle array.
## Neural volumetric memory for legged locomotion
Operating from an egocentric camera view is, at its core, a partially observable problem.
To make the control problem tractable, the robot must gather information from previous frames and correctly infer the occluded terrain beneath it.
During movement, the camera mounted directly on the robot chassis undergoes drastic and sudden position changes.
Consequently, when encoding a sequence of images, it is crucial that each individual frame be placed at the correct pose.
To this end, the team's neural volumetric memory (NVM) converts a stream of input visual observations into a 3D feature representation of the scene.
Although "behavioral cloning goal" Sufficient to generate a good policy, but targeting equivariance of translation and rotation, automatically provides an independent, self-supervised learning objective for neural volumetric memory.
Self-supervised learning: the team trained a separate decoder that takes a visual observation and the estimated transformation between two frames, and predicts the visual observation at the other frame.
As shown in the image above, the surrounding 3D scene can be assumed to remain static between frames. Since the camera faces forward, the feature volume from a previous frame can be aligned to the current frame and used to predict subsequent images.
The first image shows the robot moving through the environment; the second is the input visual observation; the third is the visual observation synthesized from the 3D feature volume and the estimated transformation.
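Put together, the self-supervised objective reads as: encode a frame, warp its feature volume by the estimated relative pose, decode, and compare against the actual later frame. A minimal sketch follows, where `encoder`, `pose_net`, and `decoder` are hypothetical modules standing in for the paper's networks:

```python
import torch
import torch.nn.functional as F

def view_synthesis_loss(encoder, pose_net, decoder, frame_t, frame_tk):
    """Self-supervised reconstruction loss between frames t and t+k."""
    vol_t = encoder(frame_t)          # (1, C, D, H, W) feature volume
    T = pose_net(frame_t, frame_tk)   # (1, 3, 4) estimated SE(3) motion
    # Warp the feature volume of frame t into the pose of frame t+k.
    grid = F.affine_grid(T, vol_t.shape, align_corners=False)
    vol_tk = F.grid_sample(vol_t, grid, align_corners=False)
    pred = decoder(vol_tk)            # synthesized view at t+k
    return F.mse_loss(pred, frame_tk) # compare against the real frame
```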
For the input visual observations, the team applied heavy data augmentation to the images to improve the robustness of the model; a representative pipeline is sketched below.
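The exact augmentations are not detailed here, but a representative image-augmentation pipeline in torchvision might look like the following (all choices are illustrative assumptions):

```python
import torchvision.transforms as T

# Illustrative augmentations for egocentric camera images:
# random crops/rescales and color perturbations encourage the policy
# to be robust to viewpoint jitter and lighting changes.
augment = T.Compose([
    T.RandomResizedCrop(64, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    T.RandomGrayscale(p=0.1),
])
```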
## About the authors
Ruihan Yan
Ruihan Yan is a second-year doctoral student at the University of California, San Diego. Before that, he received a bachelor's degree in software engineering from Nankai University in 2019. His research interests include reinforcement learning, machine learning, and robotics. Specifically, he wants to build intelligent agents that use information from different sources to make decisions.
Ge Yang
Ge Yang received his undergraduate degrees in physics and mathematics from Yale University and his PhD in physics from the University of Chicago. He is currently a postdoctoral researcher at the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions (IAIFI).
Ge Yang's research involves two related sets of problems. The first is improving learning by revisiting how we represent knowledge in neural networks and how knowledge transfers across distributions. The second examines reinforcement learning through the lens of theoretical tools such as neural tangent kernels, non-Euclidean geometry, and Hamiltonian dynamics.
Xiaolong Wang
Xiaolong Wang is an assistant professor in the ECE Department at UC San Diego. He is a member of the robotics team at the TILOS National Science Foundation Institute for Artificial Intelligence.
He received his PhD in robotics from Carnegie Mellon University and did postdoctoral research at the University of California, Berkeley.