Climbing, jumping, and crossing narrow gaps: open-source reinforcement learning strategies allow robot dogs to parkour
Parkour is an extreme sport, and it poses a huge challenge for robots, especially quadruped robot dogs, which must quickly overcome varied obstacles in complex environments. Some studies have attempted to use reference animal data or complex reward designs, but these approaches produce parkour skills that are either diverse but blind, or vision-based but scene-specific. Autonomous parkour instead requires robots to learn general skills that are both vision-based and diverse, so they can perceive varied scenarios and respond quickly.
Recently, a video of a robot dog doing parkour went viral. The robot dog in the video quickly overcomes a variety of obstacles across different scenarios: for example, it passes through the gap under an iron plate, climbs onto a wooden box, and then jumps to another wooden box. The whole sequence of movements is smooth and fluid:
This sequence shows that the robot dog has mastered three basic skills: crawling, climbing, and jumping.
It also has a special skill: it can squeeze through narrow gaps at an angle.
If the robot dog fails to overcome the obstacle, it will try a few more times:
This robot dog is built on a parkour-skill learning framework developed for low-cost robots. The framework was jointly proposed by researchers from Shanghai Qizhi Research Institute, Stanford University, ShanghaiTech University, CMU, and Tsinghua University, and the research paper has been accepted to CoRL 2023 as an oral presentation. The project has been open-sourced.
Paper address: https://arxiv.org/abs/2309.05665
Project address: https://github.com/ZiwenZhuang/parkour
This study introduces a new open-source system for learning end-to-end, vision-based parkour policies, which acquires multiple parkour skills using simple rewards and no reference motion data.
Specifically, the research proposes a reinforcement learning method that lets robots learn to climb high obstacles, leap over large gaps, crawl under low obstacles, squeeze through tight gaps, and run, and then distills these skills into a single vision-based parkour policy. The policy is transferred to a quadruped robot that relies only on an egocentric depth camera.
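As a rough illustration of what "simple rewards" means here, a forward-progress term plus a small energy penalty is a common minimal design. The terms and coefficients below are illustrative assumptions, not the paper's exact reward:

```python
def simple_locomotion_reward(forward_vel: float,
                             joint_power: float,
                             fell_over: bool,
                             target_vel: float = 1.0,
                             energy_coef: float = 1e-3) -> float:
    """A minimal 'simple reward' sketch: track a forward velocity and
    lightly penalize actuation energy. All terms and coefficients here
    are illustrative, not the paper's exact formulation."""
    if fell_over:
        return -10.0  # penalize failure (e.g., the robot falling)
    velocity_term = -abs(forward_vel - target_vel)  # keep moving forward
    return velocity_term - energy_coef * joint_power
```

The point is that no motion-capture reference or per-obstacle reward shaping is required; the diverse skills emerge from training rather than hand-crafted rewards.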
To deploy the proposed parkour policy on a low-cost robot, only onboard computing (an Nvidia Jetson), an onboard depth camera (an Intel Realsense), and onboard power are required; no motion capture, lidar, multiple depth cameras, or extensive off-board computation is needed.
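For a sense of what such an onboard perception loop could look like, the sketch below captures depth frames with the pyrealsense2 SDK and downsamples them for a policy. The stream settings, preprocessing, and the `policy` call are assumptions for illustration, not the project's actual deployment code:

```python
import numpy as np
import pyrealsense2 as rs

# Configure the onboard Intel Realsense depth stream.
# Resolution and FPS here are illustrative; the actual deployment
# settings may differ (see the project repository).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        if not depth_frame:
            continue
        # Convert raw depth to meters and downsample for the policy input.
        depth = np.asanyarray(depth_frame.get_data()).astype(np.float32) / 1000.0
        obs = depth[::8, ::8]  # crude downsampling for illustration
        # action = policy(obs, proprioception)  # hypothetical policy call
finally:
    pipeline.stop()
```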
To train the parkour policy, the research proceeds in the following three stages:
Stage 1: reinforcement learning pre-training with soft dynamics constraints. The study uses an automatic curriculum that encourages the robot to gradually learn to traverse obstacles.
Stage 2: reinforcement learning fine-tuning with hard dynamics constraints. At this stage, all dynamics constraints are enforced, and the behaviors learned during pre-training are fine-tuned under realistic dynamics.
Stage 3: distillation. After each individual parkour skill is learned, the study uses DAgger to distill them into a single vision-based parkour policy (parameterized by an RNN) that can be deployed on a legged robot using only onboard perception and computation.
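Taken together, the three stages can be sketched as the training skeleton below. All function and variable names are illustrative stand-ins (the repository's actual API differs); the sketch only shows how the curriculum, the soft-to-hard dynamics switch, and DAgger distillation fit together:

```python
import random

# Illustrative stand-ins for the real training machinery (assumptions).
def rl_update(policy, skill, soft_dynamics, difficulty):
    """Stub for a round of RL training on one skill environment."""
    return policy  # a real implementation would return improved weights

def dagger_update(student, teacher, skill):
    """Stub for one DAgger step: roll out the student in the skill env,
    relabel its visited states with the specialist teacher's actions,
    then run supervised updates on the student."""
    return student

SKILLS = ["climb", "leap", "crawl", "tilt", "run"]

specialists = {}
for skill in SKILLS:
    policy = {"weights": None}  # placeholder for initialized weights
    # Stage 1: pre-train with soft dynamics constraints; an automatic
    # curriculum ramps obstacle difficulty so the skill emerges gradually.
    for difficulty in [i / 10 for i in range(11)]:
        policy = rl_update(policy, skill, soft_dynamics=True,
                           difficulty=difficulty)
    # Stage 2: fine-tune with hard (fully enforced, realistic) dynamics.
    policy = rl_update(policy, skill, soft_dynamics=False, difficulty=1.0)
    specialists[skill] = policy

# Stage 3: distill all specialists into one recurrent vision policy
# that can run from onboard depth images alone.
student = {"weights": None}
for _ in range(100):
    skill = random.choice(SKILLS)
    student = dagger_update(student, specialists[skill], skill)
```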
During training, the study sets corresponding obstacle sizes for each skill, as shown in Table 1 below:
The study conducted extensive experiments in both simulation and the real world. The results show that the parkour policy enables a low-cost quadruped robot to autonomously select and execute appropriate parkour skills to traverse challenging open-world environments using only onboard computing, onboard visual sensing, and onboard power: climbing obstacles as high as 0.40 m (1.53x robot height), leaping over gaps as large as 0.60 m (1.5x robot length), crawling under obstacles as low as 0.2 m (0.76x robot height), squeezing through gaps as thin as 0.28 m by tilting (narrower than the robot's width), and running continuously.
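The per-skill obstacle dimensions reported above can be summarized as a small configuration mapping. The key names are illustrative assumptions, and the paper's Table 1 should be consulted for the full training settings:

```python
# Hardest real-world obstacle sizes reported in this article, in meters.
# Key names are illustrative; see Table 1 in the paper for details.
OBSTACLE_SIZES = {
    "climb": {"obstacle_height": 0.40},   # ~1.53x robot height
    "leap":  {"gap_length": 0.60},        # ~1.5x robot length
    "crawl": {"clearance_height": 0.20},  # ~0.76x robot height
    "tilt":  {"gap_width": 0.28},         # narrower than the robot
}
```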
In addition, the study compares the proposed method with several baseline methods and performs ablation experiments in a simulated environment. The results are shown in Table 2:
Interested readers can refer to the original paper for more details of the research.