New Range View 3D detection idea: RangePerception
Paper title: RangePerception: Taming LiDAR Range View for Efficient and Accurate 3D Object Detection
Paper link: https://openreview.net/pdf?id=9kFQEJSyCM
Author affiliations: Shanghai Artificial Intelligence Laboratory and Fudan University
Current LiDAR-based 3D detection methods mainly rely on either the bird's-eye view (BEV) or the range view (RV) representation. BEV methods depend on voxelization and 3D convolution, which makes training and inference inefficient. RV methods, by contrast, are more efficient thanks to their compactness and compatibility with 2D convolutions, but their accuracy still lags behind BEV methods. To close this performance gap while preserving the efficiency of range-view processing, this work proposes RangePerception, an efficient and accurate RV-based 3D object detection framework. Through careful analysis, the study identifies two key challenges that limit existing RV methods: 1) there is a natural domain gap between the 3D world coordinates used in the output and the 2D range-image coordinates used in the input, making it difficult to extract information from range images; 2) the raw range image suffers from vision corruption, which degrades detection accuracy for objects near the edges of the range image. To address these challenges, the paper proposes two novel algorithms, the Range Aware Kernel (RAK) and the Vision Restoration Module (VRM), which facilitate information flow between the range-image representation and 3D detection results in world coordinates. With RAK and VRM, RangePerception improves average L1/L2 AP by 3.25/4.18 over RangeDet, the previous state-of-the-art RV method, on the Waymo Open Dataset (WOD). RangePerception is also the first RV-based 3D detector whose average AP slightly exceeds that of the well-known BEV-based method CenterPoint, while running 1.3x faster at inference.
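For readers unfamiliar with the range-view representation: a range image is produced by spherically projecting the LiDAR point cloud, and the domain gap described above arises because the network consumes these 2D pixel coordinates while the detector must output boxes in 3D world coordinates. A minimal sketch of such a projection, using assumed sensor parameters (image size and vertical field of view) rather than the paper's exact configuration:

```python
import numpy as np

def to_range_image(points, H=64, W=2650, fov_up_deg=2.0, fov_down_deg=-24.9):
    """Project an (N, 3) LiDAR point cloud onto an H x W range image.

    Each point (x, y, z) maps to a pixel via its azimuth and inclination;
    the pixel stores the point's range. H, W and the vertical FOV here are
    illustrative sensor parameters, not the paper's settings.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)                 # range per point
    azimuth = np.arctan2(y, x)                            # in (-pi, pi]
    inclination = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    u = ((azimuth + np.pi) / (2 * np.pi) * W).astype(int) % W    # column index
    v = ((fov_up - inclination) / (fov_up - fov_down) * H).astype(int)
    v = np.clip(v, 0, H - 1)                                     # row index

    image = np.full((H, W), -1.0)   # -1 marks pixels with no LiDAR return
    image[v, u] = r
    return image
```

Note that neighboring pixels in this image can correspond to points that are far apart in 3D, which is exactly the spatial misalignment that RAK (below) is designed to handle.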
This paper presents RangePerception, an efficient and accurate RV-based 3D detection framework. To overcome the key challenges above, two novel algorithms, the Range Aware Kernel (RAK) and the Vision Restoration Module (VRM), are proposed and integrated into the framework; both facilitate information flow between the range-image representation and 3D detection results in world coordinates. With RAK and VRM, RangePerception achieves state-of-the-art performance among range-view-based 3D detectors, delivering 73.62, 80.24 and 70.33 L1 3D AP on WOD for the vehicle, pedestrian and cyclist classes respectively. The contributions of this article are as follows.
RangePerception Framework. This article introduces RangePerception, a novel high-performance 3D detection framework. RangePerception is the first RV-based 3D detector to achieve an average L1/L2 AP of 74.73/69.17 on WOD, surpassing the previous state-of-the-art RV-based detector RangeDet (average L1/L2 AP of 71.48/64.99) by 3.25/4.18. RangePerception also performs slightly better than the widely used BEV-based method CenterPoint [6], which achieves an average L1/L2 AP of 74.25/68.04. Notably, RangePerception's inference speed is 1.3x that of CenterPoint, making it better suited to real-time deployment on autonomous vehicles.
Range Aware Kernel. Part of the RangePerception feature extractor, the Range Aware Kernel (RAK) is a groundbreaking algorithm tailored to RV-based networks. RAK decomposes the range-image space into multiple subspaces and overcomes the spatial misalignment issue by extracting features from each subspace independently. Experimental results show that RAK improves average L1/L2 AP by 5.75/5.99 at negligible computational cost.
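The core idea of partitioning by range can be sketched as follows. This is a hedged illustration, not the paper's implementation: the subspace boundaries are assumed values, and a fixed 3x3 box filter stands in for the per-subspace learned convolutions a real network would use.

```python
import numpy as np

def _box3x3_same(x):
    """3x3 mean filter with zero padding (stand-in for a learned conv)."""
    p = np.pad(x, 1)
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def range_aware_features(range_image, features, bounds=(0.0, 15.0, 30.0, np.inf)):
    """Sketch of the Range Aware Kernel idea: partition pixels into range
    subspaces and filter each subspace independently, so that near and far
    points (which have very different point densities) never share a kernel.

    range_image: (H, W) per-pixel range; features: (C, H, W).
    bounds: assumed subspace boundaries in meters, not the paper's values.
    """
    C = features.shape[0]
    out = np.zeros_like(features)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (range_image >= lo) & (range_image < hi)
        sub = features * mask                        # keep only this subspace
        filt = np.stack([_box3x3_same(sub[c]) for c in range(C)])
        out += filt * mask                           # write back inside subspace
    return out
```

Because each pixel belongs to exactly one subspace, the masked results can simply be summed back into one feature map, which keeps the extra cost small.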
Vision Restoration Module. To solve the vision corruption issue, this research proposes the Vision Restoration Module (VRM). VRM expands the receptive field of the backbone network by restoring the previously corrupted regions. As shown in the experimental section, VRM is particularly helpful for vehicle detection.
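Why restoration helps: the azimuth seam at θ = 0 / 2π splits any object that straddles it across the two borders of the range image. The sketch below illustrates the underlying idea with simple circular column padding, which builds an extended image over [−δ, 2π + δ]; the restoration angle used here is an assumed value, not the paper's setting.

```python
import numpy as np

def restore_vision(range_image, delta_deg=15.0):
    """Sketch of the Vision Restoration Module idea: copy `delta_deg` worth
    of columns across the azimuth seam so that objects near the left and
    right edges of the range image become contiguous in the extended image.

    range_image: (H, W) with columns spanning azimuth [0, 2*pi).
    """
    H, W = range_image.shape
    pad = int(round(W * delta_deg / 360.0))   # columns spanning delta degrees
    left = range_image[:, W - pad:]           # wraps in from theta near 2*pi
    right = range_image[:, :pad]              # wraps in from theta near 0
    return np.concatenate([left, range_image, right], axis=1)
```

The same effect can be had with `np.pad(range_image, ((0, 0), (pad, pad)), mode="wrap")`; after the backbone runs, the extra columns can be cropped away so downstream layers see the original width.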
Figure 2: The RangePerception framework takes a range image I as input and generates dense predictions. To improve representation learning, the framework applies the VRM and RAK modules in sequence before the Range Backbone. A purpose-built Redundancy Pruner then removes redundancy from the deep features, reducing the computational cost of the subsequent Region Proposal Network and post-processing layers.
Figure 1: (a-d) Example frames of the top LiDAR signal, represented in RV and BEV respectively. (e) The spatial misalignment phenomenon. (f) The vision corruption phenomenon.
Figure 3: The Range Aware Kernel decomposes the range-image space into multiple subspaces and overcomes the spatial misalignment issue by extracting independent features from each subspace.
Figure 5: Vision Restoration Module. Given a predefined restoration angle δ, VRM constructs an extended spherical space with azimuth angles θ ∈ [−δ, 2π + δ]. This resolves the vision corruption on both sides of the range image I, significantly simplifying feature extraction at the edges of I.
@inproceedings{bai2023rangeperception,
  title     = {RangePerception: Taming {LiDAR} Range View for Efficient and Accurate 3D Object Detection},
  author    = {Yeqi Bai and Ben Fei and Youquan Liu and Tao Ma and Yuenan Hou and Botian Shi and Yikang Li},
  booktitle = {Thirty-seventh Conference on Neural Information Processing Systems},
  year      = {2023},
  url       = {https://openreview.net/forum?id=9kFQEJSyCM}
}