
LidaRF: Studying LiDAR Data for Street View Neural Radiance Fields (CVPR'24)


Photo-realistic simulation plays a key role in applications such as autonomous driving, where advances in neural radiance fields (NeRFs) may enable better scalability through the automatic creation of digital 3D assets. However, reconstruction quality suffers on street scenes because camera motion is highly collinear along the road and sampling is sparse at driving speeds. At the same time, such applications often require rendering from camera viewpoints that deviate from the input trajectory, for example to accurately simulate behaviors such as lane changes. LidaRF presents several insights that allow better utilization of lidar data to improve NeRF quality on street scenes. First, the framework learns a geometric scene representation from lidar data and fuses it with an implicit grid-based representation, so that the decoder benefits from the stronger geometric cues provided by the explicit point cloud. Second, a robust occlusion-aware depth supervision strategy is proposed, which accumulates lidar frames into denser point clouds to supervise NeRF reconstruction of street scenes more effectively. Third, augmented training views are generated from the accumulated lidar points, further improving novel view synthesis under realistic driving maneuvers. Together, the more accurate geometric scene representation learned from lidar yields significant improvements in real driving scenarios.

The contributions of LidaRF fall into three main aspects:

(i) Hybrid representation fusing lidar encoding and grid features. While lidar has typically been used as a natural source of depth supervision, feeding lidar into the NeRF input offers great potential as a geometric prior, yet doing so is not straightforward. To this end, a grid-based representation is adopted, and features learned from the point cloud are fused into the grid to inherit the advantages of the explicit point cloud representation. Following the success of 3D perception frameworks, a 3D sparse convolutional network is used as an effective and efficient structure to extract geometric features from the local and global context of the lidar point cloud.

(ii) Robust occlusion-aware depth supervision. As in existing work, lidar is used here as a source of depth supervision, but in a more thorough way. Since the sparsity of lidar points limits their effectiveness, especially in low-texture areas, denser depth maps are generated by aggregating lidar points across neighboring frames. However, depth maps obtained this way do not take occlusion into account, which leads to erroneous depth supervision. A robust depth supervision scheme is therefore proposed that borrows from curriculum learning: depth is supervised progressively from the near field to the far field, and erroneous depths are gradually filtered out during NeRF training, so that depth is learned from the lidar more effectively.

(iii) Lidar-based view augmentation. Given the view sparsity and limited coverage in driving scenarios, lidar is further utilized to densify the training views: the accumulated lidar points are projected into new training views, which may deviate somewhat from the driving trajectory. These lidar-projected views are added to the training set; although they do not account for occlusion, the supervision scheme above resolves the occlusion problem and improves performance. While the method is also applicable to general scenes, this work focuses on evaluation on street scenes, where it achieves significant quantitative and qualitative improvements over existing techniques.

LidaRF also shows advantages in applications that require larger deviations from the input views, significantly improving NeRF quality in challenging street-scene settings.

Overview of the LidaRF framework

LidaRF maps input sample points to their corresponding densities and colors, combining hash encoding with a lidar encoding extracted by a sparse UNet. In addition, augmented training data are generated via lidar projection, and geometric prediction is trained with the proposed robust depth supervision scheme.


#1) Hybrid representation with lidar encoding

Lidar point clouds hold strong potential as geometric guidance, which is extremely valuable for NeRF (neural radiance fields). However, relying solely on lidar features for the scene representation leads to low-resolution rendering because of the sparsity of lidar points (even with temporal accumulation). In addition, lidar has a limited field of view, for example it cannot capture building surfaces above a certain height, which leaves blank renderings in those regions. Our framework instead fuses lidar features with high-resolution spatial grid features, exploiting the advantages of both and learning them jointly to achieve high-quality and complete scene rendering.

Lidar feature extraction. The geometric feature extraction process for each lidar point is as follows. Referring to Figure 2, the lidar point clouds of all frames in the sequence are first aggregated to build a denser point cloud. The point cloud is then voxelized into a voxel grid, and the spatial positions of the points inside each voxel are averaged to form an initial 3-dimensional feature per occupied voxel. Inspired by the widespread success of 3D perception frameworks, scene geometry features are then encoded with a 3D sparse UNet operating on the voxel grid, which allows learning from the global context of the scene geometry. The 3D sparse UNet takes the voxel grid and its 3-dimensional features as input and outputs neural volumetric features, an n-dimensional feature for each occupied voxel.
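As a rough illustration of the voxelization step, the sketch below averages point positions per voxel with NumPy; the voxel size is an arbitrary example value, and the 3D sparse UNet itself is left as a placeholder since the article does not tie it to a specific backbone implementation.

```python
import numpy as np

def voxelize_lidar(points, voxel_size=0.1):
    """Average the positions of aggregated lidar points falling into each voxel.

    points: (N, 3) array of lidar positions in the world frame.
    Returns the occupied voxel indices and a per-voxel 3-d feature
    (the mean point position inside that voxel).
    """
    coords = np.floor(points / voxel_size).astype(np.int64)          # voxel index of every point
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)   # occupied voxels
    inverse = inverse.reshape(-1)
    feats = np.zeros((len(uniq), 3), dtype=np.float32)
    counts = np.zeros(len(uniq), dtype=np.float32)
    np.add.at(feats, inverse, points.astype(np.float32))             # sum positions per voxel
    np.add.at(counts, inverse, 1.0)
    feats /= counts[:, None]                                         # mean position per voxel
    return uniq, feats

# (uniq, feats) would then be fed to a 3D sparse UNet (built with a
# sparse-convolution library of choice) to produce an n-dimensional neural
# feature per occupied voxel; that network is omitted from this sketch.
```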

Lidar feature query. For each sample point x along a ray to be rendered, its lidar features are queried if there are at least K nearby lidar points within a search radius R; otherwise its lidar features are set to null (all zeros). Specifically, the fixed-radius nearest neighbor (FRNN) method is used to search for the index set of the K lidar points nearest to x. Unlike the method in [9], which predetermines the ray sample points before training starts, our method performs the FRNN search on the fly, because as NeRF training converges the sample point distribution produced by the proposal network dynamically concentrates near surfaces. Following the Point-NeRF approach, a multilayer perceptron (MLP) F maps the lidar features of each point into a neural scene description: for the i-th neighboring point of x, F takes its lidar features and relative position as input and outputs that point's neural scene description. The lidar encoding ϕL of x is then obtained by aggregating the neural scene descriptions of its K neighboring points with standard inverse distance weighting.
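A hedged sketch of this query step is shown below. It substitutes a brute-force radius search for a dedicated FRNN structure, uses illustrative feature dimensions, and folds the "at least K neighbors" rule into a per-neighbor validity mask; none of these choices are prescribed by the paper.

```python
import torch
import torch.nn as nn

class LidarQuery(nn.Module):
    """Query a lidar encoding phi_L for ray sample points x (illustrative)."""

    def __init__(self, feat_dim=32, out_dim=32, radius=1.0, k=8):
        super().__init__()
        self.radius, self.k = radius, k
        # F: (per-point lidar feature, relative position) -> neural scene description
        self.F = nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(),
                               nn.Linear(64, out_dim))

    def forward(self, x, lidar_xyz, lidar_feat):
        # x: (M, 3) sample points; lidar_xyz: (P, 3); lidar_feat: (P, feat_dim); P >= k assumed
        d = torch.cdist(x, lidar_xyz)                         # (M, P) brute-force distances
        knn_d, knn_idx = d.topk(self.k, largest=False)        # K nearest lidar points per sample
        valid = (knn_d < self.radius).float()                 # keep neighbors inside the radius
        rel = lidar_xyz[knn_idx] - x[:, None, :]              # relative positions (M, K, 3)
        desc = self.F(torch.cat([lidar_feat[knn_idx], rel], dim=-1))  # (M, K, out_dim)
        w = valid / knn_d.clamp(min=1e-6)                     # inverse-distance weights
        w_sum = w.sum(-1, keepdim=True)
        phi_L = (w[..., None] * desc).sum(1) / w_sum.clamp(min=1e-6)
        # samples with no valid neighbor fall back to an all-zero (null) encoding
        return torch.where(w_sum > 0, phi_L, torch.zeros_like(phi_L))
```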

Feature fusion for radiance decoding. The lidar encoding ϕL is concatenated with the hash encoding ϕh, and a multilayer perceptron Fα predicts the density α and a density embedding h for each sample. Finally, another multilayer perceptron Fc predicts the corresponding color c from the spherical harmonics encoding SH of the viewing direction d together with the density embedding h.
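A minimal sketch of this decoding step follows, assuming illustrative feature dimensions; the low-order spherical-harmonics helper and the exact layer widths are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

def sh_encode(d):
    """Low-order real spherical-harmonics encoding of unit view directions d: (M, 3)."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    return torch.stack([
        torch.full_like(x, 0.282095),               # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # l = 1
    ], dim=-1)                                      # (M, 4)

class RadianceDecoder(nn.Module):
    """Fuse lidar encoding phi_L with hash encoding phi_h, then decode density and color."""

    def __init__(self, lidar_dim=32, hash_dim=32, hidden=64, embed_dim=15):
        super().__init__()
        # F_alpha: concatenated features -> (density, density embedding h)
        self.F_alpha = nn.Sequential(nn.Linear(lidar_dim + hash_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1 + embed_dim))
        # F_c: (SH(d), h) -> RGB color
        self.F_c = nn.Sequential(nn.Linear(4 + embed_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, phi_L, phi_h, view_dirs):
        out = self.F_alpha(torch.cat([phi_L, phi_h], dim=-1))
        alpha = torch.relu(out[:, :1])              # density
        h = out[:, 1:]                              # density embedding
        color = self.F_c(torch.cat([sh_encode(view_dirs), h], dim=-1))
        return alpha, color
```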


#2) Robust depth supervision


In addition to feature encoding, depth supervision is also derived from the lidar points. However, due to the sparsity of lidar points, the resulting benefit is limited and insufficient to reconstruct low-texture areas such as road surfaces. We therefore propose to accumulate adjacent lidar frames to increase density. Although the 3D points accurately capture the scene structure, occlusion between points must be considered when projecting them onto the image plane for depth supervision: the increased displacement between the camera and the lidar of neighboring frames causes occluded points to produce false depth supervision, as shown in Figure 3. Because the lidar remains sparse even after accumulation, this problem is difficult to handle, and standard graphics techniques such as z-buffering are not applicable. In this work, a robust supervision scheme is proposed that automatically filters out spurious depth supervision while training the NeRF.

A robust occlusion-aware supervision scheme. A curriculum training strategy is designed so that the model is initially trained with nearer, more reliable depths that are less susceptible to occlusion. As training progresses, the model gradually incorporates depths that are farther away, and it is also allowed to discard depth supervision that deviates abnormally far from its own predictions.
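To make the idea concrete, the loss sketch below combines a near-to-far curriculum with an outlier-rejection mask; the linear schedule, the depth range, and the relative tolerance are illustrative choices and not the paper's exact formulation.

```python
import torch

def robust_depth_loss(pred_depth, lidar_depth, step, total_steps,
                      near=2.0, far=80.0, rel_tol=0.2):
    """Occlusion-aware depth supervision (illustrative sketch).

    pred_depth, lidar_depth: (M,) rendered and lidar-projected depths per ray.
    The supervised range grows from `near` to `far` over training (curriculum),
    and lidar depths that disagree strongly with the current prediction are
    discarded as likely occlusion artifacts.
    """
    # curriculum: only supervise depths up to a limit that grows with training
    limit = near + (far - near) * min(step / total_steps, 1.0)
    in_range = lidar_depth <= limit
    # robustness: drop supervision that is unusually far from the prediction
    consistent = (pred_depth - lidar_depth).abs() <= rel_tol * lidar_depth
    mask = (in_range & consistent).float()
    denom = mask.sum()
    if denom == 0:
        return pred_depth.new_zeros(())
    return ((pred_depth - lidar_depth).abs() * mask).sum() / denom
```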

#3) Lidar-based view augmentation

Recall that, due to the forward motion of the vehicle camera, the training images are sparse and have limited field-of-view coverage, which makes NeRF reconstruction difficult, especially when a novel view deviates from the vehicle trajectory. Here we propose to leverage lidar to augment the training data. First, the point cloud of each lidar frame is colored by projecting it onto its synchronized camera image and interpolating the RGB values. The colored point clouds are then accumulated and projected onto a set of synthetically augmented views, producing the synthetic images and depth maps shown in Figure 2.
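The sketch below shows one way such a synthetic view could be rendered: the accumulated colored points are projected with a pinhole model and the nearest point wins per pixel. The intrinsics, pose convention, and per-pixel selection are assumptions for illustration; as noted above, this projection does not resolve inter-point occlusion, which is handled by the robust supervision scheme.

```python
import numpy as np

def render_lidar_view(points, colors, K, w2c, H, W):
    """Project colored lidar points into a synthetic camera view.

    points: (N, 3) world-frame positions; colors: (N, 3) RGB sampled from the
    synchronized cameras; K: (3, 3) intrinsics; w2c: (4, 4) world-to-camera.
    Returns a sparse RGB image and depth map (nearest point wins per pixel).
    """
    cam = (w2c[:3, :3] @ points.T + w2c[:3, 3:4]).T   # camera-frame points (N, 3)
    z = cam[:, 2]
    front = z > 1e-3                                   # keep points in front of the camera
    uv = (K @ cam[front].T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v = u[inside], v[inside]
    zf, cf = z[front][inside], colors[front][inside]

    image = np.zeros((H, W, 3), dtype=np.float32)
    depth = np.full((H, W), np.inf, dtype=np.float32)
    for i in np.argsort(-zf):                          # far points first, near points overwrite
        image[v[i], u[i]] = cf[i]
        depth[v[i], u[i]] = zf[i]
    depth[np.isinf(depth)] = 0.0                       # pixels hit by no point get zero depth
    return image, depth
```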

Experimental comparative analysis

