Achieving efficient and realistic ultra-large-scale city rendering: combining NeRF and feature grid technology
Neural Radiance Fields (NeRF) built purely on MLPs often suffer from under-fitting and produce blurry renderings of large-scale scenes due to limited model capacity. Recently, some researchers have proposed to geographically divide the scene and model each region with a separate sub-NeRF. The drawback is that as the scene grows, the training cost scales linearly with the number of sub-NeRFs.
Another solution is the voxel feature grid representation, which is computationally efficient and scales naturally to large scenes by increasing the grid resolution. However, because it is less constrained, the feature grid often converges to suboptimal solutions, producing noisy artifacts in renderings, especially in areas with complex geometry and texture.
In this article, researchers from the Chinese University of Hong Kong, Shanghai Artificial Intelligence Laboratory, and other institutions propose a new framework for high-fidelity yet computationally efficient rendering of urban scenes; the work was accepted to CVPR 2023. The study uses a compact multi-resolution ground feature plane representation to coarsely capture the scene, and complements it with position-encoded inputs through a NeRF branch network, rendered in a jointly learned manner. This approach integrates the advantages of the two lines of work: under the guidance of the feature grid representation, a lightweight NeRF is enough to render realistic novel views with fine details, while the jointly optimized ground feature planes are further refined into a more accurate and compact feature space, yielding more natural rendering results.
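For readers unfamiliar with the position-encoded inputs mentioned above, here is a minimal sketch of the standard NeRF-style frequency encoding (this is a common textbook formulation, not code from the paper; the frequency count is an assumption):

```python
import torch

def positional_encoding(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    """Map each coordinate to sin/cos features at exponentially growing
    frequencies (the pi factor is omitted, as in many implementations).
    x: (..., 3) coordinates -> (..., 3 * 2 * num_freqs) encoding."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)   # (L,)
    angles = x[..., None] * freqs                             # (..., 3, L)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                          # (..., 6L)
```

This mapping lets a small MLP represent high-frequency variations in geometry and color that raw coordinates alone cannot express.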
The picture below shows example results of this method on a real-world urban scene, giving an immersive city-roaming experience:
Method Introduction

To effectively reconstruct large urban scenes with implicit neural representations, this study proposes a dual-branch model architecture that adopts a unified scene representation and integrates an explicit voxel-grid-based method with an implicit NeRF-based method, so that the two types of representation complement each other.
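As an illustration of what a multi-resolution ground feature plane representation might look like, here is a minimal PyTorch sketch (the class name, resolutions, and channel count are our own assumptions for illustration, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundFeaturePlanes(nn.Module):
    """Hypothetical multi-resolution 2D feature planes over the ground (xy)
    plane. Features for a 3D point are bilinearly interpolated from each
    plane at its xy coordinates, then concatenated across resolutions."""
    def __init__(self, resolutions=(128, 256, 512), channels=16):
        super().__init__()
        self.planes = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, channels, r, r)) for r in resolutions]
        )

    def forward(self, xyz_normalized: torch.Tensor) -> torch.Tensor:
        # xyz_normalized: (N, 3), with x and y already scaled to [-1, 1]
        uv = xyz_normalized[None, :, None, :2]                # (1, N, 1, 2)
        feats = [
            F.grid_sample(p, uv, mode="bilinear", align_corners=True)
            .squeeze(0).squeeze(-1).t()                       # (N, channels)
            for p in self.planes
        ]
        return torch.cat(feats, dim=-1)                       # (N, C * planes)
```

Because the planes are 2D, memory grows with the ground area rather than the scene volume, which is what makes this representation compact for sprawling urban scenes.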
The target scene is first modeled with the feature grid in a pre-training stage to coarsely capture the scene's geometry and appearance. The coarse feature grid is then used to 1) guide NeRF point sampling so that samples concentrate around the scene surface, and 2) supply NeRF's positional encoding with additional features describing the scene geometry and appearance at the sampled locations. With such guidance, NeRF can efficiently recover finer details in a greatly compressed sampling space. Furthermore, since coarse-level geometry and appearance information is explicitly provided to NeRF, a lightweight MLP is sufficient to learn the mapping from global coordinates to volume density and color. In a second, joint-learning stage, the coarse feature grid is further optimized and regularized via gradients from the NeRF branch, producing more accurate and natural rendering results even when used alone.
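A minimal sketch of what grid-guided point sampling could look like follows; it mirrors NeRF's hierarchical inverse-CDF resampling, driven here by the coarse grid density (the paper's exact scheme may differ, and the density-proportional weighting is a simplification):

```python
import torch

def grid_guided_resample(t_uniform: torch.Tensor,
                         coarse_density: torch.Tensor,
                         num_fine: int) -> torch.Tensor:
    """Concentrate ray samples near the surface suggested by the coarse grid.
    t_uniform: (R, S) uniform sample depths per ray.
    coarse_density: (R, S) non-negative densities queried from the grid.
    Returns (R, num_fine) depths drawn from a PDF proportional to density."""
    weights = coarse_density + 1e-5                       # avoid a zero PDF
    pdf = weights / weights.sum(dim=-1, keepdim=True)     # (R, S)
    cdf = torch.cumsum(pdf, dim=-1)                       # (R, S)
    u = torch.rand(t_uniform.shape[0], num_fine, device=t_uniform.device)
    idx = torch.searchsorted(cdf, u).clamp(max=t_uniform.shape[-1] - 1)
    return torch.gather(t_uniform, -1, idx)               # (R, num_fine)
```

The effect is that fine samples land almost exclusively in the thin band of space near the scene surface, which is why a lightweight NeRF MLP can still recover sharp details.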
The core of this research is the new dual-branch structure, namely the grid branch and the NeRF branch. 1) In the pre-training stage, the researchers first capture the scene with a pyramid of feature planes and render coarsely sampled ray points through a shallow MLP renderer (the grid branch), predicting their radiance values under the supervision of an MSE loss on the volume-rendered pixel colors. This step produces an information-rich set of multi-resolution density/appearance feature planes. 2) In the joint-learning stage, finer sampling is performed: the learned feature grid guides NeRF-branch sampling to focus on the scene surface. The grid features of the sampled points are obtained by bilinear interpolation on the feature planes, then concatenated with the positional encoding and fed into the NeRF branch to predict volume density and color. Note that during joint training, the output of the grid branch is still supervised by both the ground-truth images and the fine rendering results from the NeRF branch, as in the sketch below.
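A hypothetical joint-training step might look like the following sketch; the branch interfaces and the loss weight are our assumptions for illustration, not the authors' released code, though the three supervision terms follow the description above:

```python
import torch.nn.functional as F

def joint_training_step(grid_branch, nerf_branch, rays, gt_rgb, optimizer):
    # 1) Grid branch: coarse feature planes + shallow MLP render the rays.
    rgb_grid, coarse_density, grid_feats = grid_branch(rays)
    # 2) NeRF branch: grid-guided fine samples; per-point grid features are
    #    concatenated with positional encodings inside the branch.
    rgb_nerf = nerf_branch(rays, coarse_density, grid_feats)
    # Both outputs are supervised by ground-truth pixels; the grid branch is
    # additionally pulled toward the sharper NeRF rendering (assumed weight).
    loss = (F.mse_loss(rgb_nerf, gt_rgb)
            + F.mse_loss(rgb_grid, gt_rgb)
            + 0.1 * F.mse_loss(rgb_grid, rgb_nerf.detach()))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Detaching the NeRF rendering in the last term keeps the distillation one-directional: the grid branch learns from the NeRF branch, while the NeRF branch is trained only against ground truth.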
Target scene: In this work, the study uses the novel grid-guided neural radiance field to render large urban scenes. The left side of the image below shows an example of a large urban scene spanning a 2.7 km² ground area, captured by over 5k drone images. Purely NeRF-based methods, with their limited model capacity, render blurry and over-smoothed results, while feature-grid-based methods tend to exhibit noisy artifacts when scaled to large scenes with high-resolution feature grids. The dual-branch model proposed in this study combines the advantages of both approaches and achieves photorealistic novel view rendering, with significant improvements over existing methods; both branches gain substantially over their respective baselines.
The researchers report the performance of the baselines and of their method for comparison, both qualitatively and quantitatively; significant improvements can be observed in visual quality and across all metrics. Their approach recovers sharper geometry and finer details than purely MLP-based methods (NeRF and Mega-NeRF). In particular, due to its limited capacity and spectral bias, NeRF consistently fails to model rapid changes in geometry and color, such as vegetation and the stripes on a playground. Geographically dividing the scene into small regions, as in the Mega-NeRF baseline, helps slightly, but the results still appear over-smoothed. In contrast, guided by the learned feature grid, the sampling space of NeRF is effectively compressed to lie near the scene surface. Density and appearance features sampled from the ground feature planes explicitly represent the scene content, as shown in Figure 3. Although less accurate, they already provide informative local geometry and texture, and encourage NeRF's positional encodings to recover the missing scene details.
Table 1 below shows the quantitative results:
In Figure 6, a rapid improvement in rendering fidelity can be observed:
For more information, please refer to the original paper.