


While some apps still lock 4K image quality and 60 fps video behind a paid membership, AI researchers have already achieved 4K-level dynamic 3D view synthesis, and the results play back remarkably smoothly.
In everyday life, most of the videos we encounter are 2D. When watching such a video, we have no way to choose our viewing angle, for example walking among the actors or moving to a corner of the scene. The emergence of VR and AR devices has made up for this shortcoming: the 3D videos they provide let us change our perspective and even move around freely, greatly improving the sense of immersion.
However, synthesizing this kind of dynamic 3D scene has long been a challenge, both in terms of image quality and smoothness.
Recently, researchers from Zhejiang University, Xiangyan Technology, and Ant Group took on this problem. In a paper titled "4K4D: Real-Time 4D View Synthesis at 4K Resolution", they propose a point cloud representation called 4K4D that greatly improves the rendering speed of high-resolution dynamic 3D scene synthesis. Specifically, on an RTX 4090 GPU, their method renders at up to 80 FPS at 4K resolution and up to 400 FPS at 1080p. Overall, it is more than 30 times faster than previous methods, and its rendering quality reaches SOTA.
The following is an introduction to the paper.
Paper Overview
- Paper link: https://arxiv.org/pdf/2310.11448.pdf
- Project link: https://zju3dv.github.io/4k4d/
Dynamic view synthesis aims to reconstruct dynamic 3D scenes from captured video and create immersive virtual replays, a long-standing research problem in computer vision and computer graphics. The key to the utility of this technology is its ability to render with high fidelity in real time, enabling its use in VR/AR, sports broadcasting, and the capture of artistic performances. Traditional approaches represent dynamic 3D scenes as sequences of textured meshes and reconstruct them with complex hardware, so they are usually restricted to controlled environments.
Recently, implicit neural representations have achieved great success in reconstructing dynamic 3D scenes from RGB videos via differentiable rendering. For example, "Neural 3D Video Synthesis from Multi-View Video" models the target scene as a dynamic radiance field, synthesizes images with volume rendering, and optimizes the representation by comparing against the input images. Despite the impressive dynamic view synthesis results, existing methods often take seconds or even minutes to render a single image at 1080p resolution because of expensive network evaluation.
Inspired by static view synthesis methods, some dynamic view synthesis methods improve rendering speed by reducing the cost or the number of network evaluations. With these strategies, MLP Maps can render dynamic foreground figures at 41.7 FPS. However, rendering speed remains a challenge: the real-time performance of MLP Maps is only achievable when synthesizing images of moderate resolution (384×512), and when rendering a 4K image it slows down to just 1.3 FPS.
In this paper, researchers propose a new neural representation - 4K4D, for modeling and rendering dynamic 3D scenes. As shown in Figure 1, 4K4D significantly outperforms previous dynamic view synthesis methods in rendering speed while being competitive in rendering quality.
The authors state that their core innovations are a 4D point cloud representation and a hybrid appearance model. Specifically, for a dynamic scene they use a space carving algorithm to obtain a coarse point cloud sequence and model the position of each point as a learnable vector. They also introduce a 4D feature grid that assigns a feature vector to each point, which is fed into an MLP network to predict the radius, density, and spherical harmonics (SH) coefficients of each point. The 4D feature grid naturally applies spatial regularization to the point cloud, making optimization more robust. On top of 4K4D, the researchers developed a differentiable depth peeling algorithm that uses hardware rasterization to achieve unprecedented rendering speeds.
The researchers found that an MLP-based SH model struggles to represent the appearance of dynamic scenes. To alleviate this, they introduce an image blending model that is combined with the SH model to represent the scene's appearance. A key design choice is to make the image blending network independent of the viewing direction, so it can be precomputed after training to improve rendering speed. As a double-edged sword, this strategy makes the image blending model discrete along the viewing direction, a problem that the continuous SH model compensates for. Compared with 3D Gaussian Splatting, which uses only an SH model, the proposed hybrid appearance model makes full use of the information captured by the input images and thus effectively improves rendering quality.
To verify the effectiveness of the new method, the researchers evaluated 4K4D on several widely used multi-view dynamic novel view synthesis datasets, including NHR, ENeRF-Outdoor, DNA-Rendering, and Neural3DV. Extensive experiments show that 4K4D is not only orders of magnitude faster in rendering speed but also significantly better than SOTA methods in rendering quality. Using an RTX 4090 GPU, the new method reaches 400 FPS on the DNA-Rendering dataset at 1080p resolution and 80 FPS on the ENeRF-Outdoor dataset at 4K resolution.
Method Introduction
Given a multi-view video capturing a dynamic 3D scene, the paper aims to reconstruct the target scene and perform view synthesis in real time. The model architecture is shown in Figure 2.
The paper then describes how point clouds are used to model dynamic scenes, starting from the 4D embedding and expanding to the geometric model and the appearance model.
4D Embedding: Given a coarse point cloud of the target scene, the paper uses neural networks and feature grids to represent its dynamic geometry and appearance. Specifically, it first defines six feature planes θ_xy, θ_xz, θ_yz, θ_tx, θ_ty and θ_tz, and adopts the K-Planes strategy of using these six planes to model a 4D feature field Θ(x, t).
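To make this concrete, below is a minimal sketch of such a K-Planes-style 4D feature lookup. It is not the authors' code: the plane resolution, the feature dimension, and the choice of combining the six plane features by element-wise product (as in the original K-Planes formulation) are illustrative assumptions.

```python
# Minimal sketch of a K-Planes-style 4D feature field (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

class FeatureField4D(torch.nn.Module):
    def __init__(self, res=64, channels=32):  # resolution and width are made-up values
        super().__init__()
        # One learnable 2D feature plane per axis pair: xy, xz, yz, tx, ty, tz.
        self.planes = torch.nn.ParameterDict({
            name: torch.nn.Parameter(0.1 * torch.randn(1, channels, res, res))
            for name in ["xy", "xz", "yz", "tx", "ty", "tz"]
        })

    @staticmethod
    def _sample(plane, u, v):
        # Bilinearly sample a (1, C, H, W) plane at normalized coords in [-1, 1].
        grid = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)
        feat = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1)
        return feat.squeeze(-1).squeeze(0).t()                  # (N, C)

    def forward(self, x, t):
        # x: (N, 3) point positions in [-1, 1]; t: (N,) normalized frame time in [-1, 1].
        px, py, pz = x[:, 0], x[:, 1], x[:, 2]
        pairs = {"xy": (px, py), "xz": (px, pz), "yz": (py, pz),
                 "tx": (t, px), "ty": (t, py), "tz": (t, pz)}
        f = None
        for name, (u, v) in pairs.items():
            s = self._sample(self.planes[name], u, v)
            f = s if f is None else f * s  # combine plane features (assumed Hadamard product)
        return f  # (N, C) feature vector f fed to the geometry and appearance MLPs
```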
Geometric model: Based on the coarse point cloud, the dynamic scene geometry is constructed by learning three attributes for each point: the position p ∈ R^3, the radius r ∈ R, and the density σ ∈ R. With the help of these points, the volume density at a point x in space can then be computed. The point position p is modeled as an optimizable vector, while the radius r and density σ are predicted by feeding the feature vector f from Eq. (1) into an MLP network.
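A minimal sketch of what such a geometry head could look like, assuming a small two-layer MLP with softplus activations to keep the radius and density positive (the layer sizes and activations are assumptions, not the paper's exact architecture):

```python
# Illustrative geometry head: per-point feature -> (radius, density).
import torch

class GeometryHead(torch.nn.Module):
    def __init__(self, feat_dim=32, hidden=64):  # made-up sizes
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2),  # two outputs: radius and density logits
        )

    def forward(self, f):
        out = self.mlp(f)
        radius = torch.nn.functional.softplus(out[:, 0])   # r > 0
        density = torch.nn.functional.softplus(out[:, 1])  # sigma >= 0
        return radius, density
```

The point positions themselves would simply be registered as learnable parameters, e.g. `positions = torch.nn.Parameter(coarse_points)`, and optimized jointly with the feature planes and MLPs.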
Appearance model: As shown in Figure 2c, the paper combines image blending with a spherical harmonics (SH) model to build a hybrid appearance model, where the image blending part represents the discrete view-dependent appearance c_ibr and the SH model represents the continuous view-dependent appearance c_sh. For a point x at the t-th frame, its color in the viewing direction d is obtained by combining the two terms.
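Based on this description, the combined color plausibly takes an additive form such as c(x, t, d) ≈ c_ibr(x, t) + c_sh(x, t, d), where c_ibr blends colors from nearby input images (and can be precomputed because it does not depend on d), while c_sh evaluates the predicted SH coefficients in the viewing direction d; the exact combination used in the paper may differ.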
Differentiable Depth Peeling
The dynamic scene representation proposed in the paper can be rendered into an image with the help of a depth peeling algorithm.
The researchers developed custom shaders to implement a depth peeling algorithm consisting of K rendering passes: for a given pixel u, the passes are executed one after another, and after K passes pixel u obtains a set of depth-sorted points {x_k | k = 1, ..., K}.
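The actual method runs as shaders on the hardware rasterizer, but the per-pixel logic can be illustrated with a toy CPU version. In the sketch below, the fragments covering a pixel (produced by rasterizing the points with their radii) are assumed to be given, and the function peels off up to K layers, one per pass, each strictly behind the previous one:

```python
# Toy per-pixel depth peeling (conceptual illustration only; the paper uses GPU shaders).
import numpy as np

def depth_peel(frag_depths, frag_ids, K):
    """frag_depths: (N,) depths of the fragments covering one pixel.
    frag_ids: (N,) indices of the points that produced those fragments.
    Returns up to K point indices, sorted front to back."""
    last_depth = -np.inf
    layers = []
    for _ in range(K):
        # Candidates are fragments strictly behind the previously peeled layer.
        mask = frag_depths > last_depth
        if not mask.any():
            break
        k = int(np.argmin(np.where(mask, frag_depths, np.inf)))
        layers.append(frag_ids[k])
        last_depth = frag_depths[k]
    return layers
```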
Based on these points {x_k|k = 1, ..., K}, the color of pixel u in volume rendering is expressed as:
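In the standard alpha-compositing form, this reads C(u) = Σ_{k=1}^{K} T_k · α_k · c(x_k, t, d), with transmittance T_k = Π_{j<k} (1 − α_j), where α_k is the opacity contributed by point x_k (derived from its density and radius); the precise definition of α_k is given in the paper.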
During training, given the rendered pixel color C(u), the paper compares it with the ground-truth pixel color C_gt(u) and optimizes the model end to end with a rendering loss. A perceptual loss and a mask loss are also applied, and the final loss function is a combination of these terms.
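The individual terms are defined in the paper; a plausible overall form, assuming an L2 image term and weighted perceptual (LPIPS) and mask terms, is L = Σ_u ||C(u) − C_gt(u)||²_2 + λ_lpips · L_lpips + λ_msk · L_msk, where the λ weights are hyperparameters.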
Experiments and Results
The paper evaluates 4K4D on the DNA-Rendering, ENeRF-Outdoor, NHR, and Neural3DV datasets.
The results on the DNA-Rendering dataset are shown in Table 1. They show that 4K4D renders more than 30 times faster than ENeRF, which has SOTA performance, while achieving better rendering quality.
Qualitative results on the DNA-Rendering dataset are shown in Figure 5. K-Planes cannot recover the detailed appearance and geometry of 4D dynamic scenes, whereas the other image-based methods produce high-quality appearance. However, those methods tend to produce blurry results around occlusions and edges, reducing visual quality, whereas 4K4D produces higher-fidelity renderings at over 200 FPS.
Next, the experiments show the qualitative and quantitative results of different methods on the ENeRF-Outdoor dataset. As shown in Table 2, 4K4D still achieves significantly better results while rendering at over 140 FPS.
Other methods, such as ENeRF, produce blurry results; IBRNet's renderings contain black artifacts around image edges, as shown in Figure 3; and K-Planes cannot reconstruct the dynamic human bodies and the distinct background regions.
Table 6 demonstrates the effectiveness of the differentiable depth peeling algorithm, with 4K4D being more than 7 times faster than CUDA-based methods.
This article also reports 4K4D rendering speeds on different hardware (RTX 3060, 3090, and 4090) at different resolutions in Table 7.
Please see the original paper for more details.

