
CVPR 2024 | AI can now faithfully recreate a skirt flying in dance: Nanyang Technological University proposes a new paradigm for dynamic human rendering

WBOY · 2024-04-22 14:37:01


AIxiv is a column where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit a contribution or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com.

In daily activities, human motion often induces secondary motion of clothing, producing ever-changing cloth wrinkles. Rendering this faithfully requires simultaneously modeling the geometry, motion (human pose, velocity dynamics, etc.), and appearance of both the body and the clothing. Because this process involves complex non-rigid physical interaction between the body and clothes, it is difficult to handle with traditional 3D representations.

Learning dynamic digital-human rendering from video sequences has made great progress in recent years. Existing methods usually treat rendering as a neural mapping from human pose to image, following a "motion encoder - motion feature - appearance decoder" paradigm supervised by per-frame image loss. This paradigm focuses on reconstructing each frame in isolation and lacks any modeling of motion continuity, so it struggles with complex motions such as body motion combined with the secondary motion of clothing.

To address this problem, the S-Lab team from Nanyang Technological University in Singapore proposed a new paradigm for dynamic human reconstruction based on joint motion-appearance learning, together with a surface-based triplane representation that unifies physical motion modeling and appearance modeling in one framework, opening up a new direction for improving the quality of dynamic human rendering. The new paradigm effectively models the secondary motion of clothing: it can learn dynamic human reconstruction from fast-motion videos (such as dancing) and render motion-dependent shadows. Its rendering is 9 times faster than 3D voxel-based rendering, and LPIPS image quality improves by about 19 percentage points.


  • Paper title: SurMo: Surface-based 4D Motion Modeling for Dynamic Human Rendering
  • Paper address: https://arxiv.org/pdf/2404.01225.pdf
  • Project homepage: https://taohuumd.github.io/projects/SurMo
  • GitHub link: https://github.com/TaoHuUMD/SurMo
Method Overview

[Figure: Overview of the SurMo framework]

To address the shortcoming of the existing paradigm "motion encoder - motion feature - appearance decoder", which attends only to appearance reconstruction and ignores the modeling of motion continuity, this work proposes a new paradigm, SurMo: "① motion encoder - motion feature - ② motion decoder - ③ appearance decoder". As shown in the figure above, the paradigm consists of three stages:

  • Unlike existing methods that model motion in sparse 3D space, SurMo proposes 4D (XYZ-T) motion modeling on the human body surface manifold (equivalently, in the compact 2D texture UV space) and represents motion with a surface-based triplane.
  • A motion physics decoder predicts the motion state of the next frame from the current motion features (such as 3D pose, velocity, and motion trajectory), i.e., the spatial derivative of motion (surface normals) and its temporal derivative (velocity), to model the continuity of motion features.
  • A 4D appearance decoder decodes the motion features over the time sequence to render free-viewpoint 3D video, implemented mainly via Hybrid Volumetric-Textural Rendering (HVTR) [Hu et al. 2022]. A minimal code sketch of these three stages follows this list.
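Below is a minimal PyTorch sketch of the three-stage paradigm, intended only as an illustration: the (u, v, h) surface parameterization (UV coordinates plus an offset from the surface), the module sizes, and names such as `SurMoSketch` and `sample_triplane` are assumptions of this sketch rather than the authors' released code. Only the triplane bilinear-sampling pattern and the three-stage structure follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_triplane(planes, coords):
    """Bilinearly sample features for points parameterized as (u, v, h).

    planes: three feature maps, each (B, C, H, W), for the (u,v), (u,h),
            and (v,h) planes of a surface-based triplane.
    coords: (B, N, 3) points with every coordinate normalized to [-1, 1].
    Returns per-point features of shape (B, N, 3*C).
    """
    u, v, h = coords.unbind(-1)                              # each (B, N)
    feats = []
    for plane, (x, y) in zip(planes, [(u, v), (u, h), (v, h)]):
        grid = torch.stack([x, y], dim=-1).unsqueeze(1)      # (B, 1, N, 2)
        f = F.grid_sample(plane, grid, align_corners=True)   # (B, C, 1, N)
        feats.append(f.squeeze(2).transpose(1, 2))           # (B, N, C)
    return torch.cat(feats, dim=-1)

class SurMoSketch(nn.Module):
    """① motion encoder → ② motion decoder + ③ appearance decoder."""

    def __init__(self, pose_dim=72, c=32, res=32):
        super().__init__()
        self.c, self.res = c, res
        # ① Motion encoder: pose + root velocity -> three feature planes.
        self.encoder = nn.Linear(pose_dim + 3, 3 * c * res * res)
        # ② Motion decoder: per-point feature -> surface normal + velocity,
        #    the spatial and temporal derivatives that supervise continuity.
        self.motion_dec = nn.Sequential(
            nn.Linear(3 * c, 128), nn.ReLU(), nn.Linear(128, 6))
        # ③ Appearance decoder: per-point feature -> RGB + density.  (The
        #    paper uses hybrid volumetric-textural rendering, HVTR; a plain
        #    NeRF-style head stands in for it here.)
        self.app_dec = nn.Sequential(
            nn.Linear(3 * c, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, pose, velocity, coords):
        b = pose.shape[0]
        planes = self.encoder(torch.cat([pose, velocity], dim=-1))
        planes = list(planes.view(b, 3, self.c, self.res, self.res).unbind(1))
        feat = sample_triplane(planes, coords)   # (B, N, 3c)
        normal_vel = self.motion_dec(feat)       # physics supervision target
        rgb_sigma = self.app_dec(feat)           # fed to the renderer
        return normal_vel, rgb_sigma
```

In this sketch the motion decoder's 6 outputs stand for a predicted normal and velocity per point; how these are supervised and rendered follows the description above, not a verified re-implementation.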

SurMo is trained end to end on videos with a reconstruction loss and an adversarial loss, as sketched below.
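A schematic of this objective, assuming an L1 reconstruction term and a hinge-style adversarial term with an arbitrary weight (the paper's exact loss weights and discriminator design are not reproduced here):

```python
import torch.nn.functional as F

def generator_loss(pred_img, gt_img, discriminator, lambda_adv=0.01):
    """Per-frame reconstruction plus an adversarial term (schematic)."""
    recon = F.l1_loss(pred_img, gt_img)       # reconstruction loss
    adv = -discriminator(pred_img).mean()     # hinge-style generator loss
    return recon + lambda_adv * adv
```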

Experimental results

This study was evaluated on 3 datasets comprising a total of 9 dynamic human video sequences: ZJU-MoCap [Peng et al. 2021], AIST [Li, Yang et al. 2021], and MPII-RRDC [Habermann et al. 2021].

Time-sequence rendering from novel viewpoints

On the ZJU-MoCap dataset, this study examines dynamic rendering over time from novel viewpoints (time-varying appearances), focusing on 2 sequences shown in the figure below. Each sequence contains similar poses that occur along different motion trajectories, e.g., ①②, ③④, ⑤⑥. Because SurMo models the motion trajectory, it generates dynamic effects that change over time, whereas related methods produce results that depend only on pose, so the cloth wrinkles are almost identical across different trajectories.

[Figure: Time-varying appearance under novel viewpoints on ZJU-MoCap]

Rendering motion-dependent shadows and secondary clothing motion

On the MPII-RRDC dataset, SurMo explores motion-dependent shadows and secondary clothing motion, as shown in the figure below. The sequence was captured on an indoor stage, where the lighting conditions produced motion-dependent shadows on the performer due to self-occlusion.


SurMo recovers these shadows under novel-view rendering, e.g., ①②, ③④, ⑦⑧, whereas the compared method HumanNeRF [Weng et al.] fails to recover motion-dependent shadows. In addition, SurMo reconstructs secondary clothing motion that varies with the motion trajectory, such as the different wrinkles in the jumping motions ⑤⑥, a dynamic effect that HumanNeRF cannot reproduce.

[Figure: Motion-dependent shadows and secondary clothing motion on MPII-RRDC]

Rendering fast-moving human bodies

SurMo can also render human bodies from fast-motion videos, recovering motion-dependent details of cloth wrinkles that the compared methods fail to render.

[Figure: Rendering results on fast-motion videos]

Ablation experiments

(1) Motion modeling on the human body surface

This study compares two motion-modeling approaches: the commonly used motion modeling in volumetric space versus SurMo's motion modeling on the human body surface manifold; concretely, a volumetric triplane is compared against the surface-based triplane, as shown in the figure below.

The volumetric triplane turns out to be a sparse representation in which only about 21-35% of the features are used for rendering, whereas the surface-based triplane reaches a feature utilization of about 85%, giving it an advantage in handling self-occlusion, as shown in (d). In addition, the surface-based triplane enables faster rendering by filtering out points far from the surface during volume rendering, as shown in (c).

[Figure: Volumetric triplane vs. surface-based triplane, panels (a)-(d)]

This study also shows that the surface-based triplane converges faster than the volumetric triplane during training and has clear advantages in cloth-wrinkle detail and self-occlusion handling, as shown in the figure above. A sketch of the near-surface filtering mentioned above follows.
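The near-surface filtering behind this speed-up can be sketched as follows; `dist_to_surface` is a hypothetical helper (for instance, a nearest-neighbor query against the posed body mesh), and the 5 cm threshold is an arbitrary assumption:

```python
import torch

def filter_ray_samples(points, dist_to_surface, max_dist=0.05):
    """Keep only volume-rendering samples near the body surface.

    points: (N, 3) sample positions along camera rays.
    Returns the surviving points and the boolean mask that selected them.
    """
    d = dist_to_surface(points)   # (N,) unsigned distance to the surface
    keep = d < max_dist           # near-surface samples only
    return points[keep], keep
```

Because the appearance decoder and the rendering integral then run on far fewer points, this filtering directly reduces rendering cost.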

(2) Dynamics learning

SurMo's ablation experiments examine the effect of its motion modeling, as shown below. The results show that SurMo decouples the static properties of motion (e.g., a fixed pose at a given frame) from its dynamic properties (e.g., velocity). For example, when the velocity input is changed, the wrinkles of tight clothing stay unchanged (①), while the wrinkles of loose clothing vary strongly with velocity (②), consistent with everyday observation. This probe is sketched after the figure below.

[Figure: Ablation study on dynamics learning]
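This decoupling can be probed with a small sketch that reuses the hypothetical `SurMoSketch` interface from above: the pose input is held fixed while only the velocity input varies, so any change in the rendered output is attributable to the dynamic features.

```python
import torch

model = SurMoSketch()
pose = torch.zeros(1, 72)                  # one fixed pose
coords = torch.rand(1, 1024, 3) * 2 - 1    # query points in [-1, 1]^3

for speed in (0.0, 0.5, 1.0):              # vary only the velocity input
    velocity = torch.tensor([[0.0, 0.0, speed]])
    _, rgb_sigma = model(pose, velocity, coords)
    # Expectation from the ablation above: outputs in loose-clothing
    # regions vary with speed; tight-clothing regions stay nearly constant.
```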


Statement:
This article is reproduced from jiqizhixin.com. In case of infringement, please contact admin@php.cn for removal.