
Supporting the synthesis of one-minute high-definition videos: Huazhong University of Science and Technology et al. propose UniAnimate, a new framework for human dancing video generation
AIxiv is a column where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

Human dancing video generation is a compelling and challenging controllable video synthesis task that aims to generate high-quality, realistic, continuous videos from an input reference image and a target pose sequence. With the rapid development of video generation technology, especially the iterative evolution of generative models, the dancing video generation task has made unprecedented progress and shows broad application potential.

Existing methods can be roughly divided into two groups. The first group is typically based on Generative Adversarial Networks (GANs), which exploit an intermediate pose-guided representation to warp the reference appearance and then generate plausible video frames from the warped targets. However, GAN-based methods often suffer from unstable training and poor generalization, resulting in obvious artifacts and inter-frame jitter.
The second group uses a diffusion model to synthesize realistic videos. These methods enjoy stable training and strong transfer capabilities, and perform better than GAN-based methods. Typical methods include DisCo, MagicAnimate, Animate Anyone, Champ, etc.
Although methods based on diffusion models have made significant progress, existing methods still have two limitations:
First, they require an additional reference network (ReferenceNet) to encode the reference image features and visually align them with the backbone branch of the 3D-UNet, which increases training difficulty and the number of model parameters. Second, they usually use a temporal Transformer to model the temporal dependencies between video frames, but the Transformer's computational complexity grows quadratically with the length of the generated sequence, which limits the temporal length of the generated videos: typical methods can only generate 24 frames, restricting practical deployment. Although a sliding-window strategy with temporal overlap can produce longer videos, the authors found that it easily leads to unsmooth transitions and appearance inconsistency at the overlapping junctions between segments.
To solve these problems, a research team from Huazhong University of Science and Technology, Alibaba, and the University of Science and Technology of China proposed the UniAnimate framework for efficient, long-duration human video generation.


  • Paper address: https://arxiv.org/abs/2406.01188
  • Project homepage: https://unianimate.github.io/

Method Introduction
The UniAnimate framework first maps the reference image, the pose guidance, and the noised video into a feature space, and then uses a unified video diffusion model (Unified Video Diffusion Model) to handle reference-image appearance alignment and video denoising jointly within a single backbone, achieving efficient feature alignment and coherent video generation.
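
To make the unified design concrete, here is a minimal sketch of the general idea under a latent-diffusion assumption: the reference latent and its pose are stacked along the temporal axis together with the noised video latents and the target pose sequence, so a single 3D backbone sees both and no separate ReferenceNet is needed. The tensor shapes and the `VideoBackbone3D` module are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch: one backbone jointly handles the reference image and the video.
import torch
import torch.nn as nn

class VideoBackbone3D(nn.Module):
    """Stand-in for a 3D-UNet-style denoiser operating on (B, C, T, H, W)."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Conv3d(channels, 4, kernel_size=3, padding=1)

    def forward(self, x, t):
        # Predicts noise for every temporal slot, including the reference slot.
        return self.net(x)

def unified_denoise_step(backbone, ref_latent, ref_pose, noisy_video, pose_seq, t):
    # ref_latent: (B, 4, H, W), noisy_video: (B, 4, T, H, W)
    # ref_pose:   (B, 3, H, W), pose_seq:   (B, 3, T, H, W)
    ref = torch.cat([ref_latent, ref_pose], dim=1).unsqueeze(2)  # (B, 7, 1, H, W)
    vid = torch.cat([noisy_video, pose_seq], dim=1)              # (B, 7, T, H, W)
    x = torch.cat([ref, vid], dim=2)                             # (B, 7, T+1, H, W)
    eps = backbone(x, t)                                         # (B, 4, T+1, H, W)
    return eps[:, :, 1:]                                         # keep only the video part

backbone = VideoBackbone3D(channels=7)
eps = unified_denoise_step(
    backbone,
    ref_latent=torch.randn(1, 4, 32, 32), ref_pose=torch.randn(1, 3, 32, 32),
    noisy_video=torch.randn(1, 4, 8, 32, 32), pose_seq=torch.randn(1, 3, 8, 32, 32),
    t=torch.tensor([500]),
)
print(eps.shape)  # torch.Size([1, 4, 8, 32, 32])
```
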
Secondly, the research team proposed a unified noise input that supports both random noise input and first-frame-conditioned noise input. With random noise, a video is generated from the reference image and the pose sequence; with first-frame conditioning (First Frame Conditioning), the first frame of a video is used as a conditional input so that the model continues generating the subsequent frames. At inference time, the last frame of the previous video segment can therefore be treated as the first frame of the next segment, and so on, enabling long video generation within one framework.
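
Below is a hedged sketch of how such first-frame-conditioned continuation could chain segments into a long video. `generate_segment` is a hypothetical wrapper around the diffusion sampler (when `first_frame` is None it samples from random noise; otherwise the given frame is fixed as the segment's first frame), and the segment length and de-duplication details are assumptions made for illustration.

```python
# Hypothetical long-video loop: each segment is seeded by the last frame of the previous one.
def generate_long_video(generate_segment, ref_image, pose_seq, seg_len=32):
    frames, first_frame = [], None
    for start in range(0, len(pose_seq), seg_len):
        segment = generate_segment(
            ref_image=ref_image,
            poses=pose_seq[start:start + seg_len],
            first_frame=first_frame,          # None only for the very first segment
        )
        # Drop the conditioning frame on later segments so it is not duplicated.
        frames.extend(segment if first_frame is None else segment[1:])
        first_frame = segment[-1]             # last frame seeds the next segment
    return frames
```

In this setup a user could also generate several candidate segments, pick the one whose last frame looks best, and use that frame to seed the next segment, matching the interactive selection described below.
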
Finally, to process long sequences even more efficiently, the research team explored a temporal modeling architecture based on the state space model (Mamba) as an alternative to the computationally intensive temporal Transformer. Experiments show that the temporal-Mamba-based architecture achieves results comparable to the temporal Transformer while requiring less GPU memory.
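
To illustrate why a state-space temporal module scales better than temporal self-attention, here is a toy comparison: attention forms a T×T score matrix for every spatial token, whereas a linear recurrence scans over the T frames with a fixed-size hidden state. The diagonal recurrence below is a simplified stand-in written for clarity, not the actual Mamba block used in the paper.

```python
# Toy comparison of temporal attention (quadratic in T) vs. a state-space scan (linear in T).
import torch

def temporal_attention(x):                     # x: (B*H*W, T, C)
    scores = torch.softmax(x @ x.transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)  # (.., T, T)
    return scores @ x                          # O(T^2) time and memory in T

def temporal_ssm(x, a, b, c):                  # a, b, c: (C,) toy diagonal parameters
    h = torch.zeros_like(x[:, 0])              # fixed-size hidden state, (B*H*W, C)
    ys = []
    for t in range(x.shape[1]):                # O(T) scan over frames
        h = a * h + b * x[:, t]
        ys.append(c * h)
    return torch.stack(ys, dim=1)

x = torch.randn(4, 16, 32)                     # 4 spatial tokens, 16 frames, 32 channels
print(temporal_attention(x).shape, temporal_ssm(x, *(torch.rand(32) for _ in range(3))).shape)
```
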
Through the UniAnimate framework, users can generate high-quality, temporally coherent human dancing videos. Notably, by applying the First Frame Conditioning strategy multiple times, a one-minute high-definition video can be generated. Compared with traditional methods, UniAnimate has the following advantages:

  • No additional reference network needed: UniAnimate uses a unified video diffusion model, eliminating the dependence on an extra reference network and reducing training difficulty and the number of model parameters.
  • Reference pose as an extra condition: the pose map of the reference image is introduced as an additional condition, encouraging the network to learn the correspondence between the reference pose and the target poses and achieving good appearance alignment.
  • Long-sequence video generation within a unified framework: with the unified noise input, UniAnimate can generate long videos within a single framework, no longer subject to the duration limits of traditional methods.
  • High consistency: by iteratively using the first frame as a condition to generate subsequent frames, UniAnimate ensures smooth transitions and keeps the generated video consistent and coherent in appearance. This strategy also lets users generate multiple video clips and select the last frame of a good clip as the first frame of the next one, making it easy to interact with the model and adjust the results as needed. In contrast, when long videos are generated with the earlier sliding-window strategy of temporal overlap, such clip selection is impossible because the clips are coupled to each other at every step of the diffusion process.

These characteristics enable the UniAnimate framework to excel at synthesizing high-quality, long-duration human dancing videos, opening up new possibilities for a wider range of applications.

Generation result examples

1. Generate dancing videos based on synthesized images.

2. Generate dancing videos based on real pictures.


3. Generate dancing videos based on clay-style images.


4. Musk dances.


5. Yann LeCun dances.


6. Generate dancing videos based on other cross-domain images.



7. Generate a one-minute dancing video.
To obtain the original MP4 videos and more HD examples, please refer to the paper's project homepage: https://unianimate.github.io/.

Experimental comparative analysis

1. Quantitative comparison with existing methods on the TikTok dataset.


On the TikTok dataset, UniAnimate achieves the best results on image metrics such as L1, PSNR, SSIM, and LPIPS, as well as on the video metric FVD, indicating that it can generate high-fidelity results.

2. Qualitative comparison with existing methods.


The qualitative comparison also shows that, compared with MagicAnimate and Animate Anyone, UniAnimate generates more continuous results without obvious artifacts, demonstrating its effectiveness.

3. Ablation study.


The numerical results show that the reference pose and the unified video diffusion model used in UniAnimate play a key role in improving performance.

4. Comparison of long-video generation strategies.


The comparison shows that the commonly used sliding-window strategy with temporal overlap easily produces discontinuous transitions when generating long videos. The research team attributes this to the fact that the denoising difficulty of the overlapping frames differs across windows, so different windows produce different results there; directly averaging them causes obvious deformation or distortion, and the inconsistency propagates as error. In contrast, the first-frame-conditioned continuation method used in this paper produces smooth transitions.
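
For intuition, here is a hedged sketch of the temporal-overlap baseline being compared against: two consecutive windows are generated separately and their overlapping frames are simply averaged, which blurs or distorts those frames whenever the windows disagree. The function name and the uniform blending weights are illustrative assumptions, not the exact baseline implementation.

```python
# Hypothetical sliding-window merge: overlapping frames from two windows are naively averaged.
import torch

def merge_with_overlap(window_a, window_b, overlap):
    # window_a, window_b: (T, C, H, W) frames/latents from two consecutive windows
    blended = 0.5 * (window_a[-overlap:] + window_b[:overlap])  # disagreement -> ghosting
    return torch.cat([window_a[:-overlap], blended, window_b[overlap:]], dim=0)

a = torch.randn(16, 4, 32, 32)
b = torch.randn(16, 4, 32, 32)
print(merge_with_overlap(a, b, overlap=4).shape)  # torch.Size([28, 4, 32, 32])
```
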

For more experimental comparison results and analysis, please refer to the original paper.

All in all, UniAnimate's sample results and quantitative comparisons are strong. We look forward to UniAnimate being applied in fields such as film and television production, virtual reality, and gaming, bringing users a more realistic and exciting human image animation experience.
