Let Sora's Tokyo girl sing, give Gao Qiqiang Luo Xiang's voice: Alibaba's character lip-sync videos are generated flawlessly

With Alibaba's EMO, making AI-generated or real portrait images move, speak, or sing has become much easier.

Recently, text-to-video models, represented by OpenAI's Sora, have surged in popularity again.

Beyond text-to-video generation, human-centered video synthesis has long attracted attention, for example "talking head" video generation, where the goal is to produce facial expressions and movements from a user-provided audio clip.

At the technical level, generating expressions requires accurately capturing the speaker's subtle and diverse facial movements, which is a huge challenge for such video synthesis tasks.

Traditional methods usually impose constraints to simplify the video generation task. For example, some methods use 3D models to restrict facial keypoints, while others extract head motion sequences from raw videos to guide the overall motion. While these constraints reduce the complexity of video generation, they also limit the richness and naturalness of the final facial expressions.

In a recent paper from Alibaba's Institute for Intelligent Computing, researchers focused on exploring the subtle connection between audio cues and facial movements to improve the realism, naturalness, and expressiveness of talking head videos.

The researchers found that traditional methods often fail to adequately capture the full range of facial expressions and the unique styles of different speakers. They therefore proposed the EMO (Emote Portrait Alive) framework, which renders facial expressions through direct audio-to-video synthesis, without intermediate 3D models or facial landmarks.


  • Paper title: EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions

  • Paper address: https://arxiv.org/pdf/2402.17485.pdf

  • Project homepage: https://humanaigc.github.io/emote-portrait-alive/

In terms of results, Alibaba's method ensures seamless frame transitions throughout the video and maintains identity consistency, producing character avatar videos that are significantly more expressive and realistic than the current SOTA methods.

For example, EMO can make the Tokyo girl character generated by Sora sing "Don't Start Now" by the British-Albanian singer Dua Lipa.

EMO supports songs in different languages, including English and Chinese. It can intuitively pick up the tonal changes in the audio and generate dynamic, expressive AI character avatars, for example making a young woman generated by the AI painting model ChilloutMix sing Tao Zhe's "Melody".

EMO can also keep an avatar in step with fast-paced rap songs, such as having DiCaprio perform a section of "Godzilla" by American rapper Eminem.

Of course, EMO does not only make characters sing: it also supports spoken audio in various languages, turning portraits of different styles, paintings, 3D models, and AI-generated content into lifelike animated videos, such as Audrey Hepburn speaking.

Finally, EMO can also pair different characters, such as having Gao Qiqiang from "The Knockout" (狂飙) speak with the voice of Teacher Luo Xiang.

Method Overview

Given a single reference image of a character's portrait, the method can generate a video synchronized with an input vocal audio clip, preserving natural head movements and vivid expressions that vary in step with the tonal changes of the provided audio. By generating a seamless series of cascaded video clips, the model can produce long talking-portrait videos with consistent identity and coherent motion, which is critical for real-world applications.

Network Pipeline

An overview of the method is shown in the figure below. The backbone network receives multiple frames of noisy latents as input and attempts to denoise them into consecutive video frames at each time step. The backbone has a UNet configuration similar to the original SD 1.5, specifically:

  1. Similar to previous work, to ensure continuity between generated frames, the backbone network embeds temporal modules.

  2. To maintain the identity consistency of the portrait across generated frames, the researchers deployed a UNet structure parallel to the backbone network, called ReferenceNet, which takes the reference image as input to extract reference features.

  3. To drive the character's movements while speaking, the researchers used audio layers to encode the voice features.

  4. To make the speaking character's motion controllable and stable, the researchers used a face locator and speed layers to provide weak conditions (a structural sketch of these four conditioning paths follows below).

[Figure: Overview of the EMO pipeline]
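To make the pipeline concrete, the following is a minimal PyTorch skeleton of the four conditioning paths listed above. It is a sketch only: the module internals, tensor shapes, and the additive way conditions are injected are simplifying assumptions, and only the module names mirror the paper.

```python
import torch
import torch.nn as nn

class EMOPipelineSketch(nn.Module):
    """Toy skeleton of the four conditioning paths; shapes and internals
    are illustrative simplifications, not the paper's actual architecture."""
    def __init__(self, latent_dim=320, audio_dim=768):
        super().__init__()
        # (1) temporal module: attention along the frame axis for continuity
        self.temporal = nn.MultiheadAttention(latent_dim, num_heads=8, batch_first=True)
        # (2) ReferenceNet stand-in: identity features from the reference image
        self.reference_net = nn.Linear(latent_dim, latent_dim)
        # (3) audio layers: project speech features into the UNet width
        self.audio_proj = nn.Linear(audio_dim, latent_dim)
        # (4) weak conditions: face-region mask and head-motion speed
        self.face_locator = nn.Conv2d(1, latent_dim, kernel_size=3, padding=1)
        self.speed_embed = nn.Linear(1, latent_dim)

    def forward(self, noisy_latents, ref_latent, audio_feats, face_mask, speed):
        # noisy_latents: (B, F, C, H, W) multi-frame noisy latents to denoise
        x = noisy_latents.mean(dim=(3, 4))            # (B, F, C) pooled stand-in for UNet features
        ref = self.reference_net(ref_latent)          # (B, C) identity features
        aud = self.audio_proj(audio_feats)            # (B, F, C) per-frame audio features
        loc = self.face_locator(face_mask)            # (B, C, H, W) face-region condition
        spd = self.speed_embed(speed)                 # (B, C) motion-rate condition
        x = x + aud + (ref + spd).unsqueeze(1)        # additive injection (simplification)
        x = x + loc.mean(dim=(2, 3)).unsqueeze(1)     # pooled spatial condition
        x, _ = self.temporal(x, x, x)                 # frame-axis attention for continuity
        return x                                      # stand-in for predicted noise

model = EMOPipelineSketch()
out = model(torch.randn(1, 8, 320, 16, 16),  # 8 noisy frames
            torch.randn(1, 320),             # reference-image latent
            torch.randn(1, 8, 768),          # per-frame audio features
            torch.randn(1, 1, 16, 16),       # face mask
            torch.randn(1, 1))               # speed value
```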

For the backbone network, the researchers did not use prompt embeddings, so they replaced the cross-attention layers in the SD 1.5 UNet structure with reference-attention layers. These modified layers take the reference features obtained from ReferenceNet as input instead of text embeddings.
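A minimal sketch of what such a reference-attention layer could look like, assuming standard multi-head attention; the residual wiring and shapes are illustrative assumptions, the key point being that keys and values come from ReferenceNet features where SD 1.5 would pass CLIP text embeddings.

```python
import torch
import torch.nn as nn

class ReferenceAttention(nn.Module):
    """Cross-attention whose keys/values are ReferenceNet features
    rather than text embeddings (no prompt is used)."""
    def __init__(self, dim=320, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, hidden_states, reference_features):
        # hidden_states:      (B, N, C) spatial tokens from the backbone UNet block
        # reference_features: (B, M, C) tokens from the parallel ReferenceNet
        out, _ = self.attn(query=hidden_states,
                           key=reference_features,
                           value=reference_features)
        return hidden_states + out  # residual, as in standard SD attention blocks

# Usage: backbone tokens attend to reference-image tokens.
x = torch.randn(2, 64, 320)
ref = torch.randn(2, 64, 320)
y = ReferenceAttention()(x, ref)   # (2, 64, 320)
```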

Training Strategy

The training process is divided into three stages:

The first stage is image pre-training, in which the backbone network, ReferenceNet, and the face locator are brought into training: the backbone takes a single frame as input, while ReferenceNet processes a different, randomly selected frame from the same video clip. Both the backbone and ReferenceNet initialize their weights from the original SD.

In the second stage, the researchers introduced video training, adding the temporal module and the audio layers, and sampled n+f consecutive frames from the video clip, of which the first n frames are motion frames. The temporal module initializes its weights from AnimateDiff.

The last stage integrates the speed layer; only the temporal module and the speed layer are trained at this stage. The audio layers are deliberately left out of training here: the frequency of the speaker's expressions, mouth movements, and head movements is mainly driven by the audio, so these signals are correlated, and the model could learn to drive the character's motion from the speed signal rather than the audio. Experimental results show that training the speed layer and the audio layers simultaneously weakens the audio's ability to drive character motion.
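The three-phase schedule can be summarized as a freeze/unfreeze table. The sketch below is a hypothetical reading of the description above (module names match the earlier skeleton; the exact trainable sets per stage are inferred, not confirmed by the paper).

```python
# Hypothetical staging table for the three training phases described above.
STAGES = {
    "stage1_image": {
        "trainable": ["backbone", "reference_net", "face_locator"],
        "init": "backbone/ReferenceNet from original SD weights",
    },
    "stage2_video": {
        "trainable": ["temporal", "audio_proj"],  # the newly added modules
        "init": "temporal module from AnimateDiff weights",
    },
    "stage3_speed": {
        # Only temporal + speed layers update; audio layers are deliberately
        # excluded so that motion remains audio-driven (see text above).
        "trainable": ["temporal", "speed_embed"],
        "init": None,
    },
}

def apply_stage(model, stage_name):
    """Freeze everything, then enable gradients for this stage's modules."""
    trainable = set(STAGES[stage_name]["trainable"])
    for name, module in model.named_children():
        for p in module.parameters():
            p.requires_grad_(name in trainable)
```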

Experimental Results

The methods compared in the experiments include Wav2Lip, SadTalker, and DreamTalk.

Figure 3 compares this method with previous approaches. When provided with a single reference image as input, Wav2Lip typically synthesizes a blurry mouth region and produces videos characterized by a static head pose and minimal eye movement. DreamTalk can distort the original face and also limits the range of facial expressions and head movements. Compared with SadTalker and DreamTalk, the method proposed in this study generates a larger range of head movements and more vivid facial expressions.

[Figure 3: Qualitative comparison with Wav2Lip, SadTalker, and DreamTalk]

The study further explores avatar video generation across various portrait styles, such as realistic, anime, and 3D. The characters were animated with the same vocal audio input, and the results show roughly consistent lip sync across the different styles.

[Figure: Lip-sync results across realistic, anime, and 3D portrait styles]

Figure 5 shows that the method can generate richer facial expressions and movements when processing audio with pronounced tonal characteristics; for example, in the third row of Figure 5, a high pitch triggers stronger, more vivid expressions in the character. Additionally, motion frames allow the generated video to be extended, i.e., a longer video can be produced to match the length of the input audio (see the sketch below). As shown in Figures 5 and 6, the method preserves the character's identity in extended sequences even during large movements.

[Figure 5: Expressions generated from audio with pronounced tonal variation]

[Figure 6: Identity preservation in extended video sequences]
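The motion-frame mechanism for long videos can be sketched as a simple chaining loop, assuming (as the training description suggests) that the last n frames of each generated clip condition the next sampling pass. `generate_clip` and its signature are hypothetical stand-ins, not the paper's API.

```python
import torch

def generate_long_video(generate_clip, audio_chunks, ref_image, n_motion=4):
    """Chain fixed-length clips into a video matching the audio length.
    `generate_clip` is a hypothetical stand-in for one diffusion sampling
    pass; the last n_motion frames of each clip seed the next one."""
    motion_frames = None               # the first clip has no preceding motion
    clips = []
    for audio in audio_chunks:         # one chunk of audio features per clip
        clip = generate_clip(ref_image, audio, motion_frames)  # (F, C, H, W)
        clips.append(clip)
        motion_frames = clip[-n_motion:]   # carry motion context forward
    return torch.cat(clips, dim=0)         # (total_frames, C, H, W)
```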

The results in Table 1 show that this method has significant advantages in video quality assessment:

[Table 1: Quantitative comparison of video quality]


Statement: This article is reproduced from 机器之心 (Machine Heart).