
Let Sora's Tokyo woman sing and give Gao Qiqiang Luo Xiang's voice: Alibaba's character lip-sync videos are generated flawlessly

王林
2024-03-01

With Alibaba's EMO, making AI-generated or real portrait images move, speak, or sing has become much easier.

Recently, text-to-video models, represented by OpenAI's Sora, have surged in popularity again.

Beyond text-to-video generation, human-centric video synthesis has long attracted attention, for example "talking head" video generation, where the goal is to produce facial expressions driven by a user-provided audio clip.

Technically, generating such expressions requires accurately capturing the speaker's subtle and diverse facial movements, which remains a major challenge for this kind of video synthesis.

Traditional methods usually impose constraints to simplify the task: some use 3D models to constrain facial keypoints, while others extract head-motion sequences from raw videos to guide overall motion. These constraints reduce the complexity of video generation, but they also limit the richness and naturalness of the resulting facial expressions.

In a recent paper from Alibaba's Institute for Intelligent Computing, researchers focus on the subtle connection between audio cues and facial movements in order to improve the realism, naturalness, and expressiveness of talking head videos.

The researchers found that traditional methods often fail to adequately capture the full range of facial expressions and the unique styles of different speakers. They therefore propose EMO (Emote Portrait Alive), a framework that renders facial expressions directly through an audio-to-video synthesis approach, without intermediate 3D models or facial landmarks.


  • Paper title: EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions

  • Paper address: https://arxiv.org/pdf/2402.17485.pdf

  • Project homepage: https://humanaigc.github.io/emote-portrait-alive/

In terms of results, Alibaba's method ensures seamless frame transitions throughout the video and maintains identity consistency, producing expressive and highly realistic character avatar videos that significantly outperform current SOTA methods in both expressiveness and realism.

For example, EMO can make the Tokyo woman generated by Sora sing "Don't Start Now" by the British-Albanian singer Dua Lipa.

EMO supports songs in different languages, including English and Chinese. It picks up on tonal changes in the audio and generates dynamic, expressive AI character avatars; for example, it can have a young woman generated by the AI painting model ChilloutMix sing Tao Zhe's "Melody".

EMO can also keep an avatar in step with fast-paced rap, for instance having Leonardo DiCaprio perform a section of "Godzilla" by the American rapper Eminem.

Of course, EMO is not limited to singing: it also supports spoken audio in various languages, turning portraits of different styles, paintings, 3D models, and AI-generated content into lifelike animated videos, such as a talking Audrey Hepburn.

Finally, EMO can also pair different characters, for example having Gao Qiqiang from "The Knockout" speak in the voice of law professor Luo Xiang.

Method Overview

Given a single reference portrait image of a character, the method can generate a video synchronized with an input speech audio clip, preserving the character's natural head movements and vivid expressions while coordinating them with the pitch variations of the provided audio. By generating a seamless series of cascaded video clips, the model can produce long talking-portrait videos with consistent identity and coherent motion, which is critical for real-world applications.

Network Pipeline

An overview of the method is shown in the figure below. The backbone network receives multi-frame noisy latent inputs and denoises them into consecutive video frames over the diffusion time steps. The backbone has a UNet structure similar to the original SD 1.5, with the following additions:

  1. Similar to previous work, a temporal module is embedded in the backbone network to ensure continuity between generated frames.

  2. To maintain the identity consistency of the portrait across generated frames, the researchers deploy a UNet parallel to the backbone, called ReferenceNet, which takes the reference image as input and extracts reference features.

  3. To drive the character's movements while speaking, audio layers encode the acoustic features.

  4. To make the speaking character's movements controllable and stable, a face locator and speed layers provide weak conditioning signals (a rough sketch of how these modules compose is given below).
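To make the composition of these modules concrete, here is a minimal, hypothetical PyTorch sketch of a single backbone block. The class and argument names (BackboneBlock, ref_feats, face_mask, and so on) are illustrative assumptions, not EMO's actual code; the real model interleaves many such blocks inside an SD 1.5-style UNet.

```python
import torch
import torch.nn as nn


class BackboneBlock(nn.Module):
    """One denoising block: spatial self-attention, reference attention
    (identity), masked audio cross-attention (facial motion), a speed
    embedding (weak condition), and temporal attention (frame continuity)."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ref_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.speed_proj = nn.Linear(1, dim)  # scalar head-motion speed -> embedding

    def forward(self, x, ref_feats, audio_feats, speed, face_mask):
        # x:           (B, F, T, D) noisy latent tokens for F frames
        # ref_feats:   (B, R, D)    features from the reference image (ReferenceNet)
        # audio_feats: (B, F, A, D) per-frame encoded audio features
        # speed:       (B, 1)       target head-motion speed (weak condition)
        # face_mask:   (B, F, T, 1) face-locator mask over latent tokens
        B, F, T, D = x.shape
        h = x.reshape(B * F, T, D)

        h = h + self.self_attn(h, h, h)[0]

        ref = ref_feats.unsqueeze(1).expand(B, F, -1, -1).reshape(B * F, -1, D)
        h = h + self.ref_attn(h, ref, ref)[0]                # identity consistency

        aud = audio_feats.reshape(B * F, -1, D)
        mask = face_mask.reshape(B * F, T, 1)
        h = h + mask * self.audio_attn(h, aud, aud)[0]       # audio-driven facial motion

        spd = self.speed_proj(speed).unsqueeze(1).repeat_interleave(F, dim=0)
        h = h + spd                                          # stabilise head-motion speed

        # Temporal attention: attend across the frame axis at each spatial token.
        t = h.reshape(B, F, T, D).permute(0, 2, 1, 3).reshape(B * T, F, D)
        t = t + self.temporal_attn(t, t, t)[0]
        return t.reshape(B, T, F, D).permute(0, 2, 1, 3)
```

The key design point this sketch illustrates is that the reference, audio, and weak (mask and speed) conditions all enter as residual updates on the noisy latents, so they can be added or dropped stage by stage during training.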

[Figure: overview of the EMO framework]

For the backbone network, the researchers do not use prompt (text) embeddings; instead, they convert the cross-attention layers in the SD 1.5 UNet into reference-attention layers. These modified layers take the reference features produced by ReferenceNet as input in place of text embeddings.
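As a separate illustration of this conditioning swap (conceptual only, not the authors' implementation), the snippet below feeds an SD 1.5 UNet's cross-attention slot, which normally receives CLIP text embeddings, a tensor of reference features with the same channel width. The checkpoint path and the random ref_feats tensor are placeholders.

```python
import torch
from diffusers import UNet2DConditionModel

# Checkpoint path is illustrative; substitute any locally available SD 1.5 UNet.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Hypothetical reference features: (batch, num_ref_tokens, 768) -- the same
# width as SD 1.5's text-embedding space, so the cross-attention shapes match.
ref_feats = torch.randn(1, 77, 768)

noisy_latents = torch.randn(1, 4, 64, 64)   # (batch, latent channels, H/8, W/8)
timestep = torch.tensor([500])

with torch.no_grad():
    # encoder_hidden_states is the slot that normally carries text embeddings;
    # passing reference features here captures the reference-attention idea in spirit.
    noise_pred = unet(noisy_latents, timestep, encoder_hidden_states=ref_feats).sample

print(noise_pred.shape)  # torch.Size([1, 4, 64, 64])
```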

Training strategy

The training process is divided into three stages:

The first stage is image pre-training, in which the backbone network, ReferenceNet, and the face locator are brought into training: the backbone takes a single frame as input, while ReferenceNet processes a different, randomly selected frame from the same video clip. Both the backbone and ReferenceNet initialize their weights from the original SD.

In the second stage, the researchers introduce video training, adding the temporal module and the audio layers, and sample n_f consecutive frames from a video clip, of which the first n frames are motion frames. The temporal module initializes its weights from AnimateDiff.

The last stage integrates the speed layers; only the temporal module and the speed layers are trained at this stage, and the audio layers are deliberately left out. The reason is that the speaker's expressions, mouth movements, and the frequency of head movements are driven mainly by the audio, so these signals are correlated, and the model could learn to drive the character's movements from the speed signal rather than from the audio. Experimental results show that training the speed layers and the audio layers simultaneously weakens the ability of audio to drive character motion.
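This staged schedule can be summarized as a freeze/unfreeze plan. The sketch below is a hedged illustration under the assumption that each module group is a separate nn.Module; the names are hypothetical, and the article does not state whether the stage-one modules remain trainable in stage two, so only the newly added modules are enabled there.

```python
import torch.nn as nn


def set_trainable(modules, flag: bool):
    """Freeze or unfreeze every parameter in the given modules."""
    for m in modules:
        for p in m.parameters():
            p.requires_grad = flag


def configure_stage(stage: int, backbone: nn.Module, reference_net: nn.Module,
                    face_locator: nn.Module, temporal: nn.Module,
                    audio: nn.Module, speed: nn.Module):
    everything = [backbone, reference_net, face_locator, temporal, audio, speed]
    set_trainable(everything, False)  # freeze all, then enable per stage

    if stage == 1:
        # Image pre-training: single frames; backbone and ReferenceNet start
        # from SD 1.5 weights, the face locator is trained alongside them.
        set_trainable([backbone, reference_net, face_locator], True)
    elif stage == 2:
        # Video training: the temporal module (initialised from AnimateDiff)
        # and the audio layers are added; whether stage-1 modules stay
        # trainable is not specified, so only the new ones are enabled here.
        set_trainable([temporal, audio], True)
    elif stage == 3:
        # Speed-layer integration: only the temporal module and speed layers
        # are trained, so audio remains the sole driver of expression and motion.
        set_trainable([temporal, speed], True)
```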

Experimental results

The methods compared in the experiments include Wav2Lip, SadTalker, and DreamTalk.

Figure 3 compares this method with previous methods. When provided with a single reference image as input, Wav2Lip typically synthesizes a blurry mouth region and produces videos with static head poses and minimal eye movement. DreamTalk can distort the original face and also limits the range of facial expressions and head movements. Compared with SadTalker and DreamTalk, the method proposed in this study generates a wider range of head movements and more vivid facial expressions.


The study further explores avatar video generation across portrait styles such as realistic, anime, and 3D. Animating these characters with the same vocal audio input yields roughly consistent lip sync across the different styles.


Figure 5 shows that the method generates richer facial expressions and movements when processing audio with pronounced tonal characteristics; for example, in the third row, a high pitch triggers stronger, more vivid expressions from the character. In addition, motion frames make it possible to extend the generated video, i.e., to produce a video whose duration matches the length of the input audio. As shown in Figures 5 and 6, the method preserves the character's identity over these extended sequences, even during large motions.
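The motion-frame mechanism for long videos can be sketched as a simple chunked generation loop. In the sketch below, generate_clip is a hypothetical stand-in for the full EMO pipeline, and splitting the audio into per-clip chunks is an assumption about how the input would be prepared.

```python
from typing import Callable, List


def generate_long_video(
    reference_image,
    audio_chunks: List,              # audio split into per-clip segments
    generate_clip: Callable,         # (ref_image, audio, motion_frames) -> list of frames
    num_motion_frames: int = 4,
) -> List:
    frames: List = []
    motion_frames: List = []         # empty for the very first clip
    for audio in audio_chunks:
        clip = generate_clip(reference_image, audio, motion_frames)
        frames.extend(clip)
        # The last n generated frames seed the next clip's motion context,
        # which is what keeps identity and motion coherent across clips.
        motion_frames = clip[-num_motion_frames:]
    return frames
```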



The results in Table 1 show that the method has clear advantages in video quality assessment.



Statement: This article is reproduced from jiqizhixin.com.