
Tencent Hunyuan joins forces with Hong Kong University of Science and Technology and Tsinghua University to launch "Follow Your Emoji", turning photos into emoticons with one click

王林 (Original) · 2024-06-14 20:35:50

Image-to-video generation has a new way to play.

Tencent Hunyuan, the Hong Kong University of Science and Technology, and Tsinghua University have jointly launched the portrait animation generation framework "Follow Your Emoji", which can generate facial animation in any style from facial skeleton information. Thanks to algorithmic innovation and accumulated data, "Follow Your Emoji" supports fine-grained control of the face, down to details such as the eyebrows, eyes, and eye rolls. Even animal faces can easily be "puppeteered" into emoticons.


Follow Your Emoji not only supports driving multiple portraits with a single expression, but also driving a single portrait with multiple expressions.


In recent years, diffusion models have demonstrated stronger generative capabilities than the older generative adversarial networks (GANs). Some methods leverage powerful base diffusion models for high-quality video and image generation, but these base models cannot directly preserve the identity features of the reference portrait during animation, so the generated videos show distortion and unrealistic artifacts. This is one of the main challenges of the portrait animation task.


Figure: Overall pipeline of the paper; the upper part shows the training process and the lower part shows the inference (testing) process

In this study, the researchers propose Follow-Your-Emoji, a novel diffusion-model-based portrait animation framework. The algorithm contains two major innovations.

First, the researchers design an expression control signal based on 3D facial keypoints that effectively guides animation generation. Because 3D keypoints have inherent canonical properties, they can align the target motion with the reference portrait and avoid the facial deformation that would otherwise distort the generated video. This technique has a wide range of applications and can also be used to produce facial morphing videos.
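As a rough illustration of the idea (not the paper's actual implementation), the sketch below shows how canonical 3D facial keypoints might be retargeted onto a reference portrait and rasterized into a 2D control map; the orthographic projection and all function names are illustrative assumptions.

```python
# A minimal sketch of building a keypoint-based expression control signal.
import numpy as np

def project_keypoints(kpts_3d: np.ndarray, image_size: int = 512) -> np.ndarray:
    """Orthographically project canonical 3D keypoints (N, 3) to pixel coordinates (N, 2)."""
    xy = kpts_3d[:, :2]                                      # drop depth for a simple projection
    xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0) + 1e-8)   # normalize to [0, 1]
    return xy * (image_size - 1)

def rasterize_control_map(kpts_2d: np.ndarray, image_size: int = 512) -> np.ndarray:
    """Draw each keypoint as a small square on a single-channel control map."""
    ctrl = np.zeros((image_size, image_size), dtype=np.float32)
    for x, y in kpts_2d.astype(int):
        ctrl[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = 1.0
    return ctrl

def make_control_signal(ref_kpts_3d, drive_kpts_3d, drive_neutral_3d, image_size=512):
    """Transfer the driving expression offset onto the reference identity, then rasterize."""
    motion = drive_kpts_3d - drive_neutral_3d   # expression offset of the driving face
    target = ref_kpts_3d + motion               # apply the offset to the reference keypoints
    return rasterize_control_map(project_keypoints(target, image_size), image_size)
```

Working in a shared canonical keypoint space is what allows the driving motion to be applied to a reference face with a different shape without warping its identity.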

Second, the study proposes a facial fine-grained loss function that helps the model focus on capturing subtle expression changes and the detailed appearance of the reference portrait. Specifically, the authors first obtain a facial mask and an expression mask from the expression-aware signal, and then compute the spatial distance between the ground truth and the prediction within these masked regions, so that the generated emoticon faithfully restores the original portrait.
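A minimal sketch of such a masked loss, assuming PyTorch; the mask construction and the weighting between the face and expression regions are illustrative assumptions, not the paper's exact formulation.

```python
# Masked fine-grained loss: the error is measured only inside face / expression regions.
import torch
import torch.nn.functional as F

def fine_grained_loss(pred: torch.Tensor,
                      target: torch.Tensor,
                      face_mask: torch.Tensor,
                      expr_mask: torch.Tensor,
                      expr_weight: float = 2.0) -> torch.Tensor:
    """pred/target: (B, C, H, W) images; masks: (B, 1, H, W) with values in [0, 1]."""
    face_err = F.mse_loss(pred * face_mask, target * face_mask, reduction="sum")
    expr_err = F.mse_loss(pred * expr_mask, target * expr_mask, reduction="sum")
    # normalize by the number of supervised pixels so mask size does not skew the loss
    face_err = face_err / (face_mask.sum() * pred.shape[1] + 1e-8)
    expr_err = expr_err / (expr_mask.sum() * pred.shape[1] + 1e-8)
    return face_err + expr_weight * expr_err
```

Up-weighting the expression regions (eyes, brows, mouth) is one way to push the model toward the subtle expression changes described above rather than the easier-to-fit background.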

To train the model, the study also constructed a high-quality expression training dataset containing 18 exaggerated expressions and 20 minutes of real-life video from 115 subjects. In addition, the study adopts a progressive generation strategy, which extends the method to long-term animation synthesis with high fidelity and stability.
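The sketch below illustrates one common way a progressive strategy can chain a fixed-length generator into a long animation; `generate_clip` is a hypothetical stand-in for the underlying diffusion sampler, and the window/overlap scheme is an assumption rather than the paper's exact procedure.

```python
# Progressive long-video generation by chaining overlapping fixed-length clips.
from typing import Callable, List

def progressive_generation(generate_clip: Callable[[list, list], list],
                           control_frames: list,
                           window: int = 16,
                           overlap: int = 4) -> list:
    """Generate a long animation window by window, conditioning each clip on the last frames."""
    assert window > overlap
    frames: List = []
    start = 0
    while start < len(control_frames):
        controls = control_frames[start:start + window]
        context = frames[-overlap:]                 # tail of the previous clip as conditioning
        clip = generate_clip(context, controls)     # assumed to return len(controls) frames
        frames.extend(clip if start == 0 else clip[overlap:])  # skip frames already generated
        start += window - overlap
    return frames
```

Re-using the tail of each clip as conditioning for the next is what keeps identity and motion consistent across window boundaries instead of resetting every few frames.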


Figure: Quantitative and qualitative experimental results from the paper show that Follow-Your-Emoji achieves better results than previous methods

Finally, to address the lack of benchmarks in the field of portrait animation, the study also introduces a comprehensive benchmark called EmojiBench, which includes 410 portrait animation videos of various styles covering a wide range of facial expressions and head poses. A comprehensive evaluation of Follow-Your-Emoji on EmojiBench shows that the method handles portraits and motions outside the training domain well and outperforms existing baseline methods both quantitatively and qualitatively, providing excellent visual fidelity, faithful identity representation, and precise motion rendering.

Website: Follow-Your-Emoji: Freestyle Portrait Animation

Paper: [2406.01900] Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation

