Douyin dances no longer need a real person on camera: a single photo can generate a high-quality video! Even the Hugging Face CTO has tried ByteDance's new technology
Look! Four young ladies are showing off their dance moves right in front of you:
Thought this was a clip posted by some streamers on a short-video platform?
No, No, No.
The real answer: they are fake, AI-generated, from nothing more than a single picture!
Here is what the real input looks like:
This is MagicAnimate, the latest research from the National University of Singapore and ByteDance.
Its function can be summed up in a simple formula: one picture + a set of motions = a perfectly natural-looking video.
As soon as the technology was announced, it caused a stir in tech circles, with tech giants and geeks piling in one after another.
Even the Hugging Face CTO tried it with his own avatar:
And cracked a joke while he was at it:
Is this considered fitness? I can skip the gym this week.
Some netizens, keeping right up with the times, had a go at the characters from the newly released trailer for GTA 6 (Grand Theft Auto 6):
Even meme stickers have become netizens' material of choice...
MagicAnimate has pretty much captured the tech world's attention, prompting some netizens to joke:
OpenAI can take a break.
Hot. It's really hot.
So, with MagicAnimate this popular, how do you use it?
Without further ado, let’s experience it step by step.
The project team has currently opened an online demo page on Hugging Face.
The operation is very simple and takes only three steps. For example, below are my photo and a clip of the globally popular "Subject Three" dance:
△ Video source: Douyin (ID: QC0217)
You can also try the templates provided at the bottom of the page. Note, however, that because MagicAnimate is so popular right now, "downtime" errors may appear during generation, and even if your request goes through, you may still have to wait in a queue.
......
(That's right! As of press time, we were still waiting for the result!)
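If the web queue is too long, the demo Space can in principle also be called programmatically through Gradio's Python client. The sketch below is illustrative only: the Space path, endpoint name, and argument list are assumptions, not taken from the project's documentation, so check the Space's "Use via API" page for the real signature.

```python
# Minimal sketch of calling a MagicAnimate demo Space via gradio_client.
# NOTE: the Space path, api_name, and arguments below are assumptions
# for illustration; they may differ from the actual Space.
from gradio_client import Client

client = Client("zcxu-eric/magicanimate")  # hypothetical Space path

result = client.predict(
    "my_photo.png",        # reference image (local path)
    "subject_three.mp4",   # motion sequence video (local path)
    25,                    # sampling steps (assumed parameter)
    api_name="/animate",   # hypothetical endpoint name
)
print("Generated video saved at:", result)
```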
In addition, MagicAnimate provides a local setup on GitHub, so interested readers can try it there too.
Then the next question is: how does it work?
Overall, MagicAnimate adopts a framework based on a diffusion model, designed to enhance temporal consistency, preserve the appearance of the reference image, and improve animation fidelity.
To this end, the team first developed a video diffusion model for temporal consistency modeling. It encodes temporal information by adding a temporal attention module to the diffusion network, ensuring consistency between frames of the animation.
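To give a rough idea of what such a temporal attention module looks like, here is a minimal PyTorch-style sketch. It is not the authors' implementation; the shapes and module design are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Illustrative temporal attention block (not the official MagicAnimate code).

    Given per-frame latent features, it attends across the time axis at each
    spatial location, so corresponding positions in different frames can
    exchange information and stay consistent.
    """
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        # Fold spatial positions into the batch; keep the frame axis as the sequence.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        normed = self.norm(tokens)
        out, _ = self.attn(normed, normed, normed)
        tokens = tokens + out  # residual connection
        return tokens.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)

# Example: a 2-frame toy latent with 64 channels at 16x16 resolution.
if __name__ == "__main__":
    block = TemporalAttention(channels=64)
    frames = torch.randn(1, 2, 64, 16, 16)
    print(block(frames).shape)  # torch.Size([1, 2, 64, 16, 16])
```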
Second, to maintain appearance consistency across frames, the team introduced a new Appearance Encoder to preserve the fine details of the reference image. Unlike earlier methods that rely on CLIP encoding, it extracts denser visual features to guide animation generation, which better preserves information such as identity, background, and clothing.
On top of these two components, the team further adopted a simple video fusion technique to smooth the transitions in long video animations.
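The article does not spell out the fusion step, but a common way to fuse overlapping video segments is to average the predictions for frames shared by adjacent windows. The sketch below is only a generic illustration of that idea under that assumption, not MagicAnimate's actual procedure.

```python
import numpy as np

def fuse_segments(segments, stride):
    """Blend overlapping video segments into one long sequence (illustrative).

    segments: list of arrays with shape (frames, H, W, C), each segment
              starting `stride` frames after the previous one.
    Frames covered by more than one segment are averaged, which smooths
    the transition between consecutive windows.
    """
    seg_len = segments[0].shape[0]
    total = stride * (len(segments) - 1) + seg_len
    acc = np.zeros((total,) + segments[0].shape[1:], dtype=np.float64)
    count = np.zeros(total, dtype=np.float64)
    for i, seg in enumerate(segments):
        start = i * stride
        acc[start:start + seg_len] += seg
        count[start:start + seg_len] += 1
    return acc / count[:, None, None, None]

# Example: three 16-frame segments, each overlapping the next by 8 frames.
if __name__ == "__main__":
    segs = [np.random.rand(16, 64, 64, 3) for _ in range(3)]
    video = fuse_segments(segs, stride=8)
    print(video.shape)  # (32, 64, 64, 3)
```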
Finally, the team validated MagicAnimate on two benchmarks, and the results show it is far more effective than previous methods. On the challenging TikTok dancing dataset in particular, MagicAnimate improves video fidelity by more than 38% over the strongest baseline!
Here is the qualitative comparison given by the team:
And here is the comparison with state-of-the-art baselines on cross-identity animation:
It has to be said that projects like MagicAnimate have been genuinely popular recently.
Indeed, shortly after its debut, the Alibaba team also released a project called Animate Anyone, which likewise requires only "a picture" and "the desired motion":
As a result, some netizens also raised questions:
This seems to be a war between MagicAnimate and AnimateAnyone. Who is better?
What do you think?
Please click the following link to view the paper: https://arxiv.org/abs/2311.16498