
AI video explodes again! Photo + voice becomes video: Alibaba gets Sora's heroine to sing and Leonardo DiCaprio to rap

Wang Lin (forwarded) | 2024-02-29 19:07:02

Hot on the heels of Sora, another new AI video model has arrived, and it is striking enough that everyone is liking and praising it!


With it, Gao Qiqiang, the villain of "The Knockout", can transform into law professor Luo Xiang and lecture everyone on the law (dog head).


This is Alibaba’s latest audio-driven portrait video generation framework, EMO (Emote Portrait Alive).

With it, you only need to input a single reference image and a clip of audio (speech, singing, or rap all work) to generate an AI video with vivid expressions. The final length of the video depends on the length of the input audio.

You can have the Mona Lisa, that veteran of AI effect demos, recite a monologue:


Then there is a young, handsome Leonardo DiCaprio. Even in this fast-paced rap talent-show performance, the lip sync keeps up without a problem:


Even Cantonese lip sync holds up, letting Leslie Cheung sing Eason Chan's "Unconditional":


In short, whether it is making portraits sing (across different portrait styles and songs), making portraits speak (in different languages), or all kinds of over-the-top crossover performances, EMO's results left us stunned for a moment.

Netizens lamented: "We are entering a new reality!"

The 2019 Joker delivering lines from the 2008 "The Dark Knight"

Some netizens have even started pulling EMO-generated videos apart and analyzing the results frame by frame.

In the video below, the protagonist is the AI woman generated by Sora; this time, the song she sings for you is "Don't Start Now".

Commenters analyzed:

The consistency of this video is even better than before!
In the more-than-a-minute clip, the sunglasses on Ms. Sora's face barely shift, while her ears and eyebrows move independently.
Most impressive of all, Ms. Sora's throat really seems to be breathing! Her body trembles and sways slightly as she sings, which shocked me!


That said, EMO is a hot new technology, so comparisons with similar products are inevitable.

Just yesterday, the AI video generation company Pika launched a lip-sync feature that dubs video characters and matches their mouth movements at the same time, and it made quite a splash.

What does the effect actually look like? We will put it right here:


After comparing the two, netizens in the comment section concluded that Pika had been beaten by Alibaba.


The EMO paper has been released, and the team has announced that the code will be open-sourced.

But! Although it is billed as open source, the GitHub repository is still empty.

Then again! Empty as the repo is, its star count has already passed 2.1k.


This has left netizens genuinely anxious, as anxious as the "King of Impatience" meme.


Different architecture from Sora

As soon as the EMO paper came out, many people in the field breathed a sigh of relief.

Its technical route is different from Sora's, which shows that replicating Sora is not the only way forward.

EMO is not based on a DiT-like architecture; in other words, it does not use a Transformer to replace the traditional UNet. Its backbone network is instead adapted from Stable Diffusion 1.5.

Specifically, EMO is an expressive audio-driven portrait video generation framework that can produce videos of any duration, determined by the length of the input audio.


The framework mainly consists of two stages:

  • Frame encoding stage

A UNet network called ReferenceNet is deployed here; it is responsible for extracting features from the reference image and from the video frames.

  • Diffusion stage

First, a pretrained audio encoder processes the audio to produce audio embeddings; a facial region mask is combined with multi-frame noise to control the generation of the facial imagery.

The backbone network then performs the denoising. Two kinds of attention are applied inside it: reference attention, which maintains the character's identity consistency, and audio attention, which modulates the character's movements.

In addition, temporal modules act on the time dimension to adjust the speed of the character's motion.
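To make the two-stage flow more concrete, here is a minimal PyTorch-style sketch of how such a pipeline could be wired together. Every module name and tensor shape below (ReferenceNetStub, AudioEncoderStub, DenoisingBackboneStub, and so on) is an illustrative placeholder inferred from the description above, not Alibaba's actual implementation, which has not been released at the time of writing.

```python
# Minimal, illustrative sketch of an EMO-style two-stage pipeline (NOT official code).
# All module names and tensor shapes are assumptions based on the description above.
import torch
import torch.nn as nn


class ReferenceNetStub(nn.Module):
    """Stage 1 (frame encoding): extract identity features from the reference image."""
    def __init__(self, feat_dim=320):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.SiLU(),
            nn.AdaptiveAvgPool2d(8),        # toy 8x8 spatial feature map
        )

    def forward(self, ref_image):           # (B, 3, H, W)
        return self.encoder(ref_image)       # (B, feat_dim, 8, 8)


class AudioEncoderStub(nn.Module):
    """Stand-in for the pretrained audio encoder: waveform -> per-frame embeddings."""
    def __init__(self, embed_dim=768, samples_per_frame=16000):
        super().__init__()
        self.spf = samples_per_frame
        self.proj = nn.Linear(samples_per_frame, embed_dim)

    def forward(self, waveform):              # (B, F * samples_per_frame)
        frames = waveform.unfold(1, self.spf, self.spf)   # (B, F, samples_per_frame)
        return self.proj(frames)               # (B, F, embed_dim)


class DenoisingBackboneStub(nn.Module):
    """Stage 2 (diffusion): SD-1.5-style backbone stand-in with reference attention
    (identity consistency), audio attention (motion), and a temporal module (speed)."""
    def __init__(self, feat_dim=320, audio_dim=768):
        super().__init__()
        self.to_feat = nn.Conv2d(4, feat_dim, 1)
        self.ref_attn = nn.MultiheadAttention(feat_dim, 4, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(feat_dim, 4, kdim=audio_dim,
                                                vdim=audio_dim, batch_first=True)
        self.temporal = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1)
        self.to_noise = nn.Conv2d(feat_dim, 4, 1)

    def forward(self, noisy_latents, ref_feats, audio_embeds, face_mask):
        # noisy_latents: (B, F, 4, 8, 8)   face_mask: (B, F, 1, 8, 8)
        b, f = noisy_latents.shape[:2]
        x = self.to_feat(noisy_latents.flatten(0, 1))             # (B*F, feat, 8, 8)
        x = x * face_mask.flatten(0, 1)                            # focus on the face region
        tokens = x.flatten(2).transpose(1, 2)                      # (B*F, 64, feat)

        ref_tokens = ref_feats.flatten(2).transpose(1, 2)          # (B, 64, feat)
        ref_tokens = ref_tokens.repeat_interleave(f, dim=0)        # (B*F, 64, feat)
        tokens, _ = self.ref_attn(tokens, ref_tokens, ref_tokens)  # keep identity

        audio_kv = audio_embeds.flatten(0, 1).unsqueeze(1)         # (B*F, 1, audio_dim)
        tokens, _ = self.audio_attn(tokens, audio_kv, audio_kv)    # drive motion from audio

        t = tokens.view(b, f, 64, -1).permute(0, 2, 3, 1).reshape(b * 64, -1, f)
        t = self.temporal(t)                                        # mix across time
        tokens = t.reshape(b, 64, -1, f).permute(0, 3, 1, 2).flatten(0, 1)

        x = tokens.transpose(1, 2).reshape(b * f, -1, 8, 8)
        return self.to_noise(x).view(b, f, 4, 8, 8)                # predicted noise


if __name__ == "__main__":
    B, F = 1, 16                                  # video length follows audio length
    ref_net, audio_enc, backbone = ReferenceNetStub(), AudioEncoderStub(), DenoisingBackboneStub()
    ref_feats = ref_net(torch.randn(B, 3, 64, 64))
    audio_embeds = audio_enc(torch.randn(B, F * 16000))
    noise_pred = backbone(torch.randn(B, F, 4, 8, 8), ref_feats,
                          audio_embeds, torch.ones(B, F, 1, 8, 8))
    print(noise_pred.shape)                       # torch.Size([1, 16, 4, 8, 8])
```

In the real system, the backbone would be a full Stable Diffusion 1.5 UNet with these attention and temporal layers injected at multiple resolutions and trained with a standard diffusion noise-prediction objective; the stubs above only mirror the data flow described in the paper.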

In terms of training data, the team built a large and diverse audio and video data set containing more than 250 hours of video and more than 15 million images.

The specific features of the final implementation are as follows:

  • Videos of any duration can be generated based on the input audio while ensuring character identity consistency (the longest single video given in the demonstration is 1 minute 49 seconds).
  • Supports talking and singing in various languages (the demos include Mandarin, Cantonese, English, Japanese, and Korean).
  • Supports different visual styles (photos, classical paintings, comics, 3D renders, AI digital humans).


Quantitative comparisons also show a clear improvement over previous methods, achieving SOTA overall; only on the SyncNet metric, which measures lip-sync quality alone, does EMO fall slightly short.
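For readers unfamiliar with the metric: SyncNet-style lip-sync scores are generally obtained by embedding short audio windows and the matching mouth-crop frames into a shared space and checking how sharply their similarity peaks at the correct temporal offset. The snippet below is a generic, self-contained illustration of that idea using dummy embeddings; it is not the evaluation code used in the paper.

```python
# Generic illustration of a SyncNet-style lip-sync confidence score (not the paper's code).
# Real pipelines use pretrained audio/visual encoders; here we assume precomputed embeddings.
import torch
import torch.nn.functional as F


def sync_confidence(audio_emb: torch.Tensor,
                    video_emb: torch.Tensor,
                    max_offset: int = 15) -> tuple[int, float]:
    """audio_emb, video_emb: (T, D) per-frame embeddings in a shared space.

    Returns the best audio-video offset and a confidence score: similarity at the
    best offset minus the median similarity over all tested offsets. A higher score
    means sharper, more reliable synchronization.
    """
    sims = []
    offsets = range(-max_offset, max_offset + 1)
    for off in offsets:
        if off >= 0:
            a, v = audio_emb[off:], video_emb[: len(video_emb) - off]
        else:
            a, v = audio_emb[:off], video_emb[-off:]
        sims.append(F.cosine_similarity(a, v, dim=-1).mean())
    sims = torch.stack(sims)
    best = int(sims.argmax())
    confidence = (sims[best] - sims.median()).item()
    return list(offsets)[best], confidence


if __name__ == "__main__":
    T, D = 100, 512
    shared = torch.randn(T, D)
    audio = shared + 0.1 * torch.randn(T, D)    # a well-synced pair
    video = shared + 0.1 * torch.randn(T, D)
    print(sync_confidence(audio, video))         # offset near 0, high confidence
```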


One limitation: compared with methods that do not rely on diffusion models, EMO is more time-consuming.

And because no explicit control signals are used, other body parts such as hands may be generated unintentionally; a potential solution is to introduce control signals dedicated to those body parts.

EMO’s team

Finally, let’s take a look at the people on the team behind EMO.

The paper shows that the EMO team comes from Alibaba Intelligent Computing Research Institute.

There are four authors: Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo.


Among them, Liefeng Bo is the current head of the XR Lab at Alibaba's Tongyi Laboratory.

Dr. Liefeng Bo graduated from Xidian University and did postdoctoral research at the Toyota Technological Institute at Chicago and the University of Washington. His research focuses mainly on machine learning, computer vision, and robotics, and his Google Scholar citations exceed 13,000.

Before joining Alibaba, he first served as a chief scientist at Amazon's Seattle headquarters, and then joined JD Digital Technology Group's AI lab as chief scientist.

Liefeng Bo joined Alibaba in September 2022.


EMO is not Alibaba's first success in the AIGC field.


There was OutfitAnyone, which does one-click AI outfit swapping.


There is also AnimateAnyone, which set cats, dogs, and everyone else around the world dancing the "bath dance".

This is the one below:


Now that EMO has launched, many netizens are remarking that Alibaba has clearly built up real technical depth in this area.


If all these technologies are combined now, the effect will be...

I don’t dare to think about it, but I’m really looking forward to it.


In short, we are getting closer and closer to "send a script to AI and output the entire movie".


One More Thing

Sora represents a dramatic, step-change breakthrough in text-driven video synthesis.

EMO also represents a new level of audio-driven video synthesis.

Although the two tasks are different and the specific architecture is different, they still have one important thing in common:

Neither has an explicit physical model in the middle, yet both simulate physical laws to a certain extent.

As a result, some people believe this contradicts LeCun's insistence that "modeling the world for action by generating pixels is wasteful and doomed to failure", and instead supports Jim Fan's idea of a "data-driven world model".


All sorts of approaches failed in the past, and today's success may really come down to what Sutton, the father of reinforcement learning, wrote in "The Bitter Lesson": scale works miracles.

Enable AI to discover as people do, rather than encode what people have already discovered.

Breakthrough progress is ultimately achieved by scaling up computation.

Paper: https://www.php.cn/link/a717f41c203cb970f96f706e4b12617b
GitHub: https://www.php.cn/link/e43a09ffc30b44cb1f0db46f87836f40
