


AI video explodes again! Photo + voice becomes video: Alibaba has Sora's leading lady sing and Leonardo DiCaprio rap.
Hot on the heels of Sora there is a new AI video model, and it is impressive enough to be drawing praise from everyone.
With it, Gao Qiqiang, the villain of "The Knockout", transforms into law professor Luo Xiang and lectures everyone on the law (dog head).
This is Alibaba’s latest audio-driven portrait video generation framework, EMO (Emote Portrait Alive).
With it, you can generate an AI video with vivid expressions from a single reference image and a piece of audio (speech, singing, or rap all work). The length of the final video is determined by the length of the input audio.
You can have the Mona Lisa, a veteran subject of AI demos, recite a monologue:
Here comes a young and handsome Leonardo DiCaprio. Even in this fast-paced rap, the lip sync keeps up without a problem:
Even Cantonese lip sync holds up, letting Leslie Cheung sing Eason Chan's "Unconditional":
In short, whether it is making portraits sing (portraits and songs in different styles), making portraits speak (in different languages), or staging all kinds of playful cross-actor performances, EMO's results left us stunned for a moment.
Netizens lamented: "We are entering a new reality!"
The 2019 "Joker" delivering lines from the 2008 "The Dark Knight":
Some netizens have even started pulling EMO-generated videos apart frame by frame to analyze the results.
In the video below, the protagonist is the AI lady generated by Sora; the song she sings this time is "Don't Start Now".
Commenters analyzed:
The consistency of this video is even better than before!
In the more than one minute video, the sunglasses on Ms. Sora’s face barely moved, and her ears and eyebrows moved independently.
The most exciting thing is that Ms. Sora’s throat seems to be really breathing! Her body trembled and moved slightly while singing, which shocked me!
EMO is not built on a DiT-style architecture; that is, it does not replace the traditional UNet with a Transformer. Instead, its backbone network is adapted from Stable Diffusion 1.5.
Specifically, EMO is an expressive audio-driven portrait video generation framework that can produce videos of any duration, determined by the length of the input audio.
The framework mainly consists of two stages:
- Frame encoding stage
A UNet network called ReferenceNet is deployed, responsible for extracting features from the reference image and from video frames.
- Diffusion stage
First, a pretrained audio encoder processes the audio to produce the audio embedding, and a facial region mask is combined with multi-frame noise to control where the face is generated.
The backbone network then carries out the denoising. Two kinds of attention are applied inside it: reference attention, which maintains the character's identity consistency, and audio attention, which modulates the character's movement.
In addition, temporal modules operate along the time dimension to adjust the speed of motion. A minimal sketch of one such backbone block is given below.
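To make the two stages concrete, here is a minimal PyTorch-style sketch of a single backbone block during denoising. This is our own simplified illustration under stated assumptions, not the released implementation: the class name, tensor shapes, and the plain multi-head attention layers are all hypothetical, and the real backbone is a modified Stable Diffusion 1.5 UNet rather than this toy block.

```python
import torch
import torch.nn as nn

class EMOStyleBlock(nn.Module):
    """Simplified backbone block: self-attention over one frame's latent tokens,
    reference attention for identity (keys/values from ReferenceNet features),
    audio attention for motion (keys/values from audio embeddings), plus a
    temporal module that attends across frames. All names are hypothetical."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ref_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x, ref_feats, audio_feats, batch: int, frames: int):
        # x:           (batch*frames, tokens, dim) noisy latent tokens per frame
        # ref_feats:   (batch*frames, ref_tokens, dim) ReferenceNet features
        # audio_feats: (batch*frames, audio_tokens, dim) audio-encoder embeddings
        x = x + self.self_attn(x, x, x)[0]
        x = x + self.ref_attn(x, ref_feats, ref_feats)[0]         # identity consistency
        x = x + self.audio_attn(x, audio_feats, audio_feats)[0]   # audio-driven motion
        # Temporal module: regroup so attention runs along the frame axis,
        # which is what lets the model regulate the pace of movement.
        bf, t, d = x.shape
        x = x.reshape(batch, frames, t, d).transpose(1, 2).reshape(batch * t, frames, d)
        x = x + self.temporal_attn(x, x, x)[0]
        x = x.reshape(batch, t, frames, d).transpose(1, 2).reshape(bf, t, d)
        return x + self.mlp(x)

# Smoke test with toy sizes: 2 clips x 4 frames, 16 latent tokens of width 64.
block = EMOStyleBlock(dim=64)
out = block(torch.randn(8, 16, 64), torch.randn(8, 8, 64),
            torch.randn(8, 6, 64), batch=2, frames=4)
print(out.shape)  # torch.Size([8, 16, 64])
```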
For training data, the team built a large and diverse audio-video dataset containing more than 250 hours of footage and more than 150 million images.
The specific features of the final implementation are as follows:
- Videos of any duration can be generated from the input audio while preserving character identity (the longest single video in the demos is 1 minute 49 seconds); the frame count follows directly from the audio length, as in the sketch after this list.
- Supports talking and singing in multiple languages (the demos include Mandarin, Cantonese, English, Japanese, and Korean)
- Supports a range of portrait styles (photographs, traditional paintings, comics, 3D renders, AI digital humans)
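As the first bullet implies, the output length is fixed entirely by the audio. A tiny sketch of that bookkeeping, assuming a 25 fps target; `librosa` is used only to read the duration, and the `EmoPipeline` call is an invented name for illustration, not the released API:

```python
import librosa  # only used here to read the audio file's duration

def frames_for_audio(audio_path: str, fps: int = 25) -> int:
    """The video must span the whole audio clip, so frame count = duration x fps."""
    return round(librosa.get_duration(path=audio_path) * fps)

# Hypothetical driver, mirroring the image + audio -> video interface described above:
# pipe = EmoPipeline.from_pretrained("...")                      # invented name
# video = pipe(reference_image="mona_lisa.png",
#              audio="monologue.wav",
#              num_frames=frames_for_audio("monologue.wav"))
```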
Quantitative comparisons also show large improvements over previous methods, reaching SOTA; only on SyncNet, the metric that measures lip-sync quality, is EMO slightly inferior.
Compared with methods that do not rely on diffusion models, EMO is more time-consuming.
And since no explicit control signals are used, other body parts such as hands may be generated inadvertently; a potential solution is to add control signals dedicated to those body parts, as sketched below.
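Purely as an assumption on our part, one way to realize such control is a ControlNet-style branch that encodes an explicit body-part signal (say, a hand-keypoint heatmap) and injects it into the backbone features, zero-initialized so it starts out inert:

```python
import torch
import torch.nn as nn

class BodyControlBranch(nn.Module):
    """Hypothetical ControlNet-style add-on: encodes a body-part control map
    (e.g. a hand-keypoint heatmap) and adds it into backbone features.
    The zero-initialized 1x1 projection keeps the branch silent at the
    start of training, so the pretrained backbone is not disturbed."""

    def __init__(self, in_ch: int, dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.SiLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.SiLU(),
        )
        self.zero_proj = nn.Conv2d(dim, dim, 1)
        nn.init.zeros_(self.zero_proj.weight)
        nn.init.zeros_(self.zero_proj.bias)

    def forward(self, backbone_feats, control_map):
        # backbone_feats: (B, dim, H, W); control_map: (B, in_ch, H, W)
        return backbone_feats + self.zero_proj(self.encoder(control_map))

# Smoke test: with zero initialization, the output equals the input exactly.
branch = BodyControlBranch(in_ch=1, dim=32)
feats = torch.randn(1, 32, 16, 16)
assert torch.equal(branch(feats, torch.randn(1, 1, 16, 16)), feats)
```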
EMO’s team
Finally, let's take a look at the team behind EMO.
The paper shows that the EMO team comes from Alibaba Intelligent Computing Research Institute.
There are four authors, namely Linrui Tian, Qi Wang, Bang Zhang and Liefeng Bo.
Among them, Liefeng Bo currently heads the XR Lab at Alibaba's Tongyi Lab.
Dr. Bo received his doctorate from Xidian University and did postdoctoral research at the Toyota Technological Institute at Chicago and the University of Washington. His research focuses mainly on machine learning, computer vision, and robotics, and his Google Scholar citations exceed 13,000.
Before joining Alibaba, he served as a chief scientist at Amazon's Seattle headquarters, and then as chief scientist of JD Digital Technology Group's AI Lab.
He joined Alibaba in September 2022.
EMO is not Alibaba's first success in the AIGC field.
There is OutfitAnyone, which does one-click AI outfit changes.
There is also AnimateAnyone, which set cats and dogs all over the world dancing the "bath dance".
Now that EMO has launched, many netizens are remarking that Alibaba has quietly built up serious technical depth here.
If all these technologies are combined now, the effect will be...
I don’t dare to think about it, but I’m really looking forward to it.
In short, we are getting closer and closer to "hand AI a script and get back a whole movie".
One More Thing
Sora represents a dramatic breakthrough in text-driven video synthesis.
EMO likewise takes audio-driven video synthesis to a new level.
Although the two tasks differ, and so do the specific architectures, they share one important trait:
Neither has an explicit physical model in the middle, yet both simulate physical laws to a certain extent.
Some therefore argue that this contradicts LeCun's insistence that "modeling the world for actions by generating pixels is wasteful and doomed to failure", and instead supports Jim Fan's idea of a "data-driven world model".
Various methods failed in the past; today's success may really come down to what Sutton, the father of reinforcement learning, wrote in "The Bitter Lesson": brute-force scale works miracles.
Build AI that can discover the way people do, rather than AI that merely contains what people have discovered.
Breakthrough progress ultimately comes from scaling up computation.
Paper: https://www.php.cn/link/a717f41c203cb970f96f706e4b12617b
GitHub: https://www.php.cn/link/e43a09ffc30b44cb1f0db46f87836f40
Reference Link:
[1]https://www.php.cn/link/0dd4f2526c7c874d06f19523264f6552