Use AI to Draw Yourself into an Animation: 1.5 Million+ Views in 3 Days. Professional Animator: "I Was Scared"
Just record a video, and AI will put you perfectly into an animation!
The lines, colors, and lighting all match the realistic style of American comics, and the animation itself is detailed and smooth, with a clearly respectable frame rate:
How much effort does it take to produce an anime like this, containing 120 VFX (visual effects) shots?
Just 3 people, with no costume or prop team required.
By contrast, the effects work on many past live-action blockbusters took enormous effort, with well-funded teams numbering in the hundreds.
After this anime was posted on YouTube, it received 1.5 million views and 110,000 likes within three days.
Some artists were frightened after seeing it:
I have devoted my life to painting... yet in a few years, all of this will be replaced.
Other netizens felt that "a childhood dream has come true":
When I was young, I always thought animations were made from real videos. Seeing this now, I think it's super cool.
So, how is such an anime produced?
The process is divided into three parts.
The first part uses AI to generate the animated characters and establish the mapping between them and the actors; the second part creates the animation scenes; the third composites everything and adjusts the final result.
Let's first look at the first part: using AI to redraw the characters. This was done with Stable Diffusion, fine-tuned using DreamBooth, the fine-tuning technique introduced by Google.
Take this character in the anime as an example; the upper left corner shows the actor, and the lower right corner the anime character AI created from them:
This effect cannot be achieved by simply feeding the video into Stable Diffusion frame by frame.
To make sure the characters' anime style turned out the way they wanted, the authors collected a large number of character screenshots, from all sorts of angles, from the classic anime "Vampire Hunter D" and fed them to the AI:
At the same time, the AI needs to learn and memorize the real actors' detailed features, from face to body.
So a large number of photos of the actors had to be taken in advance, under different lighting, from multiple angles, and in various poses:
This way, when the AI sees an actor in the video, it can quickly draw an anime-style character that resembles the actor, in the painting style of "Vampire Hunter D".
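The per-frame redraw described above can be sketched as a simple loop. This is only an illustrative skeleton: `stylize_frame` is a hypothetical stand-in for an img2img call to the DreamBooth-fine-tuned Stable Diffusion model, not real inference code.

```python
# Sketch of the per-frame redraw workflow. In the real pipeline each call
# would run img2img inference with the fine-tuned model; here the stand-in
# just records what would be passed in.

def stylize_frame(frame, prompt, strength=0.5):
    # Hypothetical placeholder for a Stable Diffusion img2img call,
    # conditioning on the live-action frame plus a style prompt.
    return {"source": frame, "prompt": prompt, "strength": strength}

def stylize_video(frames, prompt="anime character in the style of Vampire Hunter D"):
    # Apply the same fine-tuned model to every frame independently.
    # (Independent frames are what cause the flicker discussed below.)
    return [stylize_frame(f, prompt) for f in frames]

stylized = stylize_video(["frame_000.png", "frame_001.png"])
```

Because each frame is stylized independently, small details are free to change between frames, which is exactly the coherence problem the article turns to next.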
For hard-to-control close-up shots of faces, ControlNet was also used to further constrain the generated output.
However, there is still a problem with animations generated directly by AI.
Even when the differences between frames are small, the frames are not fully coherent: hair tips and lighting details that "jump" back and forth make the picture flicker badly:
The authors used a de-flicker plug-in (DEFlicker) to address this, while slightly lowering the frame rate to keep the motion looking smooth.
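The two fixes can be illustrated in miniature. This is not the DEFlicker plug-in itself, just a sketch of the underlying ideas: temporal smoothing of per-frame values (such as brightness) and dropping every other frame to halve the frame rate.

```python
# Minimal sketch of de-flickering and frame-rate reduction, assuming
# per-frame scalar values (e.g. average brightness) rather than full images.

def deflicker(values, alpha=0.5):
    """Exponential moving average over per-frame values to damp oscillation."""
    smoothed = []
    prev = None
    for v in values:
        prev = v if prev is None else alpha * v + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

def halve_frame_rate(frames):
    """Keep every second frame, e.g. 24 fps down to 12 fps ('on twos')."""
    return frames[::2]

print(deflicker([1.0, 0.0, 1.0, 0.0]))       # the 1/0 flicker is damped
print(halve_frame_rate([0, 1, 2, 3, 4, 5]))  # [0, 2, 4]
```

Animating "on twos" (12 drawings per second) is also how much traditional hand-drawn animation works, which may be why the lowered frame rate reads as smoother rather than choppier here.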
The second part is scene generation. Here the authors used Unreal Engine directly, along with some of its bundled 3D assets, to build the animation backgrounds.
For example, this rotating background looks very cool:
This background was made by stitching together a large number of scene photos and scrolling them quickly:
The last part is polishing after compositing.
The team did this themselves, adding 3D elements (such as flickering candles) to the footage to better embed the characters in the scene:
They also layered in plenty of retro visual effects, making the animation look considerably more polished:
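Layering characters and effects over the backgrounds comes down to alpha compositing. As a minimal, hypothetical sketch of the "over" operation that compositing software performs on every pixel:

```python
# Single-channel alpha-over composite: blend a foreground value over a
# background value by the foreground's opacity. Real compositing applies
# this per pixel, per color channel, across every frame.

def alpha_over(fg, bg, alpha):
    """Blend a foreground channel value over a background value (alpha in [0, 1])."""
    return alpha * fg + (1 - alpha) * bg

# A half-transparent candle glow (value 200) over a dark background (value 50):
print(alpha_over(200, 50, 0.5))  # 125.0
```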
The result is the AI-assisted anime shown above.
The team behind this animation, Corridor Crew, belongs to the American production studio Corridor Digital.
Corridor Digital was founded in 2009. Over the years, it has created many popular short videos focusing on special effects.
The best-known is the 2019 Boston Dynamics robot spoof "New Robot Can Now Fight Back!", in which Atlas strikes back.
In the video, Atlas was repeatedly bullied and abused by people, and finally couldn't bear it anymore and launched an attack on humans:
This video was once thought to be a real video leaked from Boston Dynamics, until Corridor Digital came forward to deny it.
Boston Dynamics later officially debunked it as well, clarifying that it was CGI produced by Corridor Digital.
The Corridor Crew team at Corridor Digital runs "behind-the-scenes" shows that explain the special effects and production techniques behind various blockbusters. Team members screen TV series and films, then discuss and break down their visual-effects shots.
They have also invited many VFX artists and actors onto the show to watch their reactions to various effects.
Regarding the team's AI animation, one netizen joked:
How about inviting animators next time, to see their reactions to this AI anime?
So, what do you think of the animation effect made by AI?