


When it comes to special-effects gameplay, Douyin's knack for "doing all the work for you" is well known. Recently, a "Cartoon Face" effect has been in the limelight: men, women, and children alike, after applying it, look as lively and cute as characters stepping out of a Disney animation. Once launched, "Cartoon Face" spread quickly on Douyin and was embraced by users. Trending topics such as "One click to a tall, sweet cartoon face", "All the runaway princesses on Douyin are here", "Showing off my baby in cartoon-face style", "Prince and princess sugar-sprinkling gesture dance", and "Capturing the moment fairy-tale magic fails" kept growing; among them, "All the runaway princesses on Douyin are here" and "Capturing the moment fairy-tale magic fails" even made Douyin's national trending list. To date, more than 9 million users have used the effect.
"Cartoon Face" is a 3D-style effect. Effects of this type are hard to develop for three main reasons: diverse CG training data is difficult to obtain; vivid expressions and realistic, three-dimensional skin lighting are difficult to reproduce; and a GAN struggles to learn the exaggerated, strongly stylized deformation of facial features. To tackle these problems, ByteDance's Intelligent Creation team focused on breakthrough optimization in 3D stylization, solving all of the above and distilling the work into a reusable, general-purpose technical solution.
Innovation in the R&D process behind "Cartoon Face"
In the past, a complete 3D-stylization R&D process was divided into the following modules:
Collect a batch of original style images -> train a large StyleGAN model -> generate paired data -> manually select usable pairs and have designers retouch them -> train a small p2p model, then repeat.
The problems with this traditional process are obvious: the iteration cycle is long, designers' involvement is limited, and the results are hard to accumulate and reuse.
For the "Cartoon Face" effect, the ByteDance Intelligent Creation team adopted an innovative R&D process:
The process starts with the designer producing the target style. Following requirements agreed with the algorithm team, the designer provides a set of 3D art materials; the team then uses DCC software to batch-render a large volume of diverse CG data. During rendering, the team introduced AIGC technology for the first time to augment the data, then used a GAN to synthesize the paired data required for training, and finally trained a self-developed deformation pix2pix model to obtain the final effect.
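The stages above can be sketched as a simple sequential pipeline. This is an illustrative outline only; every function here is a hypothetical placeholder standing in for a full subsystem, not ByteDance's actual code.

```python
# Hypothetical sketch of the "Cartoon Face" data/training pipeline.
# All stage functions are illustrative placeholders.

def render_cg_data(art_assets):
    # Batch-render diverse CG images from the designer's 3D materials (DCC step).
    return [f"cg_render_of_{a}" for a in art_assets]

def augment_with_aigc(cg_images):
    # Augment the renders with a generative (AIGC) model.
    return [img + "_augmented" for img in cg_images]

def synthesize_pairs(images):
    # Use a GAN to synthesize (input, stylized) training pairs.
    return [(img, img + "_stylized") for img in images]

def train_deform_pix2pix(pairs):
    # Train the deformation pix2pix model on the paired data.
    return {"model": "deform_pix2pix", "num_pairs": len(pairs)}

def run_pipeline(art_assets):
    cg = render_cg_data(art_assets)
    aug = augment_with_aigc(cg)
    pairs = synthesize_pairs(aug)
    return train_deform_pix2pix(pairs)
```

The point of the structure is that each stage feeds the next, so any stage can be iterated on (e.g., better augmentation) without reworking the rest.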
R&D flow chart of “Cartoon Face” by ByteDance Intelligent Creation Team
As the process shows, this approach greatly shortens the iteration cycle, raises the degree of automation, and gives designers a higher degree of participation. In practice, the new engineering pipeline cut the iteration cycle from 6 months to 1 month, and the solution is easier to accumulate and reuse.
How the "Cartoon Face" effect was designed
Transformation effects are increasingly common on social media, and people pay ever more attention to their aesthetics and accuracy. To help users achieve a convincing stylized transformation, Douyin's effect designers researched carefully, drew on popular animation styles, and designed a cartoon-face effect that lets users experience an animation-like, expressive character style while also satisfying their wish to look more beautiful or handsome.
Douyin's effect designers studied the transformation effects already on the market in depth and found common problems: insufficient stylization, under-exaggerated expressions, and unconvincing lighting. They therefore redesigned the cartoon-face style around domestic aesthetics, exaggerating the facial proportions of men and women and reconstructing them into "girls" with cute round faces and lively features and "boys" with angular, longer faces and handsome features. In the process, the designers kept the user's own hair, enhancing its fluffiness and gloss so it blends naturally with the cartoon face; the cartoon-textured skin also incorporates details of the user's own skin, giving the effect more of the user's personal characteristics.
In addition, the designers defined how light and shadow should look under different lighting, meeting the need for lighting restoration in complex scenes and making the cartoon face more three-dimensional and natural, so it blends seamlessly into everyday selfies. Finally, the designers created exaggerated, iconic facial expressions, used facial-capture technology to generate expression CG data for the digital-human assets, and continuously improved the training data and algorithms to produce expressions that display the user's personality more vividly.
A self-built CG synthetic-data pipeline with reusable high-quality training data
Training data for 3D-style effects depends on high-quality CG rendering data, and the required data distribution must be highly diverse. At the same time, manually modeling 3D assets is very labor-intensive and the assets are rarely reusable: a project often spends expensive manpower and time producing a batch of 3D assets, only to abandon them completely when the project ends.
This time, the ByteDance Intelligent Creation team built a general, easily extensible CG synthetic-data workflow.
Flowchart of the CG synthesis data flow of the Bytedance Intelligent Creation Team
The workflow of this synthetic data flow is as follows:
1. Procedurally generate digital assets in Houdini: procedural face shaping, skeleton binding, weight adjustment, and so on, to build a realistic digital-human asset library.
Diverse 3D digital assets
2. Build USD templates with Houdini's Solaris and bring in assets such as hair, fur, head models, clothing, and expression coefficients via USD references.
Skin map sample
Iris map sample
3. Use Houdini's PDG to randomly combine assets, camera angles, lighting environments, and so on, controlling work items through PDG to precisely shape the data distribution.
Automated PDG node graph
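The idea of step 3 can be illustrated outside Houdini with a tiny sampler that combines assets, camera angles, and lighting environments into render work items while keeping the distribution controlled. This is a hedged sketch in plain Python; the parameter names are assumptions and this is not the actual PDG API.

```python
import itertools
import random

# Illustrative stand-in for the PDG step: randomly combine assets, camera
# angles, and lighting environments into render work items, keeping explicit
# control over how the combinations are sampled.

def make_work_items(heads, cameras, lights, n_items, seed=0):
    rng = random.Random(seed)
    all_combos = list(itertools.product(heads, cameras, lights))
    # Sample without replacement so no combination is over-represented.
    chosen = rng.sample(all_combos, min(n_items, len(all_combos)))
    return [
        {"head": h, "camera": c, "light": l, "frame": i}
        for i, (h, c, l) in enumerate(chosen)
    ]
```

In Houdini itself this role is played by PDG work items fanned out from wedge-style nodes; the seed makes a batch reproducible, which matters when a designer asks to re-render a specific slice of the data.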
Because the R&D process frequently requires large volumes of rendering data for effect iteration, it incurs heavy compute costs and long render waits. For Douyin's earlier "Magic Transformation" effect, the team spent millions renting external render farms. For "Cartoon Face", the team instead relied on the solid infrastructure of ByteDance's cloud platform, Volcano Engine, greatly reducing compute costs.
Drawing on film-industry practice, the ByteDance Intelligent Creation team built a self-developed render-farm platform that splits offline tasks across many render machines for parallel processing. With Volcano Engine's image platform for image hosting, its resource-pooling platform for resource allocation and release, CPU/GPU clusters with dynamic container scaling, and NAS for asset management, the farm can expand to thousands of render nodes with one click and compute efficiently.
On this basis, the team customized the single-task processing logic, including pre-processing, engine rendering, and post-processing, and dynamically scales the cluster up or down on demand to make the most of computing resources.
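The core of fanning a render job out to many machines is splitting its frame range into per-node chunks. A minimal sketch, with the chunking policy assumed for illustration (the actual farm's scheduler is not described in the source):

```python
# Minimal sketch of splitting an offline render job into per-node chunks,
# as a render farm does when distributing a task across many machines.

def split_frames(start, end, num_nodes):
    """Split the inclusive frame range [start, end] into near-equal chunks."""
    total = end - start + 1
    base, extra = divmod(total, num_nodes)
    chunks, cursor = [], start
    for i in range(num_nodes):
        size = base + (1 if i < extra else 0)
        if size == 0:  # more nodes than frames: stop early
            break
        chunks.append((cursor, cursor + size - 1))
        cursor += size
    return chunks
```

For example, `split_frames(1, 10, 3)` yields `[(1, 4), (5, 7), (8, 10)]`; each tuple becomes one node's render task, which is why the cluster can be scaled up or down simply by changing `num_nodes`.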
To further improve efficiency and make it easier for designers to take part in effect optimization, the technical team also built a Feishu mini-app for designers: it can trigger automated cloud processes to iterate on art effects, and when a cloud task finishes, a message is sent back to Feishu for designers to review, greatly improving their productivity.
The team also customized an event trigger (EventTrigger) and APIs to connect the farm, the Feishu platform, and the cloud-desktop platform, realizing an "all-in-one" setup that lets designers and engineers collaborate more conveniently through Feishu and Cloud Desktop.
Self-developed rendering farm platform
New applications of AIGC technology
With the advent of DALL·E, the ByteDance Intelligent Creation team began tracking and planning related technologies in early 2021. Building on the open-source Stable Diffusion model, the team constructed a dataset of one billion samples and trained two models: a general-purpose diffusion model that can generate images in oil-painting and ink-painting styles, and an animation-style diffusion model.
Not long ago, the "AI Painting" effect powered by the ByteDance Intelligent Creation team went viral on Douyin using this new technology. For Douyin's "Cartoon Face", the team went further, exploring the diffusion model's ability to generate 3D cartoon styles with an image-to-image strategy: first add noise to the image, then use the trained text-to-image model to guide the denoising with text. Starting from a pre-trained Stable Diffusion model, they input the GAN-generated target 3D-style image matching the real portrait, steered the style toward the desired direction with a finely tuned set of text keywords, and used Stable Diffusion's output as the final training data for the subsequent GAN model.
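The "add noise, then denoise under text guidance" idea rests on the standard DDPM forward process: the starting image is noised only up to an intermediate timestep determined by a strength parameter, so the denoiser keeps the image's structure while restyling it. A numpy sketch of just the forward-noising step; the linear beta schedule and the strength convention follow common open-source practice, not ByteDance's exact settings.

```python
import numpy as np

# Forward diffusion q(x_t | x_0) with a linear beta schedule, as used in
# image-to-image pipelines: noise the input up to a strength-dependent
# timestep t, then (not shown) denoise from t under text guidance.

def linear_alpha_bar(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    betas = np.linspace(beta_start, beta_end, num_steps)
    return np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def noise_image(x0, strength, alpha_bar, rng):
    """Noise x0 up to the timestep implied by strength in [0, 1]."""
    t = int(strength * (len(alpha_bar) - 1))
    eps = rng.standard_normal(x0.shape)
    a = alpha_bar[t]
    # x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps, t
```

A low strength keeps the GAN's 3D-style result largely intact and only lets the text prompt nudge details; a high strength hands more of the image over to the diffusion model.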
Self-developed deformation GAN model
Because the target style of Douyin's "Cartoon Face" deforms the original portrait much more strongly, the traditional p2p framework struggles to produce high-quality training results directly. The ByteDance Intelligent Creation team therefore developed its own p2p deformation-GAN training framework, which works well for cartoon targets with large deformation and strong stylization. The framework consists of two parts:
1. Preliminary stylization training to extract cartoon-face style information. The technical team built an unpaired training framework with interactive fusion of style information; feeding real-person and cartoon-face datasets into it extracts the cartoon-face style information. This end-to-end framework covers style feature encoding, feature fusion, reconstruction training, and preliminary stylization training. Once training completes, the extracted cartoon-face style information is passed to the next, refined training step.
2. Fuse the cartoon-face style information and train precisely. The style information obtained in step 1 includes both style and deformation; it is fused into the real-person image for refined training, with p2p-style strong-supervision losses applied to the paired training. When training converges, the cartoon-face model is obtained.
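The "strong supervision" in step 2 typically means a paired reconstruction loss plus an adversarial term. A minimal numpy sketch of such a generator loss, assuming a pix2pix-style L1 term and an LSGAN-style least-squares adversarial term; the loss weights and the LSGAN choice are illustrative assumptions, not ByteDance's published formulation.

```python
import numpy as np

# Sketch of a pix2pix-style paired generator loss: L1 reconstruction between
# the generated cartoon face and its paired target, plus a least-squares
# adversarial term on the discriminator's score of the fake image.

def paired_generator_loss(fake, target, d_score_fake, lambda_l1=100.0):
    l1 = np.mean(np.abs(fake - target))       # paired (strong) supervision
    adv = np.mean((d_score_fake - 1.0) ** 2)  # push fakes toward "real" score 1
    return adv + lambda_l1 * l1
```

The large `lambda_l1` weight is what makes the paired data dominate training, which is why the quality of the GAN/diffusion-synthesized pairs from the earlier stages matters so much.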
Built on these innovations, Douyin's "Cartoon Face" not only simplifies the engineering pipeline and greatly improves iteration efficiency, but also shows clear improvements in large head angles, rich expressions, style fidelity, lighting consistency, and matching across skin tones. The ByteDance Intelligent Creation team behind the project has focused on breakthrough optimization in 3D stylization since 2021; the same technical solution has supported a variety of 3D-style effects that became hits on the platform.
About the ByteDance Intelligent Creation team:
The Intelligent Creation team is ByteDance's AI & multimedia technology middle platform. By building leading technologies in computer vision, audio/video editing, and effects processing, it supports many of the company's product lines, such as Douyin, Jianying, and Toutiao, and through Volcano Engine provides external ToB partners with industry-leading intelligent-creation capabilities and industry solutions.
The above is the detailed content of The 'cartoon face' special effects technology used by more than 9 million people on Douyin is revealed. For more information, please follow other related articles on the PHP Chinese website!



