
Mixing Yang Mi and Taylor: Xiaohongshu's InstantID works seamlessly with SD and ControlNet

WBOY
2024-01-18 17:15:25

I have to say, taking photos these days has become "ridiculously easy".

You don't need to appear on camera in person, and you don't need to fuss over poses or hairstyles. Just provide one image of yourself, wait a few seconds, and you get seven completely different styles:

[Image: seven different styles generated from a single photo]

Look closely: the styling and poses are all done for you, and the images come out ready to use, with no retouching needed.

Before this, a shoot like that would have taken at least a full day in a photo studio, leaving us, the photographer, and the makeup artist exhausted.

The above is the power of an AI called InstantID.

In addition to realistic photos, it can also be "non-human":

For example, a cat's head and a cat's body, but look closely and it has your facial features.

[Image: cat-bodied character with the user's facial features]

Not to mention the various virtual styles:

[Image: examples of various virtual styles]

In style 2, for example, a real person turns directly into a stone statue.

Of course, you can also input a stone statue and transform it the other way:

[Image: a stone statue input transformed into other styles]

By the way, it can also pull off the advanced trick of fusing two faces. Here is what 20% Yang Mi plus 80% Taylor looks like:

[Image: fusion result of 20% Yang Mi and 80% Taylor]
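The 20%/80% blend above can be understood as a weighted interpolation between two face ID embeddings. A minimal sketch with NumPy; `mix_identities` and the 512-dimensional embeddings are illustrative assumptions, not code published by the authors:

```python
import numpy as np

def mix_identities(emb_a: np.ndarray, emb_b: np.ndarray, weight_a: float) -> np.ndarray:
    """Linearly blend two face ID embeddings and re-normalize.

    Hypothetical helper: a weighted sum of ID embeddings is the standard
    way to fuse two identities; the exact mixing code is an assumption.
    """
    mixed = weight_a * emb_a + (1.0 - weight_a) * emb_b
    # Face ID embeddings are typically unit-norm, so re-normalize the blend.
    return mixed / np.linalg.norm(mixed)

# Toy 512-d vectors standing in for two extracted face ID embeddings.
rng = np.random.default_rng(0)
emb_yangmi = rng.standard_normal(512)
emb_taylor = rng.standard_normal(512)

# 20% Yang Mi, 80% Taylor -- the mix shown in the image above.
blend = mix_identities(emb_yangmi, emb_taylor, weight_a=0.2)
```

The blended embedding is then fed to the generator in place of a single-face embedding, which is why the output face sits "between" the two inputs.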

One photo, unlimited high-quality transformations; the only limit is what you can think up.

So, how is this done?

Built on diffusion models, seamlessly integrated with SD

The authors explain that current image stylization techniques can already complete the task with a single forward pass (i.e., based on an ID embedding).

But these techniques have problems: they either require extensive fine-tuning of many model parameters, lack compatibility with community pre-trained models, or fail to maintain high-fidelity facial features.

To solve these challenges, they developed InstantID.

InstantID is built on diffusion models, and its plug-and-play module can deftly handle all kinds of stylized transformations from just a single facial image, with genuinely high fidelity.

Most notably, it integrates seamlessly with popular pre-trained text-to-image diffusion models (such as SD 1.5 and SDXL) and can be used as a plug-in.

Specifically, InstantID consists of three key components:

(1) an ID embedding that captures robust semantic face information;

(2) a lightweight adapter module with decoupled cross-attention, which lets an image serve as a visual prompt;

(3) an IdentityNet that encodes detailed features of the reference image through additional spatial control and ultimately completes image generation.

[Image: InstantID architecture diagram]
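Component (2) is the core trick: text features and face-ID features get separate cross-attention branches whose outputs are summed, so the identity signal does not compete with the prompt inside a single attention map. A minimal NumPy sketch of that idea; the function names, shapes, and the additive combination with an `id_scale` weight are illustrative assumptions, not the released implementation:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention over one set of key/value features."""
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def decoupled_cross_attention(latent_q, text_k, text_v, id_k, id_v, id_scale=1.0):
    """Run separate attention branches for text and face-ID features,
    then sum them -- the gist of the decoupled adapter in component (2)."""
    text_out = attention(latent_q, text_k, text_v)
    id_out = attention(latent_q, id_k, id_v)
    return text_out + id_scale * id_out

# Toy shapes: 16 latent tokens, 77 text tokens, 4 ID tokens, feature dim 64.
rng = np.random.default_rng(1)
q = rng.standard_normal((16, 64))
out = decoupled_cross_attention(
    q,
    rng.standard_normal((77, 64)), rng.standard_normal((77, 64)),
    rng.standard_normal((4, 64)), rng.standard_normal((4, 64)),
)
```

Because the ID branch is a separate additive term, its strength can be scaled (here via `id_scale`) or the adapter dropped entirely, which is what makes the module plug-and-play on top of a frozen UNet.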

Compared with previous work in the industry, InstantID has several differences:

First, no training of the UNet is needed, so the generation capabilities of the original text-to-image model are preserved, and it stays compatible with existing pre-trained models and ControlNet in the community.

Second, no test-time tuning is required: for a given style there is no need to collect multiple images for fine-tuning; a single inference on a single image is enough.

Third, besides achieving better facial fidelity, text editability is retained. As the picture below shows, a few words are enough to change the subject's gender, swap the suit, or change the hairstyle and hair color.

[Image: text-driven edits changing gender, suit, hairstyle, and hair color]

Again, all of the effects above are produced in a few seconds from just one reference image.

The experiment shown below demonstrates that extra reference images add little; one image already does the job well.

[Image: results with one vs. multiple reference images]

The following are some specific comparisons.

The baselines are the existing tuning-free SOTA methods: IP-Adapter (IPA), IP-Adapter-FaceID, and PhotoMaker, which Tencent released just two days earlier.

Clearly the field is competitive and none of them performs badly, but on careful comparison, PhotoMaker and IP-Adapter-FaceID both offer good fidelity while their text-control ability is noticeably weaker.

[Image: comparison with IP-Adapter, IP-Adapter-FaceID, and PhotoMaker]

By contrast, InstantID blends faces and styles better, achieving higher fidelity while retaining good text editability.

There is also a comparison with InsightFace's Swapper model. Which one do you think is better?

[Image: comparison with the InsightFace Swapper model]

About the authors

The paper has five authors, from the somewhat mysterious InstantX team (little information about it can be found online).

The first author is Qixun Wang of Xiaohongshu.

The corresponding author, Haofan Wang, is also an engineer at Xiaohongshu; he works on controllable and conditional content generation (AIGC) and is a CMU '20 alumnus.


