SDXL Turbo and LCM usher in the era of real-time AI image generation: images appear as fast as you can type
On Tuesday, Stability AI launched a new-generation image synthesis model, Stable Diffusion XL Turbo, which drew an enthusiastic response. Many say that text-to-image generation has never been this easy.
Type your idea into the input box and SDXL Turbo responds immediately, generating the corresponding image with no further steps. Whether the prompt is long or short, the speed is unaffected.
You can also steer the output more precisely with an image input: hold up a blank sheet of paper and tell SDXL Turbo you want a white cat, and before you finish typing, the little white cat already appears on the paper in your hands.
SDXL Turbo's speed has reached a nearly "real-time" level, which raises a question: what else could an image generation model this fast be used for?
Someone hooked it directly into a game and got style-transferred gameplay at about 2 fps:
According to the official blog, on an A100, SDXL Turbo can generate a 512x512 image in 207 ms end to end (prompt encoding + a single denoising step + decoding, in fp16), of which the single UNet forward pass takes 67 ms.
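For readers who want to try this themselves, here is a minimal single-step inference sketch using the Hugging Face diffusers library. It follows the settings described above (fp16, one denoising step, no classifier-free guidance); exact timings will of course vary with hardware.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load SDXL Turbo in fp16, matching the blog's fp16 timing setup
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# SDXL Turbo is distilled for single-step sampling; classifier-free
# guidance is disabled (guidance_scale=0.0), as described in the paper
image = pipe(
    prompt="a photo of a white cat sitting on a sheet of paper",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("cat.png")
```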
By this measure, text-to-image generation has entered the "real-time" era.
Such "instant generation" efficiency looks somewhat similar to the Tsinghua LCM model that became popular not long ago, but the technical content behind them is different. Stability detailed the inner workings of the model in a research paper released at the same time. The research focuses on a technology called Adversarial Diffusion Distillation (ADD). One of the claimed advantages of SDXL Turbo is its similarity to generative adversarial networks (GANs), particularly in generating single-step image outputs.
Paper address: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/65663480a92fba51d0e1023f/1701197769659/adversarial_diffusion_distillation.pdf
Paper details
To this end, the researchers introduced a combination of two training objectives: (i) an adversarial loss and (ii) a distillation loss corresponding to score distillation sampling (SDS). The adversarial loss forces the model to generate samples that lie on the real-image manifold in a single forward pass, avoiding the blurring and other artifacts common in other distillation methods. The distillation loss uses another pretrained (and frozen) diffusion model as a teacher, effectively leveraging its extensive knowledge and retaining the strong compositionality observed in large diffusion models. During inference, the researchers did not use classifier-free guidance, further reducing memory requirements. The model retains the ability to improve results through iterative refinement, an advantage over previous single-step GAN-based approaches.
The training procedure is shown in Figure 2; a simplified sketch of the combined objective follows.
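To make the two objectives concrete, here is a heavily simplified PyTorch-style sketch of one ADD training step. This is our own illustration under assumed interfaces, not Stability AI's code: `student`, `teacher`, and `discriminator` stand in for the real networks, and `add_noise` uses a toy linear schedule.

```python
import torch
import torch.nn.functional as F

def add_noise(x, noise, t, num_steps=1000):
    # Toy linear alpha-bar schedule, purely illustrative
    alpha_bar = (1.0 - t.float() / num_steps).view(-1, 1, 1, 1)
    return alpha_bar.sqrt() * x + (1.0 - alpha_bar).sqrt() * noise

def add_training_losses(student, teacher, discriminator, x0, lambda_distill=1.0):
    n = x0.shape[0]

    # Noise the real images at one of a few student timesteps
    student_steps = torch.tensor([999, 749, 499, 249])
    s = student_steps[torch.randint(0, 4, (n,))]
    x_s = add_noise(x0, torch.randn_like(x0), s)

    # (i) Adversarial loss: the single-step student sample must look
    # real to the discriminator (keeps outputs sharp, on-manifold)
    x_hat = student(x_s, s)                 # one-step prediction of x0
    loss_adv = -discriminator(x_hat).mean()

    # (ii) Distillation loss: re-noise the student sample and pull it
    # toward the frozen teacher's denoising estimate
    t = torch.randint(0, 1000, (n,))
    x_hat_t = add_noise(x_hat, torch.randn_like(x_hat), t)
    with torch.no_grad():
        x_teacher = teacher(x_hat_t, t)     # teacher's x0 estimate
    loss_distill = F.mse_loss(x_hat, x_teacher)

    return loss_adv + lambda_distill * loss_distill
```

In the paper itself the adversarial term uses a hinge loss with a feature-based discriminator and the distillation target is weighted; the sketch only conveys the structure of the combined objective.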
Table 1 reports the ablation results and the main conclusions drawn from them.
Next is a comparison with other SOTA models. Here the researchers did not rely on automated metrics but instead chose the more reliable approach of human preference evaluation, with the goal of assessing prompt adherence and overall image quality. To compare the model variants (StyleGAN-T, OpenMUSE, IF-XL, SDXL, and LCM-XL), the experiment generated outputs from the same prompts. In blind tests, SDXL Turbo beat LCM-XL's 4-step configuration using a single step, and beat SDXL's 50-step configuration using only 4 steps. These results show that SDXL Turbo outperforms state-of-the-art multi-step models while drastically reducing computational requirements, without sacrificing image quality. The chart below plots ELO score against inference speed.
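For reference, ELO scores of this kind are typically derived from pairwise human preference votes using the standard Elo update rule. Below is a minimal sketch of that rule; it is our illustration (the function name and k-factor are our choices), not the paper's exact protocol.

```python
def elo_update(r_a, r_b, a_wins, k=32.0):
    """Standard Elo update after one pairwise preference vote.

    r_a, r_b: current ratings of models A and B
    a_wins:   True if the rater preferred model A's image
    k:        update step size (the k-factor)
    """
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Example: one vote preferring SDXL Turbo (1 step) over LCM-XL (4 steps)
r_turbo, r_lcm = elo_update(1000.0, 1000.0, a_wins=True)
```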
In Table 2, different few-step sampling and distillation methods built on the same base model are compared. The results show that the ADD method outperforms all the others, including an 8-step standard DPM solver.
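As a point of reference, an 8-step DPM-solver baseline of the kind Table 2 compares against can be run in diffusers roughly as follows. This is a sketch under an assumed checkpoint name, not the paper's evaluation code.

```python
import torch
from diffusers import AutoPipelineForText2Image, DPMSolverMultistepScheduler

# Standard SDXL base model with a multistep DPM solver at 8 steps
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="a photo of a white cat sitting on a sheet of paper",
    num_inference_steps=8,
).images[0]
```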
In addition to the quantitative results, the paper also presents qualitative results demonstrating ADD-XL's ability to improve on initial samples. Figure 3 compares ADD-XL (1 step) with the current best baselines in the few-step regime. Figure 4 illustrates ADD-XL's iterative sampling process. Figure 8 directly compares ADD-XL with its teacher model, SDXL-Base. As the user studies show, ADD-XL outperforms its teacher in both quality and prompt alignment.
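The iterative refinement shown in Figure 4 corresponds to simply sampling for more than one step. With the pipeline loaded as in the earlier single-step sketch, that is just (again, our illustration):

```python
# Reusing `pipe` from the single-step sketch above: four refinement
# steps instead of one, still without classifier-free guidance
image = pipe(
    prompt="a photo of a white cat sitting on a sheet of paper",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
```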
For more research details, please refer to the original paper.