
New work from CMU's Jun-Yan Zhu and Adobe: 512×512 image inference in only 0.11 seconds on an A100


With one click, a simple sketch can be turned into a painting in a variety of styles, optionally guided by an extra text description. This is the result of a new study jointly conducted by CMU and Adobe.

CMU Assistant Professor Jun-Yan Zhu is an author of the new work. His team previously published related research at ICCV 2021 showing how an existing GAN model can be customized with one or a few hand-drawn sketches to generate images that match the sketch.


  • Paper address: https://arxiv.org/pdf/2403.12036.pdf
  • GitHub address: https://github.com/GaParmar/img2img-turbo
  • Trial address: https://huggingface.co/spaces/gparmar/img2img-turbo-sketch
  • Paper title: One-Step Image Translation with Text-to-Image Models

How well does it work? We tried it out and found it very fun to play with. The output styles are diverse, including cinematic, 3D model, anime, digital art, photographic, pixel art, fantasy art, neon punk, and comic.


Prompt: "duck".


Prompt: "a small house surrounded by vegetation".


Prompt: "Chinese boys playing basketball".


Prompt: "Muscle Man Rabbit".


In this work, the researchers target the problems that arise when conditional diffusion models are applied to image synthesis. Such models let users generate images from spatial conditioning and text prompts, giving precise control over scene layout, user sketches, and human poses.

The problem is that the iterative nature of diffusion models makes inference slow, which limits real-time applications such as interactive sketch-to-photo. In addition, training usually requires large-scale paired datasets, which is prohibitively expensive for many applications and simply infeasible for others.

To address these issues, the researchers introduce a general method that uses adversarial learning objectives to adapt a single-step diffusion model to new tasks and domains. Specifically, they consolidate the separate modules of a vanilla latent diffusion model into a single end-to-end generator network with a small number of trainable weights, which improves the model's ability to preserve the structure of the input image while reducing overfitting.
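For reference, an adversarial objective in its simplest form pairs a generator loss with a discriminator loss. The hinge formulation below is a common choice in image synthesis and is shown only as a generic illustration, not as the specific loss used in the paper:

```python
import torch
import torch.nn.functional as F

def discriminator_hinge_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    """Discriminator pushes real logits above +1 and fake logits below -1."""
    return F.relu(1.0 - real_logits).mean() + F.relu(1.0 + fake_logits).mean()

def generator_hinge_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    """Generator tries to raise the discriminator's score on its outputs."""
    return -fake_logits.mean()
```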

The researchers introduce the CycleGAN-Turbo model. In the unpaired setting, it outperforms existing GAN-based and diffusion-based methods on a variety of scene translation tasks, such as day-to-night conversion and adding or removing weather effects like fog, snow, and rain.

To verify the versatility of the architecture, the researchers also ran experiments in the paired setting. The results show that their pix2pix-Turbo model produces visual quality on par with recent conditional methods on Edge2Image and Sketch2Photo tasks, while reducing inference to a single step.

In summary, this work demonstrates that one-step pre-trained text-to-image models can serve as a powerful, versatile backbone for many downstream image generation tasks.

Method introduction

This study proposes a general method for adapting a single-step diffusion model (such as SD-Turbo) to new tasks and domains through adversarial learning. This leverages the internal knowledge of the pre-trained diffusion model while enabling efficient inference (e.g., 0.29 seconds on an A6000 and 0.11 seconds on an A100 for a 512×512 image).
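Timings like these can, in principle, be checked with a simple GPU timing harness along the following lines. This is a minimal sketch: `model` is whatever single-step generator you load yourself, and the input shape and iteration counts are assumptions, not the paper's benchmarking protocol.

```python
import torch

def measure_latency(model, image_size=512, channels=3, warmup=5, runs=50):
    """Average GPU time (seconds) for one forward pass of a single-step generator."""
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(1, channels, image_size, image_size, device=device)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(warmup):       # warm up kernels before timing
            model(x)
        torch.cuda.synchronize()
        start.record()
        for _ in range(runs):
            model(x)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / runs / 1000.0  # milliseconds -> seconds
```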

Additionally, the single-step conditional models CycleGAN-Turbo and pix2pix-Turbo can perform a variety of image-to-image translation tasks in both paired and unpaired settings. CycleGAN-Turbo surpasses existing GAN-based and diffusion-based methods, while pix2pix-Turbo is on par with recent work such as ControlNet for Sketch2Photo and Edge2Image, with the added advantage of single-step inference.

Add conditional input

To convert a text-to-image model into an image translation model, the first step is to find an efficient way to incorporate the input image x into the model.

A common strategy for incorporating conditional inputs into diffusion models is to introduce an additional adapter branch, as shown in Figure 3.


Specifically, this strategy initializes a second encoder, labeled the condition encoder. The condition encoder takes the input image x and passes feature maps at multiple resolutions to the pre-trained Stable Diffusion model through residual connections. This approach has achieved remarkable results in controlling diffusion models.
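The adapter-branch idea can be sketched roughly as follows. This is a simplified, ControlNet-style illustration, not the authors' code; it assumes the backbone encoder returns a list of multi-resolution feature maps, and the zero-initialized 1×1 convolutions make the branch a no-op at the start of training.

```python
import copy
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    """1x1 convolution initialized to zero, so the adapter contributes nothing at first."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ConditionEncoder(nn.Module):
    """Trainable copy of the backbone encoder whose multi-resolution features
    are injected into the frozen backbone through zero convolutions."""
    def __init__(self, backbone_encoder: nn.Module, feature_channels):
        super().__init__()
        self.encoder = copy.deepcopy(backbone_encoder)  # trainable copy
        self.zero_convs = nn.ModuleList([zero_conv(c) for c in feature_channels])

    def forward(self, condition_image: torch.Tensor):
        feats = self.encoder(condition_image)           # assumed: list of feature maps
        return [z(f) for z, f in zip(self.zero_convs, feats)]
```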

However, as shown in Figure 3, using two encoders (the U-Net encoder and the condition encoder) to process the noise map and the input image runs into challenges in the single-step setting. Unlike in multi-step diffusion models, the noise map in a single-step model directly controls the layout and pose of the generated image, which often conflicts with the structure of the input image. The decoder therefore receives two sets of residual features representing different structures, which makes training more difficult.

Direct conditional input. Figure 3 also shows that the structure of the image generated by the pre-trained model is heavily determined by the noise map z. Based on this insight, the study instead feeds the conditional input directly to the network. To adapt the backbone to the new condition, the study adds LoRA weights to various layers of the U-Net (see Figure 2).
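LoRA itself is simple to sketch: a frozen layer is augmented with a low-rank, trainable update. The wrapper below is a generic illustration; the rank and scaling are placeholder defaults, not the paper's settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a low-rank trainable update: W x + scale * (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # pre-trained weights stay frozen
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
```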

Preserve input details

The image encoder of a latent diffusion model (LDM) spatially compresses the input image by a factor of 8 while increasing the number of channels from 3 to 4, which speeds up training and inference of the diffusion model. While this design is efficient, it may not be ideal for image translation tasks that require preserving fine details of the input image. Figure 4 illustrates the problem: an input image of daytime driving (left) is converted to a corresponding nighttime driving image using an architecture without skip connections (center), and fine-grained details such as text, street signs, and distant cars are lost. In contrast, the translated image produced by an architecture with skip connections (right) preserves these intricate details much better.


To capture the fine-grained visual details of the input image, the study adds skip connections between the encoder and decoder networks (see Figure 2). Specifically, it extracts four intermediate activations, one after each downsampling block of the encoder, processes them through 1×1 zero-initialized convolutional layers, and feeds them into the corresponding upsampling blocks of the decoder. This ensures that intricate details are preserved during translation.
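The skip-connection idea can be illustrated with a toy encoder-decoder. The channel counts and block structure below are placeholders rather than the actual LDM autoencoder: each downsampling activation passes through a zero-initialized 1×1 convolution and is added to the features entering the matching decoder block.

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    """Toy encoder/decoder with zero-conv skip connections at matching resolutions."""
    def __init__(self, channels=(3, 32, 64, 128, 256)):
        super().__init__()
        n = len(channels) - 1
        self.down = nn.ModuleList([
            nn.Conv2d(channels[i], channels[i + 1], 3, stride=2, padding=1) for i in range(n)
        ])
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(channels[i + 1], channels[i], 4, stride=2, padding=1)
            for i in reversed(range(n))
        ])
        # 1x1 zero convolutions: the skips contribute nothing at initialization
        self.skips = nn.ModuleList([
            nn.Conv2d(channels[i + 1], channels[i + 1], 1) for i in range(n)
        ])
        for conv in self.skips:
            nn.init.zeros_(conv.weight)
            nn.init.zeros_(conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = []
        for down, skip in zip(self.down, self.skips):
            x = torch.relu(down(x))
            feats.append(skip(x))      # zero-conv'd activation, one per downsampling block
        for up, f in zip(self.up, reversed(feats)):
            x = up(x + f)              # inject the skip before the matching upsampling block
        return x
```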


Experiment

The study compares CycleGAN-Turbo with previous GAN-based unpaired image translation methods. Qualitatively, Figures 5 and 6 show that neither the GAN-based nor the diffusion-based baselines strike a good balance between output realism and preserving the structure of the input.


The study also compares CycleGAN-Turbo with CycleGAN and CUT. Tables 1 and 2 present quantitative comparisons on eight unpaired translation tasks.
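Comparisons like these are typically reported with FID, alongside a DINO-based structure distance. As a rough illustration, FID between a folder of translated outputs and a folder of real target-domain images can be computed with the clean-fid package; the directory paths below are placeholders, not the paper's data.

```python
# pip install clean-fid
from cleanfid import fid

# FID between translated outputs and real target-domain images (placeholder paths)
score = fid.compute_fid("outputs/day2night", "data/night_real")
print(f"FID: {score:.2f}")
```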


On simpler, object-centric datasets such as horse → zebra (Figure 13), CycleGAN and CUT perform well, achieving low FID and DINO-Structure scores. The proposed method slightly outperforms them on both the FID and DINO-Structure distance metrics.


As shown in Table 1 and Figure 14, on object-centric datasets such as horse → zebra, these editing methods can generate realistic zebras but struggle to accurately match the object's pose.

On the driving datasets, these editing methods perform notably worse, for three reasons: (1) the models struggle to generate complex scenes containing multiple objects; (2) these methods (except Instruct-pix2pix) must first invert the image into a noise map, which can introduce additional errors; and (3) the pre-trained models cannot synthesize street-view images resembling those captured in the driving datasets. Table 2 and Figure 16 show that on all four driving translation tasks, these methods produce lower-quality images and fail to follow the structure of the input image.


