Users accustomed to Stable Diffusion now have something new to try: a Matryoshka-style diffusion model from Apple.
In the era of generative AI, diffusion models have become a popular tool for applications such as image, video, 3D, audio, and text generation. However, extending diffusion models to high-resolution domains remains challenging, because the model must re-encode the full high-resolution input at every denoising step. Meeting this challenge requires deep architectures with attention blocks, which makes optimization harder and consumes more compute and memory.

Some recent work has investigated efficient network architectures for high-resolution images, but none of the existing methods has demonstrated results beyond 512×512 resolution, and their generation quality lags behind mainstream cascade or latent approaches. Consider OpenAI's DALL-E 2, Google's Imagen, and NVIDIA's eDiff-I: they save computation by learning a low-resolution model plus multiple super-resolution diffusion models, where each component is trained separately. The latent diffusion model (LDM), on the other hand, learns only a low-resolution diffusion model and relies on a separately trained high-resolution autoencoder. In both cases, the multi-stage pipeline complicates training and inference and often requires careful tuning of hyperparameters.

In this paper, researchers propose Matryoshka Diffusion Models (MDM), a new class of diffusion models for end-to-end high-resolution image generation. The code will be released soon.
Paper address: https://arxiv.org/pdf/2310.15111.pdf

The main idea of the work is to make the low-resolution diffusion process part of high-resolution generation by performing a joint diffusion process over multiple resolutions with a nested UNet architecture. The study finds that MDM together with the nested UNet architecture enables: 1) a multi-resolution loss that greatly improves the convergence speed of high-resolution denoising; 2) an efficient progressive training schedule that starts by training a low-resolution diffusion model and gradually adds high-resolution inputs and outputs according to the schedule. Experimental results show that combining the multi-resolution loss with progressive training achieves a better trade-off between training cost and model quality.

The study evaluates MDM on class-conditional image generation as well as text-conditional image and video generation. MDM allows high-resolution models to be trained without cascades or latent diffusion. Ablation studies show that both the multi-resolution loss and progressive training greatly improve training efficiency and quality. Below are some images and videos generated by MDM.
Method overview: the MDM diffusion model is trained end to end at high resolution while exploiting the hierarchical structure of the data. MDM first generalizes the standard diffusion model to an extended space, then proposes a dedicated nested architecture and training procedure. First, consider how the standard diffusion model is generalized to the extended space. Unlike cascade or latent methods, MDM learns a single diffusion process with a hierarchical structure by introducing a multi-resolution diffusion process in an extended space. The details are shown in Figure 2 below.
Specifically, given a data point x ∈ R^N, the researchers define time-dependent latent variables z_t = {z_t^1, . . . , z_t^R} ∈ R^{N_1 + · · · + N_R}, where each z_t^r ∈ R^{N_r} corresponds to resolution r.
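To make the notation concrete, here is a minimal NumPy sketch of forming the nested latent z_t by noising the image at several resolutions. This is an illustration, not the paper's code: the `downsample` helper, the resolution list, and the single-scalar noise level `alpha_bar` are all assumptions.

```python
import numpy as np

def downsample(x, factor):
    # Average-pool a square image by an integer factor.
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def nested_latents(x, resolutions, alpha_bar, rng):
    # z_t = {z_t^1, ..., z_t^R}: a noisy copy of x at each resolution r,
    # using the usual forward process z = sqrt(ab) * x + sqrt(1 - ab) * eps.
    full = x.shape[0]
    zs = []
    for r in resolutions:
        x_r = x if r == full else downsample(x, full // r)
        eps = rng.standard_normal(x_r.shape)
        zs.append(np.sqrt(alpha_bar) * x_r + np.sqrt(1.0 - alpha_bar) * eps)
    return zs
```

For a 64×64 image and resolutions (16, 32, 64), this yields three latents of shapes 16×16, 32×32, and 64×64.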
The researchers argue that performing diffusion modeling in the extended space has two advantages. First, since we typically care only about the full-resolution output z_t^R during inference, all other intermediate resolutions z_t^r can be treated as additional latent variables, which enriches the complexity of the modeled distribution. Second, the multi-resolution dependencies open up opportunities to share weights and computation across the z_t^r, redistributing computation in a more efficient way and enabling efficient training and inference.

Next, consider how the nested architecture (NestedUNet) works. As in typical diffusion models, the researchers implement MDM with a UNet structure, in which skip connections run in parallel with the computation blocks to preserve fine-grained input information; each computation block contains multiple convolution and self-attention layers. The key difference from a standard UNet is that a lower-resolution UNet is nested inside the higher-resolution one.
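A schematic NumPy sketch of the nesting idea follows. This is an illustration rather than the paper's actual NestedUNet code: the identity `block` stands in for the conv + self-attention blocks, and the pooling/upsampling and additive-merge choices are assumptions.

```python
import numpy as np

def avg_pool(x, k=2):
    # Halve spatial resolution by 2x2 average pooling.
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample(x, k=2):
    # Double spatial resolution by nearest-neighbor repetition.
    return np.repeat(np.repeat(x, k, axis=0), k, axis=1)

def block(x):
    # Placeholder for a conv + self-attention computation block.
    return x

def nested_unet(z, depth=3):
    # Each level processes its own resolution, recurses on a downsampled
    # copy (the inner, lower-resolution UNet), then merges the upsampled
    # inner output back through a skip-style addition.
    h = block(z)
    if depth > 1:
        h = h + upsample(nested_unet(avg_pool(h), depth - 1))
    return block(h)
```

An input of shape 16×16 with depth 3 recurses through 8×8 and 4×4 inner UNets and returns a 16×16 output.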
Besides being simpler than other hierarchical methods, NestedUNet also allows computation to be distributed in the most efficient way. As shown in Figure 3 below, the researchers' early exploration found that MDM scales significantly better when most parameters and computation are allocated at the lowest resolution.
The researchers train MDM at multiple resolutions with a conventional denoising objective, as shown in equation (3) below.
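The equation itself did not survive extraction; a plausible reconstruction, following standard denoising-diffusion losses and the multi-resolution notation above (the per-resolution weights $\omega_r$ and the x-prediction form are assumptions, not taken from the paper), is:

```latex
\mathcal{L}_\theta \;=\; \mathbb{E}_{t,\,\varepsilon}
\left[ \sum_{r=1}^{R} \omega_r \,
\big\| x_\theta^{\,r}(z_t, t) - x^{\,r} \big\|_2^2 \right] \tag{3}
```

That is, the usual denoising loss is summed over all R resolutions, so every level of the nested latent receives a training signal.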
Progressive training is used here. The researchers trained MDM end to end directly with the objective in equation (3) and demonstrated better convergence than baseline methods. They also found that a simple progressive training scheme, similar to the one proposed in the GAN literature, greatly accelerates the training of high-resolution models: it avoids costly high-resolution training from the start and speeds up overall convergence. In addition, they incorporated mixed-resolution training, which trains samples with different final resolutions within a single batch.

MDM is a general technique that can be applied to any problem whose input dimensions can be progressively compressed. A comparison of MDM with baseline approaches is shown in Figure 4 below.
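The progressive and mixed-resolution schedules described above can be sketched as follows. This is a hypothetical schedule: the resolution stages, stage lengths, and the uniform mixing rule are illustrative assumptions, not the paper's settings.

```python
import random

def progressive_schedule(resolutions, steps_per_stage):
    # Start with the lowest resolution only; after each stage, add the
    # next-higher resolution to the set being trained.
    active, step = [], 0
    for r in resolutions:
        active.append(r)
        for _ in range(steps_per_stage):
            yield step, tuple(active)
            step += 1

def mixed_resolution_batch(active, batch_size, rng):
    # Mixed-resolution training: each sample in the batch is assigned a
    # final resolution drawn from the currently active set.
    return [rng.choice(active) for _ in range(batch_size)]
```

With resolutions (64, 256, 1024) and two steps per stage, steps 0–1 train only at 64, steps 2–3 mix {64, 256}, and steps 4–5 mix all three.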
Table 1 gives the comparison results on ImageNet (FID-50K) and COCO (FID-30K).
Figures 5, 6, and 7 below show MDM's results on image generation (Figure 5), text-to-image generation (Figure 6), and text-to-video generation (Figure 7). Despite being trained on relatively small datasets, MDM shows strong zero-shot ability to generate high-resolution images and videos.
Interested readers can read the original text of the paper to learn more about the research content.
The above is the detailed content of "Apple's text-to-image model unveiled: Matryoshka-style diffusion supporting 1024×1024 resolution". For more information, please follow other related articles on the PHP Chinese website!