


In the standard UNet structure, the scaling coefficient on the long skip connection is generally 1.
However, several well-known diffusion-model works, such as Imagen, score-based generative models, and SR3, set this coefficient to 1/√2 and found that this setting effectively accelerates training of the diffusion model.
That said, the original papers give no specific analysis of this scaling operation; they only remark that it helps speed up training.
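To make the setting concrete, here is a minimal PyTorch sketch (with a hypothetical `combine_skip` helper; the shapes and channel counts are illustrative) of scaling a long skip feature by 1/√2 before merging it with the decoder feature, in the style of these models:

```python
import math
import torch

def combine_skip(decoder_feat, skip_feat, kappa=1 / math.sqrt(2)):
    """Scale the long-skip feature by kappa before concatenating it
    with the decoder feature (Imagen/SR3-style skip scaling)."""
    return torch.cat([decoder_feat, kappa * skip_feat], dim=1)

x = torch.randn(2, 64, 32, 32)   # decoder feature at some resolution
s = torch.randn(2, 64, 32, 32)   # matching encoder (skip) feature
out = combine_skip(x, s)         # channels double after concatenation
```

Setting `kappa=1` recovers the standard UNet; the whole change is a single multiplication on the skip branch.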
This purely empirical observation leaves several questions open. First, what role does this setting actually play? Second, must the coefficient be 1/√2, or would other constants work just as well?
Do skip connections at different locations have the same "status"? If not, why should they all share the same constant?
The authors had many such questions...
[Figure]
Generally speaking, compared with ResNet and Transformer architectures, UNet is not very "deep" in practice, so it is less prone to problems such as vanishing gradients that commonly afflict deeper networks.
Moreover, because of the particularity of the UNet structure, shallow features are carried directly to deep locations via long skip connections, which further mitigates vanishing gradients.
Turning the question around: if such a structure is left unchecked, could it instead lead to excessively large gradients and oscillation of parameters (features) during updates?
[Figure]
By visualizing the features and parameters of the diffusion model during training, one can indeed observe such instability.
Unstable parameters (features) affect the gradients, which in turn affect the parameter updates; this process carries a real risk of harming performance, so we need to find ways to control the instability.
Furthermore, for diffusion models specifically, the input to the UNet is a noisy image. If the model is expected to predict the added noise accurately, it must be strongly robust to additional perturbations of its input.
Theorem 3.1 shows that the oscillation range of the intermediate features (the width between the upper and lower bounds) is directly related to the sum of the squares of the scaling coefficients, so appropriately chosen coefficients help alleviate feature instability.
Note, however, that setting the coefficients directly to 0 would suppress the oscillation optimally (joking, of course).
The UNet would then degenerate into a skip-free network: the instability problem is solved, but the representational ability is lost as well. This is a trade-off between model stability and representational capability.
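The intuition behind this sum-of-squares dependence can be illustrated numerically: summing n independent unit-variance stand-in features, each scaled by κ, gives an aggregate whose variance tracks nκ², the sum of squared coefficients. A small NumPy sketch (the feature counts and sizes here are stand-ins, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_skips, dim = 12, 4096
# unit-variance stand-ins for the features carried by each long skip
feats = rng.standard_normal((n_skips, dim))

def aggregate_variance(kappa):
    """Empirical variance of the summed, kappa-scaled skip features."""
    return (kappa * feats).sum(axis=0).var()

for kappa in (1.0, 1 / np.sqrt(2), 0.5):
    # theory: variance grows with the sum of squared coefficients, n * kappa**2
    print(f"kappa={kappa:.3f}  empirical={aggregate_variance(kappa):.2f}  "
          f"theory={n_skips * kappa ** 2:.2f}")
```

Shrinking κ below 1 shrinks the aggregate's spread quadratically, which is the mechanism by which scaling tames feature oscillation.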
[Figure]
Similarly, from the perspective of parameter gradients, Theorem 3.3 reveals that the scaling coefficients also control the magnitude of the gradient.
[Figure]
Further, Theorem 3.4 reveals that scaling on the long skip connections also tightens the bound on the model's sensitivity to input perturbations, improving the stability of the diffusion model under input disturbances.
Becoming Scaling
The analysis above shows why scaling on the long skip connections matters for stable model training, and it applies whether the coefficients are fixed or allowed to differ across connections.
Next, we analyze what kind of scaling performs best. After all, the analysis so far only shows that scaling helps; it does not tell us which scaling is optimal, or even better than another.
A simple approach is to attach a learnable module to each long skip connection to adjust the scaling adaptively. We call this the Learnable Scaling (LS) method and use a SENet-like structure, as shown below (the U-ViT architecture considered here is very cleanly organized, thumbs up!).
[Figure]
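A minimal PyTorch sketch of the LS idea, assuming a SENet-style squeeze-and-excitation gate applied to each skip feature (the module layout and reduction ratio here are illustrative, not the paper's released code):

```python
import torch
import torch.nn as nn

class LearnableScaling(nn.Module):
    """SENet-style gate that learns a per-channel scaling coefficient
    for one long skip connection (sketch of the LS method)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # coefficients in (0, 1)
        )

    def forward(self, skip_feat):
        # scale the skip feature by its learned per-channel coefficient
        return skip_feat * self.gate(skip_feat)

ls = LearnableScaling(64)
s = torch.randn(2, 64, 32, 32)    # a skip feature
scaled = ls(s)                    # same shape, damped magnitude
```

Because the gate outputs values in (0, 1), the module can only attenuate the skip feature, which matches the stabilizing role the analysis assigns to scaling.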
Judging from the results in the paper, LS can indeed effectively stabilize the training of the diffusion model! Going further, we try to visualize the coefficients learned by LS.
As shown in the figure below, these coefficients exhibit an exponentially decaying trend (note that the first long skip connection here refers to the one connecting the two ends of the UNet), and the first coefficient is very close to 1, which is a striking observation!
[Figure]
Based on this series of observations (see the paper for details), we further propose the Constant Scaling (CS) method, which requires no learnable parameters: the i-th long skip connection is simply scaled by a constant κ^(i-1) that decays exponentially with the connection index.
Like the original 1/√2 scaling, the CS strategy introduces no additional parameters and therefore almost no extra computational cost.
Although CS does not stabilize training quite as well as LS most of the time, it is still well worth trying alongside existing strategies.
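A minimal sketch of the CS rule, assuming the i-th (0-indexed) skip feature is multiplied by κ^i (the base κ = 0.7 below is only an illustrative choice; see the paper for recommended values):

```python
import torch

def constant_scaling(skip_feats, kappa=0.7):
    """Scale the i-th long skip feature by kappa**i, an exponentially
    decaying constant (CS sketch; kappa is a hyperparameter and 0.7
    is illustrative, not the paper's prescribed value)."""
    return [(kappa ** i) * f for i, f in enumerate(skip_feats)]

# dummy skip features, ordered from the outermost connection inward
skips = [torch.randn(2, 64, 32, 32) for _ in range(3)]
scaled = constant_scaling(skips)
```

The first (outermost) connection keeps a coefficient of κ⁰ = 1, matching the near-1 first coefficient observed for LS, while deeper connections are damped progressively.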
Both CS and LS are very simple to implement, requiring only a few lines of code, though for the various fancy UNet variants out there the feature dimensions may need to be aligned (joking again).
Recently, follow-up work such as FreeU and SCEdit has also revealed the importance of scaling on skip connections. Everyone is welcome to try it out.
The above is the detailed content of "A few lines of code stabilize UNet! Sun Yat-sen University and others propose the ScaleLong diffusion model: from questioning Scaling to becoming Scaling."
