LCM: New way to generate high-quality images dramatically faster
Author: Mike Young
Thanks to the emergence of a new technique called the latent consistency model (LCM), AI is poised for a major breakthrough in converting text into images. Traditional approaches such as latent diffusion models (LDM) excel at generating detailed, creative images from text prompts, but their fatal drawback is speed: generating a single image with an LDM can require hundreds of steps, far too slow for many practical applications.

LCM changes the game by reducing the number of steps required to generate an image. Where an LDM needs hundreds of steps to painstakingly produce an image, an LCM can deliver results of similar quality in just 1 to 4 steps. To achieve this efficiency, LCM distills a pre-trained LDM into a more compact form, dramatically reducing the computing resources and time required. We will analyze a recent paper describing how this works.

The paper also introduces an innovation called LCM-LoRA, a universal Stable-Diffusion acceleration module. This module can be plugged into various fine-tuned Stable-Diffusion models without any additional training, making it a universally applicable tool for speeding up a wide range of image-generation tasks. We will also dissect this part of the paper.

1. Efficient training of LCM

Training neural networks governed by complex equations poses a huge challenge: it demands enormous computing power. The team behind this paper solved the problem with an ingenious method called distillation. Their approach works as follows: first, they trained a standard latent diffusion model (LDM) on a dataset of paired text and images. Once the LDM was up and running, they used it as a teacher to generate new training data, and then used that data to train a latent consistency model (LCM).
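The teacher-student recipe above can be sketched with a deliberately tiny toy model. This is not the paper's actual training code: the "teacher" here is a simple iterative refiner of a scalar, and the one-step "student" is a linear fit, both invented purely to illustrate the idea that a many-step process can be distilled into a single step.

```python
import random

random.seed(0)

# Toy "teacher": refines a noisy scalar toward a target over many small steps,
# standing in for an LDM's hundreds of denoising steps.
TARGET = 3.0

def teacher(x, steps=200, lr=0.05):
    for _ in range(steps):
        x = x + lr * (TARGET - x)   # one tiny refinement step
    return x

# Stage 1 stand-in: assume the teacher is already trained.
# Stage 2 (distillation): fit a ONE-step student y = a*x + b
# to (input, teacher-output) pairs generated by the teacher.
xs = [random.uniform(-10, 10) for _ in range(500)]
ys = [teacher(x) for x in xs]

# Closed-form least squares for the 1-D linear student.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
var = sum((x - mx) ** 2 for x in xs)
a = cov / var
b = my - a * mx

def student(x):
    return a * x + b                # a single step replaces 200 teacher steps

print(student(7.5), teacher(7.5))   # nearly identical outputs
```

In the real method, the student is a full neural network in latent space trained with a consistency objective against the teacher's sampling trajectory; the point the toy makes is only that the student never needs the original raw dataset, just the teacher's outputs.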
Most intriguingly, an LCM can inherit the capabilities of an LDM without being trained from scratch on huge datasets. What really stands out is the efficiency of the process: the researchers completed training of a high-quality LCM in approximately 32 hours on a single GPU. This matters because it is far faster and more practical than previous methods, meaning that many more people and projects, not just those with supercomputing resources, can now build such advanced models.

Figure 1. Overview of LCM-LoRA. By introducing LoRA into the distillation process of LCM, we significantly reduce the memory overhead of distillation, which allows us to train larger models, such as SDXL and SSD-1B, with limited resources. More importantly, the LoRA parameters ("acceleration vector") obtained from LCM-LoRA training can be directly combined with other LoRA parameters ("style vector") obtained by fine-tuning on a dataset for a particular style. Without any training, the model obtained by linearly combining the acceleration vector and the style vector gains the ability to generate images in that style with a minimal number of sampling steps.
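The memory savings that make LoRA-based distillation feasible on limited hardware come from replacing a full weight update with a low-rank factorization. The quick count below uses made-up layer dimensions for illustration, not the real sizes of SDXL or SSD-1B layers:

```python
# Why LoRA shrinks the memory footprint of fine-tuning/distillation:
# instead of learning a full d_out x d_in update to a weight matrix W,
# LoRA learns a low-rank update B @ A with A: (r x d_in) and B: (d_out x r),
# where the rank r is much smaller than d_in and d_out.
# (Dimensions below are illustrative, not any model's real layer sizes.)

d_in, d_out, rank = 4096, 4096, 16

full_update_params = d_out * d_in             # dense delta-W: 16,777,216
lora_params = rank * d_in + d_out * rank      # A and B factors: 131,072

print(full_update_params)
print(lora_params)
print(full_update_params // lora_params)      # 128x fewer trainable parameters
```

Only the small A and B factors need gradients and optimizer state during training, which is where the reduced memory overhead comes from.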
2. Results

This study demonstrates significant progress in AI image generation based on latent consistency models (LCM). LCM excels at creating high-quality 512×512 images in just four steps, a dramatic improvement over the hundreds of steps required by traditional models such as latent diffusion models (LDM). The images boast crisp details and realistic textures, which is particularly evident in the examples below.

Figure 2. The paper states: "Images generated with latent consistency models distilled from different pre-trained diffusion models. We used LCM-LoRA-SD-V1.5 to generate 512×512 resolution images, and LCM-LoRA-SDXL and LCM-LoRA-SSD-1B to generate 1024×1024 resolution images."
Not only can these models easily handle smaller images, they are also good at generating larger 1024×1024 images. They demonstrate an ability to scale to much larger neural networks than was previously possible, underscoring their adaptability. The examples in the paper (such as those from the LCM-LoRA-SD-V1.5 and LCM-LoRA-SSD-1B versions) illustrate the model's wide applicability across various datasets and practical scenarios.

3. Limitations
The current version of LCM has several limitations. The most important is the two-stage training process: an LDM is trained first and then used to train the LCM. Future research may explore more direct training methods that do not require an LDM at all. The paper also focuses mainly on unconditional image generation; conditional generation tasks (such as text-to-image synthesis) may require more work.
Latent consistency models (LCM) represent an important step toward quickly generating high-quality images. These models can produce results comparable to far slower LDMs in just 1 to 4 steps, potentially revolutionizing the practical application of text-to-image models. Although some limitations remain, particularly around the training process and the scope of generation tasks, LCM marks a significant advance in practical neural-network-based image generation. The examples provided highlight the potential of these models.
As mentioned in the introduction, the paper is divided into two parts. The second part discusses the LCM-LoRA technique, which enables fine-tuning of pre-trained models with less memory, thereby improving efficiency.
The key innovation here is the integration of LoRA parameters into LCM, producing a hybrid model that combines the advantages of both. This integration is particularly useful for creating images in a specific style or for responding to a specific task. By selecting and combining different sets of LoRA parameters, each fine-tuned for a unique style, the researchers created a versatile model that can generate images in a minimal number of steps with no additional training.
They demonstrated this in the paper by combining LoRA parameters fine-tuned for specific painting styles with the LCM-LoRA parameters. The combination allows 1024×1024 resolution images in different styles to be created at different sampling budgets (2, 4, 8, 16, and 32 steps). The results show that the combined parameters produce high-quality images without further training, highlighting the efficiency and versatility of the model.
One thing worth noting here is how the so-called "acceleration vector" (τLCM) and "style vector" (τ) are combined: a simple linear formula of the form λ1·τLCM + λ2·τ, where λ1 and λ2 are adjustable weighting factors. This combination yields a model that can quickly generate images in a custom style.
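The linear, training-free merge described above can be illustrated on toy parameter dictionaries. Real LoRA parameters are per-layer tensors; the flat dicts, layer names, and values below are made up purely to show the arithmetic:

```python
# Toy illustration of merging two LoRA parameter sets without any training:
# tau_combined = lambda1 * tau_lcm + lambda2 * tau_style, applied element-wise.
# (Names and values are invented; real parameters are large per-layer tensors.)

tau_lcm   = {"layer1": 0.20, "layer2": -0.10}   # "acceleration vector"
tau_style = {"layer1": 0.05, "layer2": 0.30}    # "style vector"

lambda1, lambda2 = 1.0, 0.8                      # adjustable mixing weights

tau_combined = {
    name: lambda1 * tau_lcm[name] + lambda2 * tau_style[name]
    for name in tau_lcm
}

print(tau_combined)
```

Because the merge is just element-wise arithmetic over existing parameters, no gradient steps are needed, which is why the combined model works "without any training."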
Figure 3 in the paper (shown below) demonstrates the effectiveness of this approach, showing the results of combining style-specific LoRA parameters with the LCM-LoRA parameters and confirming the model's ability to generate images in different styles quickly and efficiently.
Figure 3
In general, this section of the article highlights the versatility and efficiency of the LCM-LoRA model, which can quickly generate high-quality, style-specific images using very few computational resources. The technology has a wide range of applications and is expected to revolutionize image generation in everything from digital art to automated content creation.
We studied a new method, the latent consistency model (LCM), for speeding up the generation of images from text. Unlike traditional latent diffusion models (LDM), an LCM can generate images of similar quality in just 1 to 4 steps instead of hundreds. This dramatic efficiency gain is achieved through distillation: a pre-trained LDM is used to train the LCM, avoiding a large amount of computation.
In addition, we studied LCM-LoRA, an enhancement that uses low-rank adaptation (LoRA) to fine-tune pre-trained models while reducing memory requirements. This combined approach can create images in specific styles with minimal computational steps and no additional training.
Key results include LCM creating high-quality 512×512 and 1024×1024 images in just a few steps, where LDM requires hundreds. The current limitation is that LCM relies on a two-stage training process, so you still need an LDM to get started! Future research may simplify this process.
LCM is a very clever innovation, especially when combined with LoRA in the proposed LCM-LoRA model. It offers the advantage of creating high-quality images more quickly and efficiently, and I think it has broad application prospects in digital content creation.
Reference link: https://notes.aimodels.fyi/lcm-lora-a-new-method-for-generating-high-quality-images-much-faster/