


Not long after ChatGPT went viral, ControlNet quickly won over developers and ordinary users across the English- and Chinese-speaking internet; some users even claimed that ControlNet brought AI creation into the era of walking upright. It is no exaggeration to say that with ControlNet and the contemporaneous T2I-Adapter, Composer, and LoRA training techniques, controllable generation, the last high wall of AI creation, is very likely to see further breakthroughs in the foreseeable future, greatly reducing users' creation costs and improving the playfulness of creation. In just two weeks since ControlNet was open-sourced, its official repository has exceeded 10,000 Stars; such popularity is unprecedented.
At the same time, the open source community has greatly lowered the barrier to entry for users. For example, the Hugging Face platform provides base model weights and the general model training framework diffusers, stable-diffusion-webui has grown into a complete demo platform, and Civitai has contributed a large number of stylized LoRA weights.
Although webui is currently the most popular visualization tool, has quickly added support for various recently released generative models, and exposes many options for users, its focus on front-end ease of use means the code behind it is very complex and not developer-friendly. For example, although webui supports loading and inference for multiple model types, it can neither convert models between different frameworks nor support flexible model training. In community discussions, we discovered many pain points that existing open source code has not yet solved.
First, code frameworks are incompatible. Currently popular models such as ControlNet and T2I-Adapter are not compatible with diffusers, the mainstream Stable Diffusion training library, so ControlNet pre-trained models cannot be used directly in the diffusers framework.
Second, model loading is limited. Models are currently saved in various formats, such as .bin, .ckpt, .pth, and .safetensors. Unlike webui, the diffusers framework currently has only limited support for these formats. Given that most LoRA models are saved as safetensors, it is difficult for users to load LoRA weights directly into existing models trained with the diffusers framework.
Third, the base model is limited. ControlNet and T2I-Adapter are currently trained on Stable-Diffusion-1.5, and only the model weights under SD1.5 are open-sourced. For specific scenarios, high-quality animation models such as anything-v4 and ChilloutMix already exist, but even with controllable information introduced, the final generated results are still limited by the capabilities of the SD1.5 UNet.
Finally, model training is limited. LoRA has been widely verified as one of the most effective methods for style transfer and for preserving a specific image IP. However, the diffusers framework currently only supports LoRA embedding for the UNet and cannot support it for the text encoder, which limits LoRA training.
After discussing with the open source community, we learned that the diffusers framework, as a general code library, plans to adapt to the recently released generative models; but since this involves rewriting many underlying interfaces, the updates will still take some time. We therefore started from the practical problems above and took the lead in proposing our own solution for each one, to help developers develop more easily.
## Full adaptation solution for LoRA, ControlNet, and T2I-Adapter to diffusers

### LoRA for diffusers
This solution flexibly embeds LoRA weights in various formats into the diffusers framework, i.e., into models saved by diffusers-based training. Since LoRA training usually freezes the base model, the LoRA weights can be embedded into existing models as a pluggable module imposing style or IP constraints. LoRA itself is a general training technique: through low-rank decomposition, it greatly reduces the number of parameters in a module. In image generation, it is generally used to train pluggable modules independent of the base model, which at inference time are merged with the base model's output in the form of residuals.

The first step is embedding the LoRA weights. The weights provided on the Civitai platform are mainly stored in ckpt or safetensors format, and fall into two cases.

(1) Full model (base model + LoRA module). If the full model is in safetensors format, it can be converted with the first diffusers script below; if it is in ckpt format, it can be converted with the second. After conversion, the model can be loaded directly with the diffusers API.

(2) LoRA only (containing only the LoRA module). diffusers officially cannot load LoRA-only weights yet, while LoRA weights on open source platforms are mostly stored in this form. Loading them essentially requires remapping the keys in the LoRA weights to fit the diffusers model. We therefore support this feature ourselves and provide a conversion script: you only need to specify the model in diffusers format and the LoRA weights stored in safetensors format, and an example conversion is provided. In addition, thanks to its lightweight design, LoRA can be trained quickly on small datasets and embedded into other networks.
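The low-rank residual idea described above can be illustrated with a minimal numpy sketch (dimensions and the merge ratio here are made-up illustrations, not diffusers code):

```python
import numpy as np

# Minimal sketch of the LoRA idea: a frozen base weight W is augmented
# by a low-rank residual B @ A, merged with the base output as a residual.
d, k, r = 8, 8, 2                      # layer dims and LoRA rank, r << min(d, k)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))        # frozen base weight
A = rng.standard_normal((r, k))        # trainable down-projection
B = np.zeros((d, r))                   # trainable up-projection, zero-initialized
x = rng.standard_normal(k)

# at initialization the residual is zero, so the base model is unchanged
assert np.allclose(W @ x + B @ (A @ x), W @ x)

B = 0.1 * rng.standard_normal((d, r))  # pretend training has updated B
alpha = 0.75                           # merge ratio, as in the conversion script
y = W @ x + alpha * (B @ (A @ x))      # pluggable use: residual added at runtime

# "merging" instead bakes the residual into the base weight once
W_merged = W + alpha * (B @ A)
assert np.allclose(W_merged @ x, y)
```

Because the rank r is much smaller than the layer dimensions, the trainable parameters (A and B) are a small fraction of the frozen base weight, which is what makes LoRA modules cheap to train and easy to distribute.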
Converting a full model in safetensors or ckpt format:

```shell
# full model saved as safetensors
python ./scripts/convert_original_stable_diffusion_to_diffusers.py \
    --checkpoint_path xxx.safetensors --dump_path save_dir --from_safetensors

# full model saved as ckpt
python ./scripts/convert_original_stable_diffusion_to_diffusers.py \
    --checkpoint_path xxx.ckpt --dump_path save_dir
```

Loading the converted model with the diffusers API:

```python
import torch
from diffusers import StableDiffusionPipeline

# load from the local conversion output directory
pipeline = StableDiffusionPipeline.from_pretrained(save_dir, torch_dtype=torch.float32)
# or load a model by its id
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)
```

Converting LoRA-only weights stored as safetensors:

```python
from safetensors.torch import load_file

model_path = "onePieceWanoSagaStyle_v2Offset.safetensors"
state_dict = load_file(model_path)
```

```shell
# the default merging ratio is 0.75; you can set it manually
python convert_lora_safetensor_to_diffusers.py
```

To go beyond the existing LoRA weights, we support multi-module (UNet + text encoder) LoRA training in the diffusers framework and have submitted a PR to the official code base (https://github.com/huggingface/diffusers/pull/2479); we also support training LoRA in ColossalAI. The code is open source at: https://github.com/haofanwang/Lora-for-Diffusers
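The key remapping that LoRA-only checkpoints need can be sketched with a toy example (the key names and mapping rules here are simplified assumptions; the real conversion script handles many more patterns):

```python
# Toy sketch of remapping webui/Civitai-style LoRA keys onto
# diffusers-style module paths. Key names are illustrative only.
def remap_lora_key(key: str) -> str:
    key = key.replace("lora_unet_", "unet.")
    key = key.replace("lora_te_", "text_encoder.")
    return key

ckpt_keys = [
    "lora_unet_mid_block.lora_down.weight",
    "lora_te_text_model.lora_up.weight",
]
remapped = [remap_lora_key(k) for k in ckpt_keys]
assert remapped == [
    "unet.mid_block.lora_down.weight",
    "text_encoder.text_model.lora_up.weight",
]
```

Once every key is mapped onto a module path the diffusers model recognizes, the LoRA tensors can be matched against the model's state dict and merged in.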
### ControlNet for diffusers

This solution supports using ControlNet in the diffusers framework. Building on attempts from the open source community, we provide a complete ControlNet + Anything-V3 use case, supporting replacing the base model from the original SD1.5 with the anything-v3 model, so that ControlNet gains better animation generation capabilities. In addition, we also support ControlNet Inpainting, providing a pipeline adapted to diffusers, as well as Multi-ControlNet for multi-condition control.
The code is open source at: https://github.com/haofanwang/ControlNet-for-Diffusers
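The multi-condition idea behind Multi-ControlNet can be sketched roughly as follows (a toy numpy illustration with made-up names and numbers, not the actual diffusers pipeline): each control branch contributes residual features that are scaled by a per-condition weight and summed into the UNet features.

```python
import numpy as np

# Toy illustration of multi-condition control: each ControlNet branch
# produces a residual for the UNet features, weighted by its own
# conditioning scale; the residuals are simply summed.
unet_features = np.zeros(4)
residuals = {
    "canny": np.array([1.0, 0.0, 0.0, 0.0]),
    "pose":  np.array([0.0, 2.0, 0.0, 0.0]),
}
scales = {"canny": 1.0, "pose": 0.5}

controlled = unet_features + sum(scales[n] * residuals[n] for n in residuals)
assert np.allclose(controlled, [1.0, 1.0, 0.0, 0.0])
```

Lowering one condition's scale weakens only that condition's influence, which is what makes per-condition weighting useful when combining, say, an edge map with a pose skeleton.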
### T2I-Adapter for diffusers
The code is open source at: https://github.com/haofanwang/T2I-Adapter-for-Diffusers

The three adaptation solutions above have been open-sourced to the community and officially acknowledged in the ControlNet and T2I-Adapter repositories respectively; they have also received thanks from the author of stable-diffusion-webui-colab. We are in ongoing discussions with the diffusers maintainers and will integrate the solutions above into the official code base in the near future. You are welcome to try our work in advance; if you have any questions, feel free to open an issue and we will reply as soon as possible.
The above is the detailed content of A complete set of tutorials for adapting the Diffusers framework is here! From T2I-Adapter to the popular ControlNet. For more information, please follow other related articles on the PHP Chinese website!
