# Accelerating diffusion models: generate SOTA-level images in as few as 1 step, ByteDance's Hyper-SD is open source

Recently, diffusion models have made significant progress in image generation, bringing unprecedented opportunities to both image and video generation tasks. Despite the impressive results, the multi-step iterative denoising inherent in the inference process of diffusion models incurs high computational costs. A series of diffusion model distillation algorithms have recently emerged to accelerate this inference process. These methods can be roughly divided into two categories: (i) trajectory-preserving distillation and (ii) trajectory-reconstruction distillation. However, both categories are limited, either by a low performance ceiling or by changes in the output domain.

To solve these problems, the ByteDance technical team proposed a trajectory segmented consistency model called Hyper-SD. Hyper-SD's open-source release has also been recognized by Hugging Face CEO Clem Delangue.


This model is a novel diffusion model distillation framework that combines the advantages of trajectory-preserving and trajectory-reconstruction distillation, compressing the number of denoising steps while maintaining near-lossless performance. Compared with existing diffusion model acceleration algorithms, this method achieves excellent acceleration results. Verified by extensive experiments and user evaluations, Hyper-SD achieves SOTA-level image generation in 1 to 8 steps on both the SDXL and SD1.5 architectures.


  • Project homepage: https://hyper-sd.github.io/

  • Paper link: https://arxiv.org/abs/2404.13686

  • Huggingface Link: https://huggingface.co/ByteDance/Hyper-SD

  • Single-step generated Demo link: https://huggingface.co/spaces/ByteDance/Hyper-SDXL-1Step-T2I

  • Real-time drawing board Demo link: https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble

## Introduction
Existing distillation methods for diffusion model acceleration can be roughly divided into two categories: trajectory-preserving distillation and trajectory-reconstruction distillation. Trajectory-preserving distillation aims to maintain the original trajectory of the ordinary differential equation (ODE) corresponding to the diffusion process; it reduces inference steps by forcing the distilled model and the original model to produce similar outputs. However, although acceleration is achieved, such methods may degrade generation quality due to limited model capacity and the inevitable errors of training and fitting. In contrast, trajectory-reconstruction methods directly use the endpoints of the trajectory, or real images, as the main source of supervision and ignore the trajectory's intermediate steps. They reduce the number of inference steps by reconstructing more efficient trajectories, exploring the model's potential within a limited number of steps and freeing it from the constraints of the original trajectory. However, this often makes the output domain of the accelerated model inconsistent with that of the original model, yielding suboptimal results.

This paper proposes a trajectory segmented consistency model (Hyper-SD for short) that combines the advantages of the trajectory-preservation and trajectory-reconstruction strategies. Specifically, the algorithm first introduces trajectory segmented consistency distillation to enforce consistency within each segment and gradually reduces the number of segments to achieve full-time consistency. This strategy addresses the suboptimal performance of consistency models caused by insufficient model fitting capacity and the accumulation of inference errors. Subsequently, the algorithm uses human feedback learning (RLHF) to improve generation quality, compensating for the loss incurred during acceleration and making the model better suited to low-step inference. Finally, the algorithm uses score distillation to enhance one-step generation performance, and realizes an idealized, full-time-step-consistent diffusion model through a unified LoRA, achieving excellent generation results.

## Method

1. Trajectory segmentation consistency distillation

Consistency Distillation (CD) [24] and the Consistency Trajectory Model (CTM) [4] both aim to transform the diffusion model into a consistency model over the entire time range [0, T] through single-stage distillation. However, these distilled models often fail to reach optimality due to limits on model fitting capacity. Inspired by the soft consistency target introduced in CTM, we refine the training process by dividing the whole time range [0, T] into k segments and performing piecewise consistency model distillation step by step.

In the first stage, we set $k=8$ and initialize both the student model $f_\theta$ and its EMA counterpart $f_{\theta^-}$ from the original diffusion model. The starting time step $t_{n+1}$ is uniformly randomly sampled within a segment $[T_m, T_{m+1}]$. We then sample the end time step $s \in [T_m, t_{n+1})$, where the segment boundary $T_m$ is calculated as follows:

$$T_m = \frac{m}{k}\,T, \qquad m \in \{0, 1, \dots, k-1\}$$

The training loss is calculated as follows:

$$\mathcal{L}_{\mathrm{TSCD}} = \mathbb{E}\left[\, d\!\left( f_\theta(x_{t_{n+1}}, t_{n+1}, s),\; f_{\theta^-}(\hat{x}^{\phi}_{t_n}, t_n, s) \right) \right]$$

where $\hat{x}^{\phi}_{t_n}$ is calculated by formula 3 in the paper (a step of the ODE solver) and $f_{\theta^-}$ denotes the exponential moving average (EMA) of the student model.
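As a minimal sketch, the uniform segment boundaries can be computed as follows (assuming, as described above, $k$ equal segments over $[0, T]$; the schedule length T = 1000 is only an example):

```python
def segment_boundaries(k: int, T: float) -> list[float]:
    """Boundaries T_m = (m / k) * T for m = 0..k, splitting [0, T]
    into k equal segments for piecewise consistency distillation."""
    return [T * m / k for m in range(k + 1)]

# k = 8 segments over a 1000-step schedule
print(segment_boundaries(8, 1000.0)[:3])  # -> [0.0, 125.0, 250.0]
```

During training, the start step $t_{n+1}$ is drawn inside one of these segments, and the end step $s$ is drawn between the segment's lower boundary and $t_{n+1}$, so consistency is only enforced within the segment.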

Subsequently, we restore the model weights from the previous stage and continue training, gradually reducing k to 4, 2, and 1. It is worth noting that k=1 corresponds to the standard CTM training scheme. For the distance metric d, we employ a mixture of adversarial loss and mean squared error (MSE) loss. In experiments, we observed that the MSE loss is more effective when the predicted and target values are close (e.g., for k=8, 4), whereas the adversarial loss becomes more precise as the difference between the predicted and target values grows (e.g., for k=2, 1). Therefore, we dynamically increase the weight of the adversarial loss and decrease the weight of the MSE loss over the course of training. In addition, we integrate a noise perturbation mechanism to enhance training stability. Take the two-stage Trajectory Segmented Consistency Distillation (TSCD) process as an example: as shown in the figure below, in the first stage we perform independent consistency distillation within the $[0, T/2]$ and $[T/2, T]$ time segments, and then perform global consistency trajectory distillation based on the results of these two segments.

(Figure: illustration of the two-stage TSCD process)
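The dynamic reweighting between the MSE and adversarial terms can be sketched as follows. The linear schedule across the four stages is an illustrative assumption; the text only states that the adversarial weight increases while the MSE weight decreases:

```python
def loss_weights(stage: int, num_stages: int = 4) -> dict:
    """Linearly shift weight from the MSE loss toward the adversarial loss
    as training progresses through the stages k = 8, 4, 2, 1.
    (The linear schedule is an illustrative assumption.)"""
    frac = stage / (num_stages - 1)  # 0.0 at k=8, 1.0 at k=1
    return {"mse": 1.0 - frac, "adv": frac}

def distance(pred, target, adv_score, stage, num_stages=4):
    """Distance metric d: weighted mix of MSE and an adversarial term.
    `adv_score` stands in for a discriminator-based loss (placeholder)."""
    w = loss_weights(stage, num_stages)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return w["mse"] * mse + w["adv"] * adv_score
```

At stage 0 (k=8) the metric is pure MSE; by the final stage (k=1) it is purely adversarial.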

The complete algorithm process is as follows:

(Algorithm: complete TSCD training procedure; see the paper for the pseudocode.)
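A highly simplified skeleton of the staged procedure described above. All model updates are omitted; only the control flow (per-segment time-step sampling and the stage schedule k = 8, 4, 2, 1) is shown, with the sampling ranges assumed as in the formulas:

```python
import random

def tscd_train(steps_per_stage=4, stages=(8, 4, 2, 1), T=1000.0, seed=0):
    """Toy skeleton of Trajectory Segmented Consistency Distillation:
    for each stage, consistency is enforced only inside the current
    segments, and the number of segments shrinks until k = 1
    (full-range consistency). Model math is replaced by placeholders."""
    rng = random.Random(seed)
    log = []
    for k in stages:
        for _ in range(steps_per_stage):
            m = rng.randrange(k)                   # pick a segment
            lo, hi = T * m / k, T * (m + 1) / k    # its boundaries
            t = rng.uniform(lo, hi)                # start time step
            s = rng.uniform(lo, t)                 # end time step in [lo, t)
            log.append((k, lo <= s <= t <= hi))
    return log

history = tscd_train()
print(len(history))  # 16 sampling events: 4 stages x 4 steps
```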

2. Human feedback learning

In addition to distillation, we further incorporate feedback learning to improve the performance of the accelerated diffusion model. Specifically, we improve the generation quality of accelerated models by leveraging human aesthetic preferences and feedback from existing visual perception models. For aesthetic feedback, we utilize the LAION aesthetic predictor and the aesthetic preference reward model provided in ImageReward to guide the model to generate more aesthetic images, as follows:

$$\mathcal{L}_{\mathrm{aes}} = \sum_{d} \mathrm{ReLU}\!\left( \alpha_d - r_d(x_0', c) \right)$$

where $r_d$ is an aesthetic reward model (the LAION aesthetic predictor or the ImageReward model), $c$ is the text prompt, and $\alpha_d$ is a threshold that, together with the ReLU function, acts as a hinge loss. Beyond aesthetic-preference feedback, we note that existing visual perception models, which embed rich prior knowledge about images, can also serve as good feedback providers. Empirically, we find that instance segmentation models can guide the model to generate well-structured objects. Specifically, we first diffuse an image $x_0$ in the latent space to a noisy latent $x_t$; then, similar to ImageReward, we perform iterative denoising down to a specific time step $t_a$ and directly predict $x_0'$. We then leverage an instance segmentation model to evaluate structure generation by examining the difference between the instance segmentation annotations of real images and the instance segmentation predictions on denoised images, as follows:

$$\mathcal{L}_{\mathrm{percep}} = \mathcal{L}_{\mathrm{instance}}\!\left( m_I(x_0'),\; \mathrm{GT}(x_0) \right)$$

where $m_I$ is the instance segmentation model (such as SOLO). Instance segmentation models can more accurately capture the structural defects of generated images and provide more targeted feedback signals. It is worth noting that, besides instance segmentation models, other perceptual models are applicable as well; these perceptual models serve as complementary feedback to subjective aesthetics, focusing more on objective generation quality. The overall feedback objective used to optimize the diffusion model can therefore be defined as:

$$\mathcal{L}_{\mathrm{feedback}} = \mathcal{L}_{\mathrm{aes}} + \mathcal{L}_{\mathrm{percep}}$$
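The feedback objectives above can be sketched with toy stand-ins. The hinge form follows the text; the IoU-based perceptual term is an illustrative proxy for a real instance-segmentation loss:

```python
def aesthetic_hinge_loss(rewards, thresholds):
    """L_aes: ReLU hinge -- a sample is penalized only when its reward r_d
    falls below the threshold alpha_d (one term per reward model d)."""
    return sum(max(0.0, a - r) for r, a in zip(rewards, thresholds))

def mask_iou(pred, gt):
    """Intersection-over-union of two binary masks (flat 0/1 lists)."""
    inter = sum(p & g for p, g in zip(pred, gt))
    union = sum(p | g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def perceptual_loss(pred_masks, gt_masks):
    """L_percep stand-in: 1 - mean IoU between the segmentation of the
    denoised image and the annotation of the real image. (The paper uses
    an instance-segmentation training loss; IoU is an illustrative proxy.)"""
    ious = [mask_iou(p, g) for p, g in zip(pred_masks, gt_masks)]
    return 1.0 - sum(ious) / len(ious)

def feedback_loss(rewards, thresholds, pred_masks, gt_masks):
    """L_feedback = L_aes + L_percep."""
    return aesthetic_hinge_loss(rewards, thresholds) + perceptual_loss(pred_masks, gt_masks)
```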

3. One-step generation enhancement

Due to the inherent limitations of the consistency loss, one-step generation within the consistency-model framework is not ideal. As analyzed in CM, the consistency-distilled model shows excellent accuracy in guiding toward the trajectory endpoint $x_0$ from the position $x_T$. Therefore, score distillation is a suitable and effective way to further improve the one-step generation of our TSCD model. Specifically, we advance one-step generation through an optimized Distribution Matching Distillation (DMD) technique. DMD enhances the model's output using two different score functions: that of the teacher model's distribution $p_{\mathrm{real}}$ and that of a fake model's distribution $p_{\mathrm{fake}}$. We combine a mean squared error (MSE) loss with score-based distillation to improve training stability. In this process, the aforementioned human feedback learning techniques are also integrated to fine-tune our model to effectively generate images with high fidelity.
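A toy sketch of the DMD-style update: the gradient is driven by the difference between the fake and real score estimates, anchored by an MSE term for stability. The update form, weights, and learning rate are illustrative assumptions, not the paper's exact formulation:

```python
def dmd_gradient(score_real, score_fake):
    """Per-element DMD gradient: s_fake - s_real pushes the generator's
    output distribution toward the teacher's real distribution."""
    return [sf - sr for sr, sf in zip(score_real, score_fake)]

def one_step_update(x, score_real, score_fake, mse_target, w_mse=0.5, lr=0.1):
    """One gradient step on the one-step generator's output: the score
    difference plus the gradient of w_mse * ||x - target||^2 (stability
    anchor). All inputs are flat lists of floats (toy stand-ins)."""
    grad = dmd_gradient(score_real, score_fake)
    return [
        xi - lr * (g + w_mse * 2.0 * (xi - t))  # d/dx of w * (x - t)^2
        for xi, g, t in zip(x, grad, mse_target)
    ]
```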

By integrating these strategies, our method not only achieves excellent low-step inference on both SD1.5 and SDXL (and without Classifier-Guidance), but also realizes an ideal global consistency model: instead of training a separate UNet or LoRA for each specific number of steps, it implements a unified low-step inference model.


## Experiment


Quantitative comparison with existing acceleration algorithms on SD1.5 and SDXL shows that Hyper-SD significantly outperforms the current state-of-the-art methods.


In addition, Hyper-SD can perform inference at a variety of low step counts with a single model; the quantitative metrics above also show the effectiveness of our method under unified-model inference.


Visualizations of the acceleration effect on SD1.5 and SDXL intuitively demonstrate the superiority of Hyper-SD in accelerating diffusion model inference.


Extensive user studies also show the superiority of Hyper-SD over various existing acceleration algorithms.


The acceleration LoRA trained by Hyper-SD is well compatible with text-to-image base models of different styles.


At the same time, Hyper-SD's LoRA can also work with existing ControlNets to achieve high-quality controllable image generation at low step counts.

## Summary

This paper proposes Hyper-SD, a unified diffusion model acceleration framework that significantly improves the generation ability of diffusion models at low step counts, achieving new SOTA performance on both SDXL and SD1.5. The method uses trajectory segmented consistency distillation to strengthen trajectory preservation during distillation and achieve generation quality close to the original model. It then further leverages human feedback learning and variational score distillation to improve the model's potential at extremely low step counts, yielding a more optimized and efficient model. The paper also open-sources LoRA plugins for 1- to 8-step inference on SDXL and SD1.5, as well as a dedicated one-step SDXL model, aiming to further promote the development of the generative AI community.


Statement: This article is reproduced from 机器之心 (Machine Heart).