
ByteDance Doubao's new image tokenizer: only 32 tokens are needed to generate an image, with generation up to 410x faster
The AIxiv column is where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 reports covering top laboratories from major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
Image tokenization plays a central role in the rapid development of generative models, for example the VAE that diffusion models rely on, or the VQGAN that Transformer-based models rely on. These tokenizers encode an image into a more compact latent space, making it far more efficient to generate high-resolution images.

However, existing tokenizers usually map the input image to a downsampled 2D grid in latent space. This design implicitly constrains the mapping between tokens and the image, making it difficult to effectively exploit the redundancy in images (adjacent regions often share similar features, for example) to obtain a more efficient encoding.

To solve this problem, the ByteDance Doubao large model team and the Technical University of Munich propose a new 1D image tokenizer: TiTok. It breaks the design limitations of 2D tokenizers and can compress an entire image into a much more compact token sequence.

  • Paper link: https://arxiv.org/abs/2406.07550
  • Project link: https://yucornetto.github.io/projects/titok.html
  • Code link: https://github.com/bytedance/1d-tokenizer

For a 256 x 256 image, TiTok needs as few as 32 tokens, a significant reduction from the 256 or 1024 tokens typically required by 2D tokenizers. For a 512 x 512 image, TiTok needs as few as 64 tokens, 64x fewer than Stable Diffusion's VAE tokenizer. Moreover, on ImageNet image generation, using TiTok as the tokenizer significantly improves both generation quality and generation speed.
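
To make these counts concrete, here is some quick token-count arithmetic (our own illustrative check, not taken from the paper): a typical 2D tokenizer with downsampling factor f turns an H x W image into an (H/f) x (W/f) grid of tokens. The downsampling factors below are common choices and are assumptions.

    # Illustrative token-count arithmetic for 2D tokenizers vs. TiTok.
    def tokens_2d(height, width, downsample):
        """Token count for a 2D tokenizer that downsamples by `downsample`."""
        return (height // downsample) * (width // downsample)

    print(tokens_2d(256, 256, 16))       # 256 tokens  (f = 16, VQGAN-style)
    print(tokens_2d(256, 256, 8))        # 1024 tokens (f = 8)
    print(tokens_2d(512, 512, 8))        # 4096 latents (f = 8, SD-VAE-style)
    print(tokens_2d(512, 512, 8) // 64)  # 64 -> the 64x reduction vs. TiTok's 64 tokens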

At 256 x 256 resolution, TiTok achieves an FID of 1.97, clearly beating MaskGIT's 4.21 with the same generator. At 512 x 512 resolution, TiTok reaches an FID of 2.74, which not only surpasses DiT (3.04) but also generates images an astonishing 410x faster than DiT. TiTok's best variant achieves an FID of 2.13, still significantly better than DiT while delivering a 74x speedup.

TiTok greatly reduces the number of tokens required to represent an image, resulting in significantly faster generation while maintaining high-quality image generation.

Model structure

TiTok's structure is very simple: the encoder and decoder are each a ViT. During encoding, a set of latent tokens is concatenated after the image patches. After passing through the encoder, only the latent tokens are kept and quantized. The quantized latent tokens are then concatenated with a set of mask tokens and fed to the decoder, which reconstructs the image from this mask-token sequence.
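
The flow described above can be summarized in a minimal sketch (a rough PyTorch approximation written for illustration; the module sizes, names, and simplified quantization are our assumptions, not the official implementation, which is available at https://github.com/bytedance/1d-tokenizer):

    import torch
    import torch.nn as nn

    class TiTokSketch(nn.Module):
        def __init__(self, image_size=256, patch=16, dim=512, num_latent=32,
                     codebook_size=4096, depth=4, heads=8):
            super().__init__()
            self.num_patches = (image_size // patch) ** 2
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            self.latent_tokens = nn.Parameter(torch.randn(num_latent, dim))  # learned 1D latents
            self.mask_token = nn.Parameter(torch.randn(1, 1, dim))           # shared mask token
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, heads, batch_first=True), depth)
            self.decoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, heads, batch_first=True), depth)
            self.codebook = nn.Embedding(codebook_size, dim)                 # VQ codebook
            self.to_pixels = nn.ConvTranspose2d(dim, 3, kernel_size=patch, stride=patch)

        def quantize(self, z):
            # Nearest-neighbor codebook lookup; the straight-through estimator
            # and VQ losses needed for training are omitted for brevity.
            book = self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1)
            ids = torch.cdist(z, book).argmin(dim=-1)     # (B, num_latent) discrete ids
            return self.codebook(ids), ids

        def forward(self, img):                           # img: (B, 3, 256, 256)
            b = img.size(0)
            patches = self.patch_embed(img).flatten(2).transpose(1, 2)   # (B, P, D)
            latents = self.latent_tokens.unsqueeze(0).expand(b, -1, -1)  # (B, 32, D)
            encoded = self.encoder(torch.cat([patches, latents], dim=1))
            z = encoded[:, self.num_patches:]             # keep only the latent tokens
            z_q, ids = self.quantize(z)
            masks = self.mask_token.expand(b, self.num_patches, -1)
            decoded = self.decoder(torch.cat([masks, z_q], dim=1))[:, :self.num_patches]
            side = int(self.num_patches ** 0.5)
            grid = decoded.transpose(1, 2).reshape(b, -1, side, side)    # (B, D, 16, 16)
            return self.to_pixels(grid), ids              # reconstruction + 32 token ids

A real tokenizer would also be trained with reconstruction, perceptual, adversarial, and VQ commitment losses; the sketch only captures the data flow: image patches plus latent tokens go in, 32 quantized tokens come out, and mask tokens plus the quantized tokens go back to pixels.
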
Study on the properties of 1D Tokenization

The researchers conducted a series of experiments on the number of tokens used to represent an image, tokenizer size, reconstruction quality, generation quality, linear-probing accuracy, and training and inference speed. They found that: (1) as few as 32 tokens already yield good reconstruction and generation results; (2) increasing the tokenizer's model size allows fewer tokens to represent an image; (3) when fewer tokens are used to represent an image, the tokenizer learns stronger semantic information; (4) when fewer tokens are used, training and inference speed improve significantly.
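
Finding (4) is easy to rationalize: in a Transformer-based generator, self-attention cost grows quadratically with sequence length, so shrinking the representation from 1024 or 256 tokens to 32 shrinks the dominant term dramatically. A back-of-the-envelope estimate (our own illustrative arithmetic, with assumed width and depth, not a measurement from the paper):

    # Rough self-attention cost vs. token count. Per layer, the QK^T and
    # attention-value products each cost about 2 * L^2 * D multiply-adds
    # for sequence length L and width D.
    def attention_cost(seq_len, dim=768, layers=24):
        return layers * 2 * (2 * seq_len ** 2 * dim)

    base = attention_cost(32)
    for n in (32, 256, 1024):
        print(f"{n:5d} tokens -> {attention_cost(n) / base:6.0f}x attention cost")
    # 32 -> 1x, 256 -> 64x, 1024 -> 1024x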

In addition, the accompanying video shows images reconstructed with different tokenizer sizes and token counts. A larger tokenizer can reconstruct better-quality images from a limited token budget, and when only a few tokens are available, the model tends to preserve salient regions, yielding better reconstructions.

Experimental verification

The researchers mainly compared TiTok with other methods on ImageNet-1k at 256 x 256 and 512 x 512 resolution. Although TiTok uses far fewer tokens, it achieves reconstruction quality (rFID) comparable to methods that use many more. At the same time, the smaller token count lets TiTok maintain high generation quality (gFID) while generating images significantly faster than other methods.

For example, TiTok-L-32 achieves a gFID of 2.77 and generates images at 101.6 images per second, significantly faster than diffusion models (169x faster than DiT) and Transformer models (339x faster than ViT-VQGAN).

TiTok's advantage of using fewer tokens is even more pronounced at higher resolutions: TiTok-L-64 can reconstruct and generate high-quality 512 x 512 images using only 64 tokens. The generated images are not only higher quality than DiT's (FID 2.74 vs. 3.04), but generation is also nearly 410x faster.

Conclusion

In this article, the researchers propose a new 1D image tokenizer, TiTok, that breaks the limitations of existing 2D tokenizers and makes better use of the redundancy in images. TiTok needs only a small number of tokens (e.g., 32) to represent an image while still supporting high-quality image reconstruction and generation. In ImageNet generation experiments at 256 and 512 resolution, TiTok not only achieves generation quality exceeding that of diffusion models, but also generates images hundreds of times faster.

About the Doubao Large Model Team

The ByteDance Doubao large model team was established in 2023 and is committed to developing the industry's most advanced AI large-model technology, becoming a world-class research team, and contributing to technological and social development.

The Doubao large model team has a long-term vision and commitment in the field of AI, with research directions spanning NLP, CV, speech, and more, and laboratories and research positions in China, Singapore, the United States, and elsewhere. Drawing on the platform's ample data and computing resources, the team continues to invest in these fields and has launched a self-developed general-purpose large model providing multimodal capabilities. It supports 50+ downstream businesses such as Doubao, Coze, and Jimeng, and is open to enterprise customers through Volcano Engine. Today, the Doubao app is the AIGC application with the largest user base in the Chinese market.

Welcome to join the ByteDance Doubao large model team. Click the link below to learn about the ByteDance Top Seed program:
https://mp.weixin.qq.com/s/ZjQ-v6reZXhBP6G27cbmlQ
