


Over the past decade or so, the rapid development of AI has been driven mainly by advances in engineering practice; AI theory has played little role in guiding algorithm design, and empirically designed neural networks remain black boxes.
With the popularity of ChatGPT, AI's capabilities have been relentlessly exaggerated and hyped, even portrayed as a threat holding society hostage. Making the design of the Transformer architecture transparent has become urgent.
Recently, Professor Ma Yi's team released its latest research: CRATE, a white-box Transformer model that is fully mathematically interpretable and achieves performance close to ViT on the real-world dataset ImageNet-1K.
Code link: https://github.com/Ma-Lab-Berkeley/CRATE
Paper link: https://arxiv.org/abs/2306.01129
In this paper, the researchers argue that the goal of representation learning is to compress and transform the data (for example, the distribution of a token set) toward a mixture of low-dimensional Gaussian distributions supported on incoherent subspaces. The quality of the final representation can be measured by a unified objective function called sparse rate reduction.
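Concretely, following the paper's notation (sketched here with mild simplification; exact constants follow the paper's conventions), for token representations Z in R^{d x N} and K subspaces with orthonormal bases U_k in R^{d x p}, the sparse rate reduction objective reads:

```latex
% Sparse rate reduction: compress (rate reduction) plus sparsify (L0 penalty).
% eps is the allowed coding distortion; lambda weights the sparsity term.
\max_{Z}\;\; \Delta R\bigl(Z;\,U_{[K]}\bigr) - \lambda \lVert Z \rVert_{0}
\;=\; R(Z) - R^{c}\bigl(Z;\,U_{[K]}\bigr) - \lambda \lVert Z \rVert_{0},
\quad\text{where}
\]
\[
R(Z) = \tfrac{1}{2}\log\det\Bigl(I + \tfrac{d}{N\varepsilon^{2}}\, Z^{\top} Z\Bigr),
\qquad
R^{c}\bigl(Z;\,U_{[K]}\bigr) = \sum_{k=1}^{K} \tfrac{1}{2}\log\det\Bigl(I + \tfrac{p}{N\varepsilon^{2}}\,(U_{k}^{\top} Z)^{\top}(U_{k}^{\top} Z)\Bigr).
```

Here R measures the coding rate of the whole token set (large when tokens spread out), while R^c measures the rate when tokens are coded against the K subspaces; maximizing their difference compresses tokens onto the subspaces, and the L0 term sparsifies the result.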
From this perspective, popular deep network models such as the Transformer can naturally be viewed as realizing iterative schemes that incrementally optimize this objective.
In particular, the results show that the standard Transformer block can be derived from alternating optimization of complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step that compresses the token set by minimizing its lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the token representations.
This discovery also motivated the design of a family of white-box, Transformer-like deep network architectures that are fully mathematically interpretable. Despite their simple design, experiments show that these networks indeed learn to optimize their design objective: they compress and sparsify representations of large-scale real-world visual datasets such as ImageNet, and achieve performance close to highly engineered Transformer models such as ViT.
Turing Award winner Yann LeCun endorsed Professor Ma Yi's work, noting that the Transformer uses a method similar to LISTA (Learned Iterative Shrinkage and Thresholding Algorithm) to incrementally optimize sparse compression.
Professor Ma Yi received dual bachelor's degrees in automation and applied mathematics from Tsinghua University in 1995, a master's degree in EECS from the University of California, Berkeley in 1997, and a master's degree in mathematics together with a PhD in EECS in 2000.
In 2018, Professor Ma Yi joined the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. In January this year, he joined the University of Hong Kong as Director of its Data Science Institute, and recently also took over as Head of HKU's Department of Computer Science.
His main research directions are 3D computer vision, low-dimensional models for high-dimensional data, scalable optimization, and machine learning. Recent research topics include large-scale 3D geometric reconstruction and interaction, and the relationship between low-dimensional models and deep networks.
Making the Transformer a white box
The main purpose of this paper is to design a Transformer-like network structure within a more unified framework, achieving both mathematical interpretability and good practical performance.
To this end, the researchers propose learning a sequence of incremental mappings to obtain a maximally compressed and sparsest representation of the input data (token sets), optimizing a unified objective function: sparse rate reduction.
This framework unifies three threads: Transformer models and self-attention, diffusion models and denoising, and structure-seeking models and rate reduction. It shows that Transformer-like deep network layers can be naturally derived by unrolling iterative optimization schemes that incrementally optimize the sparse rate reduction objective.
The objective of the mapping
Self-Attention via Denoising Tokens Towards Multiple Subspaces
Using an idealized model of the token distribution, the researchers show that if tokens are iteratively denoised toward a family of low-dimensional subspaces, the associated score function takes an explicit form similar to the self-attention operator in the Transformer.
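One way to make this concrete is via Tweedie's formula, a standard identity this line of reasoning builds on (stated here under the assumption of Gaussian noise): for a noisy token x = z + sigma * w with w ~ N(0, I), the optimal denoiser is expressed through the score of the noisy density q_sigma:

```latex
\mathbb{E}[\,z \mid x\,] \;=\; x + \sigma^{2}\,\nabla_{x} \log q_{\sigma}(x).
```

When the token distribution is (approximately) a mixture of low-dimensional Gaussians on subspaces U_[K], this score decomposes into projections onto the subspaces weighted by softmax-like posterior probabilities, which is what gives the denoising step its self-attention form.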
Self-Attention via Compressing Token Sets through Optimizing Rate Reduction
The researchers derive the multi-head self-attention layer as an unrolled gradient descent step that minimizes the lossy coding rate part of the rate reduction objective, demonstrating an alternative interpretation of the self-attention layer as compressing the token representations.
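A minimal NumPy sketch of this reading of attention (an illustrative sketch, not the official implementation: the bases U_k, the step size kappa, and the softmax axis are assumptions here, with the paper's constants folded into kappa):

```python
import numpy as np

def softmax(A, axis=0):
    """Numerically stable softmax along the given axis."""
    A = A - A.max(axis=axis, keepdims=True)
    E = np.exp(A)
    return E / E.sum(axis=axis, keepdims=True)

def mssa_step(Z, U_list, kappa=0.1):
    """One multi-head subspace self-attention update, read as a
    gradient-descent-like step on the compression term R^c.
    Z: (d, N) tokens as columns; U_list: K subspace bases, each (d, p)."""
    update = np.zeros_like(Z)
    for U in U_list:
        P = U.T @ Z                    # project tokens onto the subspace: (p, N)
        A = softmax(P.T @ P, axis=0)   # token-token similarity within the subspace
        update += U @ (P @ A)          # subspace self-attention, lifted back to R^d
    return Z + kappa * update          # residual update, one unrolled step

# Toy usage: 16 tokens of dimension 64, 4 heads with 8-dimensional subspaces.
rng = np.random.default_rng(0)
Z = rng.normal(size=(64, 16))
U_list = [np.linalg.qr(rng.normal(size=(64, 8)))[0] for _ in range(4)]
Z_next = mssa_step(Z, U_list)
```

Note how queries, keys, and values all coincide with the projected tokens U_k^T Z; this weight tying is exactly what distinguishes the derived operator from standard multi-head attention.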
MLP via Iterative Shrinkage-Thresholding Algorithms (ISTA) for Sparse Coding
The researchers show that the multi-layer perceptron immediately following the multi-head self-attention layer in a Transformer block can be interpreted as (and replaced by) a layer that incrementally optimizes the remaining part of the sparse rate reduction objective by constructing a sparse coding of the token representations.
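A corresponding sketch of one unrolled ISTA step standing in for the MLP (hedged: the dictionary D, step size eta, and sparsity weight lam are illustrative; CRATE's variant uses a ReLU, i.e., non-negative soft-thresholding):

```python
import numpy as np

def ista_block(Z, D, eta=0.1, lam=0.1):
    """One ISTA step sparsifying token representations against a (here,
    square) dictionary D: Z <- ReLU(Z + eta * D^T (Z - D Z) - eta * lam).
    Z: (d, N) tokens as columns; D: (d, d)."""
    residual = Z - D @ Z                                         # reconstruction error of Z under D
    return np.maximum(Z + eta * (D.T @ residual) - eta * lam, 0.0)  # proximal (ReLU) step
```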
CRATE
Based on the above understanding, the researchers created a new white-box Transformer architecture, CRATE (Coding RAte reduction TransformEr), in which the learning objective, the network architecture, and the final learned representations are all fully mathematically interpretable: each layer performs one step of an alternating minimization algorithm that optimizes the sparse rate reduction objective.
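Putting the two sketches above together, one layer in this simplified reading (omitting any normalization the real model may use) is just one compression step followed by one sparsification step:

```python
def crate_layer(Z, U_list, D, kappa=0.1, eta=0.1, lam=0.1):
    """One white-box layer = one round of alternating minimization:
    compress against the subspaces (MSSA), then sparsify against D (ISTA)."""
    Z_half = mssa_step(Z, U_list, kappa)     # attention half: reduce R^c
    return ista_block(Z_half, D, eta, lam)   # MLP half: reduce the sparsity term
```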
Notably, CRATE chooses the simplest possible construction at every stage; as long as a new component preserves the same conceptual role as the part it replaces, it can be swapped in directly to obtain a new white-box architecture.
Experimental Section
The researchers' experimental goals were not merely to compete with other well-designed Transformers using this basic design, but also to show that:
1. Unlike empirically designed black-box networks, which are usually evaluated only on end-to-end performance, a white-box network allows one to look inside the deep architecture and verify whether the layers of the learned network actually perform their design goal, i.e., incrementally optimize the objective.
2. Although the CRATE architecture is simple, the experimental results should verify its substantial potential: matching the performance of highly engineered Transformer models on large-scale real-world datasets and tasks.
Model architecture
By varying the token dimension, the number of heads, and the number of layers, the researchers created four CRATE models of different sizes, denoted CRATE-Tiny, CRATE-Small, CRATE-Base, and CRATE-Large.
Datasets and optimization
The paper mainly uses ImageNet-1K as the test bed, training CRATE models of different sizes with the Lion optimizer.
Transfer learning performance was also evaluated: the model trained on ImageNet-1K served as a pre-trained model that was then fine-tuned on several commonly used downstream datasets (CIFAR10/100, Oxford Flowers, Oxford-IIIT-Pets).
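As a hedged illustration of such a setup (not the repo's actual training script; the `lion-pytorch` package and all hyperparameters below are assumptions for the sketch, not the paper's values):

```python
# Skeleton of fine-tuning a pre-trained CRATE-style model with Lion.
# `model` and `loader` are assumed to be provided by the caller.
import torch
from lion_pytorch import Lion  # pip install lion-pytorch

def finetune(model, loader, epochs=10, device="cuda"):
    model.to(device).train()
    opt = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```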
Do CRATE's layers achieve their design goals?
As the layer index increases, both the compression and sparsification terms of the CRATE-Small model improve in most cases; the uptick in the sparsity measure at the last layer is due to the additional linear layer used for classification.
The results show that CRATE closely matches its original design goal: once trained, it indeed learns to compress and sparsify the representations incrementally, layer by layer.
Measuring the compression and sparsification terms on CRATE models of other sizes, as well as on intermediate checkpoints, yields highly consistent results: models with more layers tend to optimize the objective more effectively, validating the earlier understanding of each layer's role.
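A sketch of how such a per-layer probe could look (hedged: the paper tracks the compression term R^c and the sparsity of the ISTA output; the measures below are simplified stand-ins computed from hooked layer outputs):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Compression measure: R(Z) = 1/2 logdet(I + d/(N*eps^2) Z^T Z)."""
    d, N = Z.shape
    sign, logdet = np.linalg.slogdet(np.eye(N) + (d / (N * eps**2)) * (Z.T @ Z))
    return 0.5 * logdet

def sparsity(Z, tol=1e-6):
    """Sparsity measure: fraction of near-zero entries (a ||Z||_0 proxy)."""
    return float(np.mean(np.abs(Z) < tol))

# Hypothetical usage: layer_outputs gathered via forward hooks on a trained model.
# rates      = [coding_rate(Z) for Z in layer_outputs]
# sparsities = [sparsity(Z) for Z in layer_outputs]
```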
Performance comparison
The empirical performance of the proposed network was studied by measuring its top-1 accuracy on ImageNet-1K and its transfer learning performance on several widely used downstream datasets.
Since the designed architecture shares parameters in both the attention block (MSSA) and the MLP block (ISTA), the CRATE-Base model (22.08 million parameters) has a parameter count similar to ViT-Small (22.05 million).
With a similar number of parameters, the proposed network achieves ImageNet-1K and transfer learning performance comparable to ViT, while CRATE's design is simpler and much more interpretable.
In addition, under the same training hyperparameters, CRATE scales gracefully: performance keeps improving as the model grows, whereas directly scaling up ViT on ImageNet-1K does not always yield consistent performance gains.
In other words, despite its simplicity, the CRATE network can already learn the desired compressed and sparse representations on large-scale real-world datasets, and achieves performance on tasks such as classification and transfer learning comparable to more heavily engineered Transformer networks such as ViT.
