LeCun endorses it: Professor Ma Yi's five-year masterpiece, a fully mathematically interpretable white-box Transformer whose performance is not inferior to ViT

Over the past decade or so, the rapid development of AI has been driven mainly by progress in engineering practice; AI theory has played little role in guiding algorithm design, and empirically designed neural networks remain black boxes.

With the popularity of ChatGPT, AI's capabilities have been constantly exaggerated and hyped, even portrayed as a threat holding society hostage. Making the design of the Transformer architecture transparent has become urgent.

Recently, Professor Ma Yi's team released its latest research: CRATE, a white-box Transformer model that is fully mathematically interpretable and achieves performance close to ViT on the real-world dataset ImageNet-1K.

Code link: https://github.com/Ma-Lab-Berkeley/CRATE

Paper link: https://arxiv.org/abs/2306.01129

In the paper, the researchers argue that the goal of representation learning is to compress and transform the data (for example, the distribution of a token set) toward a mixture of low-dimensional Gaussian distributions supported on incoherent subspaces. The quality of the final representation can be measured by a unified objective function called sparse rate reduction.
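
To make the objective concrete, below is a minimal numerical sketch in the spirit of the rate reduction (MCR²) line of work that this paper builds on. The exact objective in the paper measures the compressed term against learned incoherent subspaces and targets l0 sparsity, so the group-membership form of compressed_rate and the l1 penalty used here are simplifying assumptions.

import numpy as np

def coding_rate(Z, eps=0.5):
    # Lossy coding rate of a d x n token matrix Z:
    # R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T)
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)[1]

def compressed_rate(Z, groups, eps=0.5):
    # Coding rate of the tokens after splitting them into groups
    # (one group per low-dimensional subspace / cluster).
    d, n = Z.shape
    total = 0.0
    for idx in groups:
        Zk = Z[:, idx]
        nk = Zk.shape[1]
        total += (nk / (2.0 * n)) * np.linalg.slogdet(
            np.eye(d) + (d / (nk * eps ** 2)) * Zk @ Zk.T)[1]
    return total

def sparse_rate_reduction(Z, groups, lam=0.1, eps=0.5):
    # Expand the whole token set, compress each group, and penalize dense
    # representations (l1 used here as a convex stand-in for l0).
    return coding_rate(Z, eps) - compressed_rate(Z, groups, eps) - lam * np.abs(Z).sum()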

From this perspective, popular deep networks such as the Transformer can naturally be viewed as realizing iterative schemes that incrementally optimize this objective.

In particular, the results show that a standard Transformer block can be derived from alternating optimization of complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step that minimizes the lossy coding rate in order to compress the token set, and the subsequent multi-layer perceptron can be viewed as sparsifying the token representations.

This finding also motivated the design of a family of white-box, Transformer-like deep network architectures that are fully mathematically interpretable. Despite their simple design, experimental results show that these networks indeed learn to optimize their design objective: they compress and sparsify representations of large-scale real-world visual datasets such as ImageNet, and achieve performance close to highly engineered Transformer models such as ViT.

Turing Award winner Yann LeCun also endorsed Professor Ma Yi's work, noting that the Transformer uses a method similar to LISTA (Learned Iterative Shrinkage and Thresholding Algorithm) to incrementally optimize sparse compression.

Professor Ma Yi received dual bachelor's degrees in automation and applied mathematics from Tsinghua University in 1995, a master's degree in EECS from the University of California, Berkeley in 1997, and a master's degree in mathematics together with a PhD in EECS from Berkeley in 2000.

In 2018, Professor Ma Yi joined the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. In January of this year, he joined the University of Hong Kong as director of its Data Science Research Institute, and recently also took over as head of the university's Department of Computing.

His main research directions are 3D computer vision, low-dimensional models for high-dimensional data, scalable optimization, and machine learning. Recent research topics include large-scale 3D geometric reconstruction and interaction, and the relationship between low-dimensional models and deep networks.

Making the Transformer a white box

The main aim of the paper is to design a Transformer-like network architecture within a more unified framework that achieves both mathematical interpretability and good practical performance.

To this end, the researchers propose learning a sequence of incremental mappings that yield a maximally compressed, sparsest representation of the input data (a token set), optimizing a unified objective function: sparse rate reduction.

This framework unifies "Transformer models and self-attention", "diffusion models and denoising", and "structure-seeking models and rate reduction", and shows that Transformer-like deep network layers can be derived naturally by unrolling iterative optimization schemes that incrementally optimize the sparse rate reduction objective.

Self-Attention via Denoising Tokens Towards Multiple Subspaces

Using an idealized model of the token distribution, the researchers show that if the tokens are iteratively denoised toward a family of low-dimensional subspaces, the associated score function takes an explicit form that resembles the self-attention operator in the Transformer.
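
To see why denoising toward multiple subspaces produces softmax-style weights, consider a generic mixture of K Gaussians as an illustrative stand-in for the paper's idealized token model (the paper's actual model uses low-dimensional Gaussians supported on subspaces, so the unconstrained means and covariances below are assumptions):

\nabla_z \log p(z) \;=\; \sum_{k=1}^{K} \gamma_k(z)\,\Sigma_k^{-1}(\mu_k - z),
\qquad
\gamma_k(z) \;=\; \operatorname{softmax}_k\!\big(\log \pi_k + \log \mathcal{N}(z;\mu_k,\Sigma_k)\big),
\qquad
p(z) \;=\; \sum_{k=1}^{K} \pi_k\,\mathcal{N}(z;\mu_k,\Sigma_k).

A score-based denoising step therefore compares a token against every component, weights the comparisons with a softmax, and aggregates the results, which is the same structural pattern as a self-attention head.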

Self-Attention via Compressing Token Sets through Optimizing Rate Reduction

The researchers derive the multi-head self-attention layer as an unrolled gradient descent step that minimizes the lossy coding rate part of the rate reduction objective, demonstrating an alternative interpretation of self-attention layers as compressing the token representations.
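
As an illustration of this reading, here is a small sketch of a subspace-style attention operator in which each head's query, key, and value projections are tied to a single subspace basis U, matching the parameter sharing in the MSSA block mentioned later; the softmax axis, the scaling constant, and the absence of residual weighting are assumptions rather than the paper's exact choices.

import numpy as np

def softmax(X, axis=0):
    E = np.exp(X - X.max(axis=axis, keepdims=True))
    return E / E.sum(axis=axis, keepdims=True)

def ssa(Z, U):
    # One head: project the d x n tokens onto the head's subspace basis U (d x p),
    # compare the projected tokens with a softmax, and aggregate them.
    # Query, key, and value projections are all the same matrix U.
    P = U.T @ Z                      # p x n projected tokens
    A = softmax(P.T @ P, axis=0)     # n x n attention weights among tokens
    return P @ A                     # p x n compressed codes

def mssa(Z, subspaces, eps=0.5):
    # Multi-head version: run each head on its own subspace and map the results
    # back to token space; this acts as one (approximate) gradient step that
    # reduces the lossy coding rate of the token set.
    K = len(subspaces)
    p = subspaces[0].shape[1]
    out = sum(U @ ssa(Z, U) for U in subspaces)
    return (p / (K * eps ** 2)) * out    # scaling constant is an assumption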

MLP via Iterative Shrinkage-Thresholding Algorithms (ISTA) for Sparse Coding

The researchers show that the multi-layer perceptron immediately following the multi-head self-attention layer in a Transformer block can be interpreted as (and replaced by) a layer that incrementally optimizes the remaining part of the sparse rate reduction objective by constructing a sparse coding of the token representations.
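
For reference, below is a minimal sketch of the classical ISTA iteration for sparse coding; in this reading, the MLP replacement corresponds to a single unrolled step of this kind with a learned dictionary D. The step size, the warm start from the incoming tokens, the square d x d shape of D, and the two-sided soft threshold (the paper's block may use a one-sided nonlinearity instead) are all assumptions here.

import numpy as np

def soft_threshold(X, tau):
    # Proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def ista(Z, D, n_steps=1, step=0.1, lam=0.1):
    # ISTA for sparse coding: find a sparse X with D @ X close to Z by
    # alternating a gradient step on 0.5*||Z - D @ X||^2 with soft-thresholding.
    X = Z.copy()                     # warm start from the tokens themselves
    for _ in range(n_steps):
        X = soft_threshold(X + step * D.T @ (Z - D @ X), step * lam)
    return X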

CRATE

Based on this understanding, the researchers created CRATE (Coding RAte reduction TransformEr), a new white-box Transformer architecture in which the learning objective, the deep architecture, and the final learned representations are all fully mathematically interpretable: each layer performs one step of an alternating minimization algorithm that optimizes the sparse rate reduction objective.
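
Putting the two halves together, one layer in the spirit of CRATE can be sketched as a compression step followed by a sparsification step, reusing the mssa and ista sketches above. The residual weighting and the layer normalization used in the actual model are omitted, and this is an illustration rather than the authors' implementation.

def crate_layer_sketch(Z, subspaces, D, eps=0.5, step=0.1, lam=0.1):
    # Compression half: attention against the learned subspaces reduces the
    # lossy coding rate of the token set.
    Z = Z + mssa(Z, subspaces, eps)
    # Sparsification half: one unrolled ISTA step against the dictionary D
    # sparsifies the token representations.
    return ista(Z, D, n_steps=1, step=step, lam=lam)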

Note that CRATE chooses the simplest possible construction at every stage; as long as the new components preserve the same conceptual role, they can be directly substituted to obtain a new white-box architecture.

Experimental Section

The researchers' experimental goals were not only to compete with other well-engineered Transformers using this basic design, but also to show that:

1. Unlike empirically designed black-box networks, which are usually evaluated only on end-to-end performance, white-box networks allow one to look inside the deep architecture and verify whether the layers of the learned network actually perform their design goal, namely incremental optimization of the objective.

2. Although the CRATE architecture is simple, the experimental results should verify its great potential, namely that it can match the performance of highly engineered Transformer models on large-scale real-world datasets and tasks.

Model architecture

By varying the token dimension, the number of heads, and the number of layers, the researchers created four CRATE models of different sizes, denoted CRATE-Tiny, CRATE-Small, CRATE-Base, and CRATE-Large.

Datasets and optimization

The paper mainly uses ImageNet-1K as the test bed and trains CRATE models of different sizes with the Lion optimizer.
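
As a rough illustration of such a training setup (not the authors' actual script), a single supervised training step might look like the following, assuming the third-party lion-pytorch package for the Lion optimizer; the learning rate, weight decay, schedule, and data augmentation used in the paper are not reproduced here, and the values shown are placeholders.

import torch.nn.functional as F
from lion_pytorch import Lion   # assumed community implementation of the Lion optimizer

def make_optimizer(model):
    # Placeholder hyperparameters, not the values reported in the paper.
    return Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)

def train_step(model, images, labels, optimizer):
    # One standard supervised step: cross-entropy on ImageNet-1K labels.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()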

CRATE's transfer learning performance was also evaluated: the model trained on ImageNet-1K was used as a pretrained model and then fine-tuned on several commonly used downstream datasets (CIFAR10/100, Oxford Flowers, Oxford-IIIT Pets).

Do CRATE's layers achieve their design goals?

As the layer index increases, both the compression and sparsification terms of the CRATE-Small model improve in most cases; the rise in the sparsity measure at the last layer is due to the additional linear layer used for classification.

The results show that CRATE matches its original design goal closely: once trained, it indeed learns to compress and sparsify the representations progressively through its layers.
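
One simple way to probe this behavior (a sketch, not the paper's exact measurement protocol, which has its own normalization and uses the compression term of the objective) is to record a sparsity statistic for each layer's output and check that it improves with depth, alongside the compressed coding rate from the earlier sketch.

import numpy as np

def sparsity_fraction(Z, tol=1e-6):
    # Fraction of (near-)zero entries in a layer's output token matrix;
    # a simple stand-in for the sparsification term tracked in the paper.
    return float(np.mean(np.abs(Z) <= tol))

# Collect per-layer outputs Z_1, ..., Z_L during a forward pass and plot
# sparsity_fraction(Z_l) together with compressed_rate(Z_l, groups) (see the
# earlier sketch) against the layer index l.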

Measuring the compression and sparsification terms on CRATE models of other sizes and on intermediate checkpoints yields similarly consistent results, with deeper models tending to optimize the objective more effectively, which validates the earlier understanding of each layer's role.

Performance comparison

The empirical performance of the proposed network is studied by measuring top-1 accuracy on ImageNet-1K and transfer learning performance on several widely used downstream datasets.

Since the designed architecture shares parameters within both the attention block (MSSA) and the MLP block (ISTA), the CRATE-Base model (22.08 million parameters) has a parameter count similar to ViT-Small (22.05 million).

With a similar number of model parameters, the proposed network achieves ImageNet-1K and transfer learning performance similar to ViT, while CRATE's design is simpler and much more interpretable.

In addition, under the same training hyperparameters, CRATE continues to scale: its performance keeps improving as the model is scaled up, whereas directly scaling up ViT on ImageNet-1K does not always lead to consistent performance gains.

In other words, despite its simplicity, the CRATE network can already learn the desired compressed and sparse representations on large-scale real-world datasets, and it achieves performance comparable to more heavily engineered Transformer networks (such as ViT) on tasks such as classification and transfer learning.
