A Deeper Understanding of Vision Transformer: An Analysis of Vision Transformer

This article is reprinted with the authorization of the Autonomous Driving Heart public account. Please contact the source for permission when reprinting.

Foreword & The Author's Personal Understanding

Currently, algorithm models based on the Transformer structure have had a great impact in the field of computer vision (CV), surpassing previous convolutional neural network (CNN) models on many basic computer vision tasks. Below are the latest LeaderBoard rankings for several basic computer vision tasks; they show the dominance of Transformer-based models across these tasks.

  • Image Classification Task

First is the LeaderBoard on ImageNet. As can be seen from the list, every model in the top five uses the Transformer structure, while the CNN structure is only partially used, or combined with a Transformer.


LeaderBoard for image classification task

  • Object detection task

Next is the LeaderBoard on COCO test-dev. As can be seen from the list, more than half of the top five entries are based on DETR-like algorithm structures.

LeaderBoard for the object detection task

  • Semantic segmentation task

The last is the LeaderBoard on ADE20K val. It can also be seen from the list that among the top entries, the Transformer structure still occupies the dominant position.

LeaderBoard for semantic segmentation tasks

Although Transformer has shown great development potential, the computer vision community has not yet fully grasped the inner workings of Vision Transformer, nor the basis for its decisions (output predictions), so the need for interpretability has gradually become prominent. Only by understanding how such models make decisions can we both improve their performance and build trust in artificial intelligence systems.

The main purpose of this article is to study the different interpretability methods for Vision Transformer and classify them according to research motivation, structure type, and application scenario, forming a review article.

Analysis of Vision Transformer

As just mentioned, the Vision Transformer structure has achieved very good results on various basic computer vision tasks, so many methods have emerged in the computer vision community to enhance its interpretability. In this article, we mainly focus on the classification task and select the latest and most classic works across five aspects: Common Attribution Methods, Attention-based Methods, Pruning-based Methods, Inherently Explainable Methods, and Other Tasks. Here is the mind map from the paper; you can read in more detail based on what interests you~


Mind map of this article

Common Attribution Methods

Attribution-based methods usually explain how the model's input features gradually produce the final output. This type of method mainly measures the correlation between the model's prediction results and its input features.

Among these methods, algorithms such as Grad-CAM and Integrated Gradients are applied directly to Vision Transformer-based models. Other methods, such as SHAP and Layer-Wise Relevance Propagation (LRP), have also been used to explore ViT-based architectures. However, because methods such as SHAP have a very high computational cost, the recent ViT-Shapley algorithm was designed to adapt them to ViT-related applications.
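As a concrete illustration of the attribution idea, Integrated Gradients attributes a prediction to input features by averaging the model's gradients along a straight-line path from a baseline to the input. The sketch below is a minimal, framework-free version; the toy linear model and its weights are illustrative only, not taken from any of the papers above:

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=50):
    """Approximate Integrated Gradients attributions for input x.

    Averages the gradient of f along the straight-line path from
    `baseline` to `x`, then scales by (x - baseline)."""
    alphas = np.linspace(0.0, 1.0, steps)
    # Gradients at interpolated points along the path
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    avg_grad = grads.mean(axis=0)
    return (x - baseline) * avg_grad

# Toy model: f(x) = w . x, whose gradient is the constant w,
# so IG recovers the exact per-feature contribution w_i * x_i.
w = np.array([1.0, -2.0, 3.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w

x = np.array([0.5, 1.0, 2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(f, grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline)
assert np.isclose(attr.sum(), f(x) - f(baseline))
```

For a real ViT, `f` would be the class logit and `grad_f` the backpropagated gradient with respect to the input pixels or patch embeddings; the path-averaging logic stays the same.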

Attention-based Methods

Vision Transformer obtains its powerful feature extraction capability through the attention mechanism. Among attention-based interpretability methods, visualizing the attention weights is a very effective approach. This article introduces several such visualization techniques.

  • Raw Attention: As the name suggests, this method visualizes the attention weight maps produced by the intermediate layers of the network in order to analyze the model's behavior.
  • Attention Rollout: This technique tracks the transmission of information from the input tokens to the intermediate embeddings by multiplying the attention weights across the different layers of the network.
  • Attention Flow: This method treats the attention graph as a flow network and uses a maximum-flow algorithm to compute the maximum flow from the intermediate embeddings to the input tokens.
  • partialLRP: This method is proposed for visualizing the multi-head attention mechanism in Vision Transformer while also considering the importance of each attention head.
  • Grad-SAM: This method alleviates the limitation of relying solely on the raw attention matrix to explain model predictions by incorporating gradients into the raw attention weights.
  • Beyond Intuition: This method is also an approach to explaining attention, consisting of two stages: attention perception and reasoning feedback.
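The Attention Rollout idea above can be sketched in a few lines. The following is a minimal version under the common simplifications of head-averaged, row-stochastic attention matrices and a 0.5/0.5 mix with the identity to model residual connections; the random toy matrices stand in for a real model's attention:

```python
import numpy as np

def attention_rollout(attentions):
    """Roll attention out across layers.

    `attentions`: list of per-layer attention matrices, each of shape
    (tokens, tokens), already averaged over heads. Residual connections
    are modelled by mixing in the identity and re-normalizing; the layers
    are then multiplied together to track token-to-token information flow."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for A in attentions:
        A_res = 0.5 * A + 0.5 * np.eye(n)          # account for skip connection
        A_res /= A_res.sum(axis=-1, keepdims=True)  # keep rows stochastic
        rollout = A_res @ rollout                   # accumulate layer by layer
    return rollout

# Toy example: 2 layers, 3 tokens (e.g. [CLS] + 2 patches)
rng = np.random.default_rng(0)
layers = []
for _ in range(2):
    A = rng.random((3, 3))
    layers.append(A / A.sum(axis=-1, keepdims=True))  # row-stochastic

R = attention_rollout(layers)
# Each row of the rollout is still a distribution over input tokens;
# row 0 gives the [CLS] token's attribution over the inputs.
assert np.allclose(R.sum(axis=-1), 1.0)
```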

Finally, here is an attention visualization diagram of different interpretability methods. You can feel the difference between different visualization methods for yourself.


Comparison of attention maps of different visualization methods

Pruning-based Methods

Pruning is a very effective technique that is widely used to optimize the efficiency and complexity of Transformer structures. Pruning reduces the model's parameter count and computational cost by removing redundant or unimportant information. Although pruning algorithms focus on improving computational efficiency, this type of algorithm can still provide model interpretability.

The Vision-Transformer-based pruning methods in this article can be roughly divided into three categories: explicitly explainable, implicitly explainable, and possibly explainable.

  • Explicitly Explainable
    Among pruning-based methods, several provide simpler and more explainable models.
  • IA-RED^2: The goal of this method is to achieve an optimal balance between the computational efficiency and the interpretability of the model, while maintaining the flexibility of the original ViT architecture.
  • X-Pruner: This method prunes salient units by learning an interpretability-aware mask that measures each unit's contribution to predicting a specific class.
  • Vision DiffMask: This pruning method adds a gating mechanism to each ViT layer; through gating, the model's output can be maintained while parts of the input are masked out. Beyond this, the model can clearly identify the subset of the remaining image that triggers its prediction, allowing a better understanding of the model's behavior.
  • Implicitly Explainable
    Among the pruning-based methods, there are also some classic methods that fall into the implicitly explainable category.
  • Dynamic ViT: This method uses a lightweight prediction module to estimate the importance of each token based on the current features. The module is added to different layers of ViT to prune redundant tokens hierarchically. Most importantly, the method enhances interpretability by gradually locating the key image parts that contribute most to the classification.
  • Efficient Vision Transformer (EViT): The core idea of this method is to accelerate ViT by reorganizing tokens. By computing attention scores, EViT retains the most relevant tokens while fusing less relevant tokens into an additional token. To evaluate EViT's interpretability, the authors visualized the token identification process on multiple input images.

  • Possibly Explainable
    Although these methods were not originally designed to improve the explainability of ViT, they offer great potential for further research into model explainability.
  • Patch Slimming: This method accelerates ViT through a top-down approach that focuses on redundant patches in images. The algorithm selectively retains the key patches' ability to highlight important visual features, thereby enhancing interpretability.
  • Hierarchical Visual Transformer (HVT): This method is introduced to enhance the scalability and performance of ViT. As the model depth increases, the sequence length gradually decreases. Furthermore, by dividing ViT blocks into multiple stages and applying pooling operations at each stage, computational efficiency is significantly improved. Given the model's progressive concentration on its most important components, there is an opportunity to explore the impact of this design on interpretability and explainability.
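To make the token-reorganization idea concrete, here is a minimal EViT-style sketch (not the official implementation): patch tokens are ranked by their [CLS] attention scores, the top-k are kept, and the rest are fused into a single extra token weighted by their attention; all shapes and values below are illustrative:

```python
import numpy as np

def prune_tokens(tokens, cls_attn, keep_ratio=0.5):
    """EViT-style token reorganization sketch.

    `tokens`: (N, D) patch embeddings (excluding [CLS]);
    `cls_attn`: (N,) attention scores from the [CLS] token.
    Keeps the top-k most attended tokens and fuses the rest into one
    extra token, weighted by their attention scores."""
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    order = np.argsort(cls_attn)[::-1]     # rank tokens by attention, descending
    keep, drop = order[:k], order[k:]
    if drop.size == 0:                     # nothing to fuse
        return tokens[keep], keep
    # Attention-weighted average of the discarded tokens
    fused = (cls_attn[drop, None] * tokens[drop]).sum(0) / cls_attn[drop].sum()
    return np.vstack([tokens[keep], fused]), keep

tokens = np.arange(12, dtype=float).reshape(4, 3)   # 4 patch tokens, dim 3
cls_attn = np.array([0.1, 0.4, 0.3, 0.2])
pruned, kept = prune_tokens(tokens, cls_attn, keep_ratio=0.5)
assert pruned.shape == (3, 3)            # 2 kept + 1 fused token
assert set(kept) == {1, 2}               # the most attended patches survive
```

Because the kept indices directly name the patches the model deems most relevant, the same ranking that speeds up inference also doubles as an interpretability signal.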

Inherently Explainable Methods

Among the different interpretable methods, there is a class that develops models which are intrinsically explainable. However, these models usually find it difficult to reach the same level of accuracy as more complex black-box models, so a careful balance must be struck between interpretability and performance. Next, some classic works are briefly introduced.

  • ViT-CX: This method is a mask-based explanation method tailored to ViT models. It relies on patch embeddings and their impact on the model output, rather than on the attention weights themselves. The method consists of two stages, mask generation and mask aggregation, thereby providing more meaningful saliency maps.
  • ViT-NeT: This method is a new neural tree decoder that describes the decision-making process through a tree structure and prototypes. The algorithm also allows the results to be interpreted visually.
  • R-Cut: This method enhances the interpretability of ViT through Relationship Weighted Out and Cut. It includes two modules, the Relationship Weighted Out module and the Cut module. The former extracts class-specific information from intermediate layers, emphasizing relevant features; the latter performs fine-grained feature decomposition. By integrating the two modules, dense class-specific interpretability maps can be generated.
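The mask-generation-and-aggregation idea behind ViT-CX can be illustrated with a generic masked-input saliency sketch. This is a deliberate simplification (random patch masks rather than the paper's embedding-derived masks), and `score_fn` is a stand-in for the model's class score on a masked input:

```python
import numpy as np

def mask_based_saliency(score_fn, n_patches, n_masks=200, p_keep=0.5, seed=0):
    """Generic masked-input saliency, a simplified sketch of the
    mask-generation + mask-aggregation pipeline.

    Random binary patch masks are scored by the model's output on the
    masked input; each patch's saliency is the average score of the
    masks that keep that patch visible."""
    rng = np.random.default_rng(seed)
    masks = (rng.random((n_masks, n_patches)) < p_keep).astype(float)
    scores = np.array([score_fn(m) for m in masks])
    # Aggregate: weight each mask by its score, normalize per patch
    saliency = (scores[:, None] * masks).sum(0) / masks.sum(0).clip(min=1)
    return saliency

# Toy "model": the class score is high only when patch 3 is visible
score_fn = lambda m: float(m[3])
sal = mask_based_saliency(score_fn, n_patches=8)
assert sal.argmax() == 3    # the causally important patch ranks first
```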

Other Tasks

Explainability for ViT-based architectures on other computer vision tasks is still being explored. Some interpretability methods specifically targeting other tasks have been proposed; the latest work in related fields is introduced below.

  • eX-ViT: This algorithm is a new explainable Vision Transformer for weakly supervised semantic segmentation. To improve interpretability, an attribute-oriented loss module is introduced that contains three losses: a global-level attribute-oriented loss, a local-level attribute discriminability loss, and an attribute diversity loss. The first uses attention maps to create interpretable features, while the latter two enhance attribute learning.
  • DINO: This method is a simple self-supervised approach and a form of label-free self-distillation. The learned attention maps effectively preserve the semantic regions of the image, thereby achieving interpretability.
  • Generic Attention-model: This method explains predictions made by Transformer-based architectures. It is applied to the three most commonly used attention setups: pure self-attention, self-attention combined with co-attention, and encoder-decoder attention. To test the model's interpretability, the authors used a visual question answering task; however, the method is also applicable to other CV tasks such as object detection and image segmentation.
  • ATMAN: This is a modality-agnostic perturbation method that uses the attention mechanism to generate a relevance map of the input with respect to the output prediction. The approach attempts to understand Transformer predictions through memory-efficient attention operations.
  • Concept-Transformer: This algorithm generates explanations of the model's output by highlighting attention scores on user-defined high-level concepts, ensuring trustworthiness and reliability.
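As a small illustration of the DINO-style attention visualization mentioned above, the last layer's [CLS]-to-patch attention row (head-averaged) can simply be reshaped into a 2-D heatmap over the patch grid. The 14x14 grid below assumes a 224-pixel image with 16-pixel patches; the random input stands in for a real attention row:

```python
import numpy as np

def cls_attention_map(attn_last, grid=(14, 14)):
    """Reshape the last layer's [CLS]-to-patch attention (already averaged
    over heads) into a 2-D heatmap over the patch grid.

    `attn_last`: (1 + H*W,) attention row of the [CLS] token; index 0 is
    the [CLS] token attending to itself and is dropped."""
    patch_attn = attn_last[1:]
    assert patch_attn.size == grid[0] * grid[1]
    heatmap = patch_attn.reshape(grid)
    return heatmap / heatmap.max()   # normalize to [0, 1] for display

attn = np.random.default_rng(1).random(1 + 14 * 14)
hm = cls_attention_map(attn)
assert hm.shape == (14, 14)
```

In practice the heatmap is then upsampled to the image resolution and overlaid on the input, which is how the semantic-region visualizations in the DINO paper are produced.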

Future Outlook

Currently, algorithm models based on the Transformer architecture have achieved outstanding results in various computer vision tasks. However, there is still little research on how to use interpretability methods to support model debugging and improvement and to strengthen model fairness and reliability, especially in ViT applications.

Taking the image classification task as the focus, this article classifies and organizes interpretability methods for Vision Transformer to help readers better understand the architecture of such models. I hope it is helpful to everyone.


Original link: https://mp.weixin.qq.com/s/URkobeRNB8dEYzrECaC7tQ

Statement
This article is reproduced from 51CTO.COM.