
1. Research motivation


Masked image modeling (MIM, e.g., MAE) has proven to be a very effective self-supervised pre-training method. However, as shown in Figure 1, MIM works relatively better for larger models. When the model is very small (such as ViT-T with 5M parameters, a model size that is very important for real-world applications), MIM may even hurt the model to a certain extent. For example, ViT-L trained with MAE achieves 3.3% higher ImageNet classification accuracy than the same model trained with ordinary supervision, while ViT-T trained with MAE achieves 0.6% lower ImageNet classification accuracy than its supervised counterpart.

In this work, we propose TinyMIM, which uses knowledge distillation to transfer knowledge from large models to small models.




  • Paper address: https://arxiv.org/pdf/2301.01296.pdf
  • Code address: https://github.com/OliverRensu/TinyMIM

We systematically studied the impact of distillation objectives, data augmentation, regularization, auxiliary loss functions, and other factors on distillation. Strictly using only ImageNet-1K as training data (the teacher model is also trained only on ImageNet-1K) and ViT-B as the model, our method achieves the current best performance, as shown in the figure:

[Figure: comparison of TinyMIM, MAE, and DeiT (trained from scratch) across ViT model sizes]



We compare our method (TinyMIM) with the mask-reconstruction-based method MAE and the supervised method DeiT trained from scratch. MAE brings significant performance gains when the model is relatively large, but when the model is relatively small the gains are limited and may even hurt the final performance of the model. Our method, TinyMIM, achieves substantial improvements across different model sizes.

Our contributions are as follows:

1. Distillation targets: 1) distilling the relations between tokens is more effective than distilling class tokens or feature maps alone; 2) using an intermediate layer as the distillation target is more effective.
2. Data augmentation and network regularization: 1) using masked images as input performs worse; 2) the student model requires drop path, but the teacher model does not.
3. Auxiliary losses: an MIM auxiliary loss function brings no benefit.
4. Macro distillation strategy: we found that sequential distillation (ViT-B -> ViT-S -> ViT-T) works best.

2. Method



We systematically investigated the distillation targets, the input images, and the modules used as distillation targets.

2.1 Factors affecting the distillation effect

1) Features:

a. Intermediate block features and output features

Here i indexes the Transformer blocks. When i = L, the features are those of the Transformer output layer; when i < L, they are the features of an intermediate block.
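For concreteness, below is a minimal PyTorch sketch of how intermediate and output block features can be captured from a ViT with forward hooks. It assumes a timm-style ViT whose Transformer blocks live in `model.blocks`; the model name and the hooked block index are illustrative choices, not the exact TinyMIM configuration.

```python
import torch
import timm

# Illustrative model choice; TinyMIM's actual teacher/student checkpoints differ.
model = timm.create_model("vit_base_patch16_224", pretrained=False)
model.eval()

features = {}

def make_hook(name):
    def hook(module, inputs, output):
        # output: (B, N, C) token features produced by this Transformer block
        features[name] = output.detach()
    return hook

# Hook an intermediate block (i < L) and the last block (i = L).
handles = [
    model.blocks[8].register_forward_hook(make_hook("block_9")),      # intermediate
    model.blocks[-1].register_forward_hook(make_hook("block_last")),  # output layer
]

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))

for name, feat in features.items():
    print(name, feat.shape)  # e.g. torch.Size([1, 197, 768])

for h in handles:
    h.remove()
```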

b. Attention features and feed-forward (FFN) layer features




Each Transformer block contains an attention layer and an FFN layer; distilling from different layers has different effects.

c. QKV features




The attention layer contains Q, K, and V features, which are used to compute the attention map. We also investigated distilling these features directly.
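As an illustration, the sketch below shows how Q, K, and V can be recovered from the fused `qkv` projection of an attention block so that they can serve as distillation targets. It assumes a timm-style `Attention` module with a fused `qkv` linear layer and a `num_heads` attribute; these names are assumptions about the implementation, not part of the paper.

```python
import torch

def extract_qkv(attn_module, x):
    """Split the fused qkv projection of a ViT attention block into Q, K, V.

    Assumes a timm-style Attention module exposing `qkv` (nn.Linear) and
    `num_heads`; x has shape (B, N, C).
    """
    B, N, C = x.shape
    num_heads = attn_module.num_heads
    head_dim = C // num_heads
    qkv = attn_module.qkv(x)  # (B, N, 3*C)
    qkv = qkv.reshape(B, N, 3, num_heads, head_dim).permute(2, 0, 3, 1, 4)
    q, k, v = qkv.unbind(0)   # each (B, num_heads, N, head_dim)
    return q, k, v
```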

2) Relations



Q, K, and V are used to compute the attention map, and the relations between these features can also serve as targets for knowledge distillation.
3) Input: masked or not

Traditional knowledge distillation feeds the complete image directly as input. Since our method distills a masked-image-modeling model, we also explore whether masked images are suitable inputs for knowledge distillation.
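As a rough illustration of what a masked input looks like at the token level, here is a small sketch that randomly drops patch tokens before they are fed to the model. The 75% mask ratio is only an illustrative default; as noted above, the ablation finds that complete images work better than masked ones for distillation.

```python
import torch

def random_mask_tokens(tokens, mask_ratio=0.75):
    """Randomly drop a fraction of patch tokens (class token excluded).

    tokens: (B, 1 + N, C) with the class token at index 0.
    The mask ratio is illustrative; the ablation in this article finds
    that masked inputs perform worse than complete images.
    """
    cls_tok, patches = tokens[:, :1], tokens[:, 1:]
    B, N, C = patches.shape
    keep = max(1, int(N * (1 - mask_ratio)))
    scores = torch.rand(B, N, device=tokens.device)
    keep_idx = scores.argsort(dim=1)[:, :keep]  # indices of kept patches
    kept = torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
    return torch.cat([cls_tok, kept], dim=1)
```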

2.2 Comparison of knowledge distillation methods
1) Class token distillation:
The simplest approach, similar to DeiT, is to directly distill the class token of the MAE pre-trained model:

L_cls = D(c_S, c_T)

where c_S refers to the class token of the student model, c_T refers to the class token of the teacher model, and D(·, ·) is a distance function.
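A minimal sketch of this class-token distillation baseline is shown below, assuming the student and teacher features are already extracted as token sequences with the class token at index 0. The smooth L1 distance stands in for the generic D(·, ·) above and is an illustrative choice; matching feature dimensions are assumed for simplicity.

```python
import torch.nn.functional as F

def class_token_distill_loss(student_tokens, teacher_tokens):
    """Distill only the class token (index 0), DeiT-style.

    student_tokens, teacher_tokens: (B, 1 + N, C) token sequences.
    Smooth L1 is an illustrative choice of distance; if the student and
    teacher dimensions differ, a learned projection would be needed.
    """
    c_s = student_tokens[:, 0]              # student class token, (B, C)
    c_t = teacher_tokens[:, 0].detach()     # teacher class token, (B, C)
    return F.smooth_l1_loss(c_s, c_t)
```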
2) Feature distillation: we directly use feature distillation [1] as a comparison:

[Formula: feature distillation loss, which matches the student's block features to the teacher's, following [1]]
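The following sketch illustrates the feature-distillation baseline: student features from the chosen block are projected to the teacher's dimension and matched to the teacher's features. The linear projection and smooth L1 distance are illustrative stand-ins, not the exact recipe of [1].

```python
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillHead(nn.Module):
    """Project student features to the teacher's dimension and match them.

    A minimal sketch of feature distillation as a baseline; the linear
    projection and smooth L1 distance are illustrative choices.
    """
    def __init__(self, dim_student, dim_teacher):
        super().__init__()
        self.proj = nn.Linear(dim_student, dim_teacher)

    def forward(self, feat_student, feat_teacher):
        # feat_*: (B, N, C) token features from the chosen block
        return F.smooth_l1_loss(self.proj(feat_student), feat_teacher.detach())
```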

3) Relational distillation: the method we propose, and the default distillation strategy in this article:


[Formula: relational distillation loss over the softmax-normalized QK and VV relation maps]
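Below is a minimal sketch of relational distillation as described in this article: the QK and VV relations are softmax-normalized similarity maps computed from one attention layer's Q, K, and V, and the student's relation maps are pushed toward the teacher's. The KL-divergence loss and matching head counts are illustrative assumptions; see the official repository for the exact implementation.

```python
import math
import torch.nn.functional as F

def relation_maps(q, k, v):
    """Softmax-normalized QK and VV relation maps.

    q, k, v: (B, num_heads, N, head_dim) taken from one attention layer.
    """
    scale = 1.0 / math.sqrt(q.shape[-1])
    qk = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)  # (B, H, N, N)
    vv = F.softmax(v @ v.transpose(-2, -1) * scale, dim=-1)  # (B, H, N, N)
    return qk, vv

def relation_distill_loss(student_qkv, teacher_qkv):
    """Match the student's QK/VV relations to the teacher's (detached).

    KL divergence is an illustrative choice of distance; student and
    teacher head counts are assumed to match here.
    """
    qk_s, vv_s = relation_maps(*student_qkv)
    qk_t, vv_t = relation_maps(*(t.detach() for t in teacher_qkv))
    loss_qk = F.kl_div(qk_s.clamp_min(1e-8).log(), qk_t, reduction="batchmean")
    loss_vv = F.kl_div(vv_s.clamp_min(1e-8).log(), vv_t, reduction="batchmean")
    return loss_qk + loss_vv
```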


3. Experiment

3.1 Main experimental results

Our method is pre-trained on ImageNet-1K, and the teacher model is also pre-trained on ImageNet-1K. We then fine-tune the pre-trained models on downstream tasks (classification, semantic segmentation). The model performance is shown in the figure:



Our method significantly outperforms previous MAE-based methods, especially for small models. Specifically, for the ultra-small model ViT-T, our method achieves a classification accuracy of 75.8%, an improvement of 4.2 points over the MAE baseline. For the small model ViT-S, we achieve 83.0% classification accuracy, 1.4 points higher than the previous best method. For the Base-size model, our method outperforms the MAE baseline and the previous best method, CAE, by 4.1 and 2.0 points, respectively.

At the same time, we also tested the robustness of the model, as shown in the figure:

[Figure: robustness evaluation on ImageNet-A and ImageNet-R]



Compared with MAE-B, TinyMIM-B improves by 6.4 and 4.6 points on ImageNet-A and ImageNet-R, respectively.

3.2 Ablation experiment

1) Distilling different relations

[Ablation: distilling different relations]



Distilling the QK and VV relations simultaneously, with a softmax applied when computing the relations, achieves the best results.

2) Different distillation strategies

[Ablation: different distillation strategies]



TinyMIM's relational distillation achieves better results than the MAE baseline, class-token distillation, and feature-map distillation, and this holds for models of various sizes.

3) Distilling intermediate layers

[Ablation: distilling different intermediate layers]



We found that distilling the eighteenth layer achieves the best results.

4. Conclusion

In this article, we proposed TinyMIM, the first model to successfully enable small models to benefit from masked image modeling (MIM) pre-training. Instead of adopting masked reconstruction as the task, we pre-train the small model by training it to mimic the relations of a large model in a knowledge distillation manner. TinyMIM's success can be attributed to a comprehensive study of the factors that may affect TinyMIM pre-training, including distillation targets, distillation inputs, and intermediate layers. Through extensive experiments, we conclude that relational distillation is superior to feature distillation and class-token distillation. With its simplicity and strong performance, we hope our method will provide a solid foundation for future research.

[1] Wei, Y., Hu, H., Xie, Z., Zhang, Z., Cao, Y., Bao, J., ... & Guo, B. (2022). Contrastive learning rivals masked image modeling in fine-tuning via feature distillation. arXiv preprint arXiv:2205.14141.
