


Paper address: https://arxiv.org/abs/2307.09283
Code address: https://github.com/THU-MIG/RepViT
RepViT performs remarkably well among mobile ViT-style architectures and shows significant advantages. Let us look at the contributions of this study.
- The article notes that lightweight ViTs generally outperform lightweight CNNs on visual tasks, mainly because their multi-head self-attention (MHSA) modules allow the model to learn global representations. However, the architectural differences between lightweight ViTs and lightweight CNNs have not been fully studied.
- In this study, the authors gradually improve the mobile-friendliness of a standard lightweight CNN (specifically MobileNetV3) by integrating the effective architectural choices of lightweight ViTs. This gives rise to a new family of pure lightweight CNNs, named RepViT. Notably, although RepViT has a MetaFormer structure, it is composed entirely of convolutions.
- Experimental results show that RepViT surpasses existing state-of-the-art lightweight ViTs in both performance and efficiency across various visual tasks, including ImageNet classification, object detection and instance segmentation on COCO-2017, and semantic segmentation on ADE20K. In particular, on ImageNet, RepViT reaches nearly 1 ms latency on an iPhone 12 with over 80% top-1 accuracy, a first for a lightweight model.
The natural question, then, is: how does one design a model with such low latency yet such high accuracy?
Method
Much like ConvNeXt, where the authors started from the ResNet50 architecture and, through rigorous theoretical and experimental analysis, arrived at an excellent pure convolutional architecture comparable to Swin-Transformer, RepViT performs targeted modifications by gradually integrating the architectural designs of lightweight ViTs into a standard lightweight CNN, namely MobileNetV3-L. In this process, the authors considered design elements at different levels of granularity and reached the optimization goal through a series of steps.
Alignment of training recipes
The paper introduces a metric for measuring latency on mobile devices and aligns the training strategy with that of the currently popular lightweight ViTs. The purpose is to ensure consistency in model training, and it involves two key aspects: latency measurement and training-strategy alignment.
Latency Measurement Index
To measure model performance on real mobile devices more accurately, the authors directly measure each model's actual latency on the device as the baseline metric. This differs from previous studies, which mainly optimize inference speed through proxies such as FLOPs or model size; these do not always reflect actual latency in mobile applications well.
Alignment of training strategy
Here, the training strategy of MobileNetV3-L is adjusted to align with other lightweight ViT models. This includes using the AdamW optimizer (the standard optimizer for ViT models), 5 epochs of warm-up, and 300 epochs of training with a cosine-annealing learning-rate schedule. Although this adjustment slightly decreases model accuracy, it guarantees a fair comparison.
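This recipe can be sketched in PyTorch. Only the values named in the text (AdamW, 5 warm-up epochs, 300 total epochs, cosine annealing) come from the paper; the learning rate, weight decay, and the tiny stand-in model are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Stand-in for the real network; lr and weight_decay are assumed values.
model = nn.Conv2d(3, 8, 3)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)

warmup_epochs, total_epochs = 5, 300
# Linear warm-up for the first 5 epochs, then cosine annealing for the rest.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-3, total_iters=warmup_epochs)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_epochs - warmup_epochs)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, [warmup, cosine], milestones=[warmup_epochs])

lrs = []
for epoch in range(total_epochs):
    optimizer.step()      # training loop body elided
    scheduler.step()      # advance the schedule once per epoch
    lrs.append(optimizer.param_groups[0]["lr"])
```

The learning rate ramps up over the first 5 epochs and then decays smoothly toward zero by epoch 300.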
Optimization of Block Design
Next, the authors explored the optimal block design based on consistent training settings. Block design is an important component in CNN architecture, and optimizing block design can help improve the performance of the network.
Separate Token mixer and channel mixer
This improvement targets the block structure of MobileNetV3-L, separating the token mixer from the channel mixer. The original MobileNetV3 block consists of a 1x1 expansion convolution, followed by a depthwise convolution and a 1x1 projection layer, with a residual connection between input and output. Building on this, RepViT moves the depthwise convolution up so that the channel mixer and the token mixer are separated. To improve performance, structural reparameterization is also introduced, giving the depthwise filters a multi-branch topology during training that is folded back into a single branch for inference. The authors thus separated the token mixer and the channel mixer within the MobileNetV3 block and named the result the RepViT block.
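The separated design can be sketched as follows. This is a minimal illustration under stated assumptions (module names, the GELU activation, and normalization placement are mine, not the authors'); the training-time multi-branch reparameterization is omitted for brevity:

```python
import torch
import torch.nn as nn

class RepViTBlockSketch(nn.Module):
    """Sketch of a block with separated mixers: a depthwise 3x3 conv
    (token mixer) runs first, then a 1x1 expand/project pair (channel
    mixer), each wrapped in its own residual connection."""
    def __init__(self, dim, expansion=2):
        super().__init__()
        # Token mixer: depthwise conv mixes spatial information per channel.
        self.token_mixer = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.BatchNorm2d(dim),
        )
        # Channel mixer: 1x1 convs mix information across channels
        # (expansion ratio 2, as chosen later in the paper).
        hidden = dim * expansion
        self.channel_mixer = nn.Sequential(
            nn.Conv2d(dim, hidden, 1),
            nn.GELU(),
            nn.Conv2d(hidden, dim, 1),
        )

    def forward(self, x):
        x = x + self.token_mixer(x)    # spatial mixing with residual
        x = x + self.channel_mixer(x)  # channel mixing with residual
        return x

x = torch.randn(1, 64, 56, 56)
y = RepViTBlockSketch(64)(x)
print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Because spatial and channel mixing each get their own residual, the structure mirrors the MetaFormer layout while remaining purely convolutional.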
Reduce the expansion ratio and increase the width
In the channel mixer, the original expansion ratio is 4, meaning the hidden dimension of the MLP block is four times the input dimension, which consumes substantial computing resources and significantly affects inference time. To alleviate this, the expansion ratio is reduced to 2, cutting parameter redundancy and latency and bringing MobileNetV3-L's latency down to 0.65 ms. Subsequently, increasing the width of the network, i.e. the number of channels at each stage, raised top-1 accuracy to 73.5%, while latency only increased to 0.89 ms.
Optimization of macro-architectural elements
In this step, the article further optimizes the performance of MobileNetV3-L on mobile devices, starting from the macro-architectural elements: the stem, the downsampling layers, the classifier, and the overall stage proportions. Optimizing these elements significantly improves model performance.
Shallow network using convolutional extractor
ViTs typically use a "patchify" operation as the stem, splitting the input image into non-overlapping patches. However, this simple stem suffers from optimization difficulties and sensitivity to the training recipe. The authors therefore adopt early convolutions instead, an approach already taken by many lightweight ViTs; by contrast, MobileNetV3-L uses a more complex stem for its 4x downsampling. With the early-convolution stem, even though the initial number of filters is increased to 24, total latency drops to 0.86 ms while top-1 accuracy rises to 73.9%.
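An early-convolution stem of this kind is commonly two 3x3 stride-2 convolutions, giving the same 4x downsampling. The 24 initial filters follow the text; the second stage's channel count and the activations are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Sketch of an early-convolution stem: two 3x3 stride-2 convs yield
# 4x spatial downsampling (224 -> 56).
stem = nn.Sequential(
    nn.Conv2d(3, 24, 3, stride=2, padding=1),   # 224 -> 112, 24 filters
    nn.BatchNorm2d(24),
    nn.GELU(),
    nn.Conv2d(24, 48, 3, stride=2, padding=1),  # 112 -> 56 (48 is assumed)
    nn.BatchNorm2d(48),
    nn.GELU(),
)

x = torch.randn(1, 3, 224, 224)
out = stem(x)
print(out.shape)  # torch.Size([1, 48, 56, 56])
```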
Deeper Downsampling Layers
In ViTs, spatial downsampling is usually implemented through a separate patch-merging layer. RepViT likewise adopts a separate, deeper downsampling layer, which increases network depth and reduces the information loss caused by resolution reduction. Specifically, the authors first use a 1x1 convolution to adjust the channel dimension, then connect the input and output of two 1x1 convolutions through a residual to form a feed-forward network. They also add a RepViT block in front to further deepen the downsampling layer; this step improves top-1 accuracy to 75.4% at a latency of 0.96 ms.
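A rough sketch of such a deeper downsampling layer follows. The stride-2 depthwise convolution for the spatial reduction is my assumption (the text only describes the channel-adjusting 1x1 conv and the residual feed-forward network), and the preceding RepViT block is elided:

```python
import torch
import torch.nn as nn

class DownsampleSketch(nn.Module):
    """Illustrative deeper downsampling layer: a stride-2 depthwise conv
    halves the resolution (assumed detail), a 1x1 conv adjusts channels,
    and a residual feed-forward net of two 1x1 convs adds depth."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # 2x spatial downsampling, applied per channel.
        self.spatial = nn.Conv2d(in_dim, in_dim, 3, stride=2, padding=1,
                                 groups=in_dim)
        # 1x1 conv adjusts the channel dimension.
        self.proj = nn.Conv2d(in_dim, out_dim, 1)
        # Feed-forward network: two 1x1 convs with a residual connection.
        self.ffn = nn.Sequential(
            nn.Conv2d(out_dim, out_dim * 2, 1),
            nn.GELU(),
            nn.Conv2d(out_dim * 2, out_dim, 1),
        )

    def forward(self, x):
        x = self.proj(self.spatial(x))
        return x + self.ffn(x)

x = torch.randn(1, 48, 56, 56)
down = DownsampleSketch(48, 96)(x)
print(down.shape)  # torch.Size([1, 96, 28, 28])
```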
Simpler classifier
In lightweight ViTs, the classifier usually consists of a global average pooling layer followed by a linear layer. In contrast, MobileNetV3-L uses a more complex classifier. Because the final stage now has more channels, the authors replaced it with the simple version: a global average pooling layer and a linear layer. This step reduced latency to 0.77 ms, with a top-1 accuracy of 74.8%.
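The simple classifier is just pooling plus one linear layer; the final-stage channel count used here (384) is an illustrative assumption:

```python
import torch
import torch.nn as nn

# Global average pooling followed by a single linear layer.
classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),  # (B, C, H, W) -> (B, C, 1, 1)
    nn.Flatten(),             # -> (B, C)
    nn.Linear(384, 1000),     # project to the 1000 ImageNet classes
)

features = torch.randn(1, 384, 7, 7)  # final-stage feature map (assumed)
logits = classifier(features)
print(logits.shape)  # torch.Size([1, 1000])
```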
Overall stage proportion
The stage proportion is the ratio of the number of blocks across the stages, and thus indicates how computation is distributed among them. The paper chooses a more optimal stage ratio of 1:1:7:1, then deepens the network to 2:2:14:2 to achieve a deeper layout. This step increases top-1 accuracy to 76.9% at a latency of 1.02 ms.
Adjustment of micro-design
Next, RepViT adjusts the lightweight CNN through layer-by-layer micro design, which includes selecting appropriate convolution kernel sizes and optimizing the placement of the squeeze-and-excitation (SE) layers. Both significantly improve model performance.
Selection of convolution kernel size
It is well known that the performance and latency of CNNs are usually affected by the convolution kernel size. For example, to model long-range context dependencies as MHSA does, ConvNeXt uses large convolution kernels, yielding significant performance improvements. However, large kernels are not mobile-friendly due to their computational complexity and memory access cost. MobileNetV3-L mainly uses 3x3 convolutions, with 5x5 convolutions in some blocks. The authors replaced the latter with 3x3 convolutions, reducing latency to 1.00 ms while maintaining a top-1 accuracy of 76.9%.
Position of SE layer
One advantage of self-attention modules over convolutions is the ability to adjust weights based on the input, known as a data-driven property. As a channel-attention module, the SE layer compensates for convolutions' lack of this data-driven property, leading to better performance. MobileNetV3-L adds SE layers in some blocks, mainly in the last two stages. However, lower-resolution stages gain smaller accuracy benefits from the global average pooling inside SE than higher-resolution stages do. The authors therefore designed a strategy that uses the SE layer in a cross-block manner across all stages, maximizing the accuracy gain with the smallest latency increment. This step improved top-1 accuracy to 77.4% while reducing latency to 0.87 ms. (Baidu reached a similar conclusion in earlier experiments: SE layers are more effective when placed closer to the deep layers.)
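The SE layer itself is the standard squeeze-and-excitation design, and the cross-block placement can be sketched as alternating SE with plain blocks within a stage. The reduction ratio and the exact alternation pattern are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SE(nn.Module):
    """Squeeze-and-excitation: global pooling (squeeze) followed by a
    two-layer bottleneck whose sigmoid output rescales each channel
    based on the input -- the data-driven property discussed above."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),              # squeeze: global context
            nn.Conv2d(dim, dim // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(dim // reduction, dim, 1),
            nn.Sigmoid(),                         # per-channel gates in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)  # excite: input-dependent reweighting

# Cross-block placement: SE in every other block of a stage
# (the even-index pattern here is an assumption for illustration).
dim, num_blocks = 64, 4
stage = [SE(dim) if i % 2 == 0 else nn.Identity()
         for i in range(num_blocks)]
x = torch.randn(1, dim, 14, 14)
for block in stage:
    x = block(x)
print(x.shape)  # torch.Size([1, 64, 14, 14])
```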
Network Architecture
Finally, by integrating the above improvements, we obtain the overall architecture of RepViT, which has multiple variants such as RepViT-M1/M2/M3. As usual, the variants are distinguished mainly by the number of channels and blocks per stage.
Experiment
Image Classification
The above is a detailed look at RepViT, Tsinghua's latest open-source mobile neural network architecture.


