Takes only 1.3ms! Tsinghua's latest open-source mobile neural network architecture RepViT
Paper address: https://arxiv.org/abs/2307.09283
Code address: https://github.com/THU-MIG/RepViT
RepViT performs very well among mobile architectures and shows significant advantages. Below, we explore the contributions of this study.
Lightweight ViTs generally outperform lightweight CNNs on visual tasks, mainly thanks to their multi-head self-attention (MHSA) modules, which allow the model to learn global representations. However, the architectural differences between lightweight ViTs and lightweight CNNs have not been fully studied. In this work, the authors revisit the efficient design of lightweight CNNs, starting from MobileNetV3, by gradually integrating the effective architectural choices of lightweight ViTs. This gives rise to a new family of pure lightweight CNNs, named RepViT. Notably, although RepViT has a MetaFormer-style structure, it is composed entirely of convolutions.
RepViT surpasses existing state-of-the-art lightweight ViTs in both performance and efficiency across various visual tasks, including classification on ImageNet, object detection and instance segmentation on COCO-2017, and semantic segmentation on ADE20K. In particular, on ImageNet, RepViT achieves a latency of nearly 1ms on an iPhone 12 with a Top-1 accuracy of over 80%, the first time a lightweight model has reached this milestone. The question everyone should be asking next is: how was a model with such low latency yet such high accuracy designed?
Just as the authors of ConvNeXt started from the ResNet-50 architecture and, through rigorous theoretical and experimental analysis, designed an excellent pure convolutional network comparable to Swin-Transformer, RepViT performs targeted transformations by gradually integrating the architectural designs of lightweight ViTs into a standard lightweight CNN, namely MobileNetV3-L. In this process, the authors considered design elements at different levels of granularity and reached the optimization goal through a series of steps.
The paper first introduces a metric for measuring latency on mobile devices and aligns the training strategy with that of the currently popular lightweight ViTs. The purpose is to ensure consistency in model training, which involves two key points: latency measurement and training-strategy alignment.
Latency measurement metric
To measure the performance of the model on real mobile devices more accurately, the authors chose to directly measure the model's actual latency on the device and use it as the baseline metric. This differs from previous studies, which mainly optimize the model's inference speed through proxies such as FLOPs or model size; these do not always reflect the actual latency in mobile applications well.
Alignment of training strategy
Here, the training strategy of MobileNetV3-L is adjusted to align with other lightweight ViT models. This includes using the AdamW optimizer (a staple of ViT training), 5 epochs of warm-up, and 300 epochs of training with a cosine-annealing learning-rate schedule. Although this adjustment causes a slight drop in model accuracy, it guarantees a fair comparison.
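The aligned recipe above (AdamW, 5 warm-up epochs, 300 epochs with cosine annealing) can be sketched in PyTorch as follows; the stand-in model and the exact learning rate and weight decay values are illustrative assumptions, not the paper's hyperparameters.

```python
# Sketch of the aligned training recipe: AdamW + linear warm-up + cosine annealing.
import torch

model = torch.nn.Linear(8, 8)  # stand-in for MobileNetV3-L
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)

warmup_epochs, total_epochs = 5, 300
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-2, total_iters=warmup_epochs)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_epochs - warmup_epochs)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_epochs])

lrs = []
for epoch in range(total_epochs):
    # ... one epoch of training would go here ...
    lrs.append(optimizer.param_groups[0]["lr"])
    scheduler.step()
```

The learning rate ramps up for the first 5 epochs, then decays along a cosine curve for the remaining 295.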
Next, the authors explored the optimal block design under these consistent training settings. Block design is an important component of CNN architectures, and optimizing it helps improve the network's performance.
Separate Token mixer and channel mixer
Here the MobileNetV3-L block structure is improved to separate the token mixer and channel mixer. The original MobileNetV3 block consists of a 1x1 expansion convolution, followed by a depthwise convolution and a 1x1 projection layer, with a residual connection between input and output. On this basis, RepViT moves the depthwise convolution up front so that the channel mixer and token mixer are separated. To improve performance, structural re-parameterization is also introduced, giving the depthwise filters a multi-branch topology during training. Finally, the authors successfully separate the token mixer and channel mixer within the MobileNetV3 block and name the result the RepViT block.
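A minimal sketch of the idea, not the official implementation: a depthwise 3x3 convolution acts as the token mixer, with an extra depthwise 1x1 branch that can be re-parameterized into the 3x3 kernel at inference, followed by a 1x1 expand / 1x1 project channel mixer. Channel counts, activation, and the omitted normalization layers are assumptions of this sketch.

```python
# Sketch of a RepViT-style block: separated token mixer and channel mixer.
import torch
import torch.nn as nn

class RepViTBlockSketch(nn.Module):
    def __init__(self, dim, expansion=2):
        super().__init__()
        # Token mixer: multi-branch depthwise convs; the 1x1 branch exists only
        # for training and can be fused into the 3x3 kernel at inference
        # (structural re-parameterization).
        self.dw3x3 = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.dw1x1 = nn.Conv2d(dim, dim, 1, groups=dim)
        # Channel mixer: pointwise expand / project.
        hidden = dim * expansion
        self.channel_mixer = nn.Sequential(
            nn.Conv2d(dim, hidden, 1), nn.GELU(), nn.Conv2d(hidden, dim, 1))

    def forward(self, x):
        x = x + self.dw3x3(x) + self.dw1x1(x)  # token mixing with residual
        return x + self.channel_mixer(x)       # channel mixing with residual

x = torch.randn(1, 64, 56, 56)
y = RepViTBlockSketch(64)(x)
```

Because spatial (token) mixing now happens before the pointwise layers, the two roles are cleanly decoupled, mirroring the MetaFormer structure with convolutions only.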
Reduce the expansion ratio and increase the width
In the channel mixer, the original expansion ratio is 4, meaning the hidden dimension of the MLP block is four times the input dimension. This consumes a large amount of computing resources and significantly affects inference time. To alleviate this, the expansion ratio is reduced to 2, cutting parameter redundancy and latency and bringing the latency of MobileNetV3-L down to 0.65ms. Subsequently, by increasing the width of the network, i.e. the number of channels at each stage, the Top-1 accuracy rises to 73.5%, while latency only increases to 0.89ms!
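A back-of-the-envelope calculation shows why halving the expansion ratio matters: the channel mixer's two 1x1 convolutions scale linearly with the hidden width, so going from ratio 4 to ratio 2 halves its parameters. The 128-channel block below is purely illustrative.

```python
# Parameter count of the two 1x1 convolutions in the channel mixer
# (weights only, biases ignored) at different expansion ratios.
def channel_mixer_params(dim, expansion):
    hidden = dim * expansion
    return dim * hidden + hidden * dim  # expand conv + project conv

print(channel_mixer_params(128, 4))  # original expansion ratio 4
print(channel_mixer_params(128, 2))  # reduced expansion ratio 2
```

The freed-up budget is what allows the channel counts per stage to be increased afterwards.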
In this step, the paper further optimizes the performance of MobileNetV3-L on mobile devices, starting from the macro-architectural elements: the stem, the downsampling layers, the classifier, and the overall stage ratio. Optimizing these macro elements can significantly improve the model's performance.
Shallow stem using early convolutions
ViTs typically use a "patchify" operation as the stem, splitting the input image into non-overlapping patches. However, this approach causes problems with training optimization and sensitivity to the training recipe. The authors therefore adopted early convolutions instead, an approach already taken by many lightweight ViTs, replacing MobileNetV3-L's more complex original stem for 4x downsampling. Although the initial number of filters increases to 24, total latency drops to 0.86ms while Top-1 accuracy rises to 73.9%.
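An "early convolutions" stem of this kind can be sketched as two stride-2 3x3 convolutions for an overall 4x downsampling, ending at the 24 filters mentioned in the text; the intermediate width and the normalization/activation choices are assumptions of this sketch.

```python
# Sketch of an early-convolutions stem: two stride-2 3x3 convs, 4x downsampling.
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 12, 3, stride=2, padding=1), nn.BatchNorm2d(12), nn.GELU(),
    nn.Conv2d(12, 24, 3, stride=2, padding=1), nn.BatchNorm2d(24), nn.GELU())

x = torch.randn(1, 3, 224, 224)
out = stem(x)  # spatial size 224 -> 112 -> 56
```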
Deeper downsampling layers
In ViTs, spatial downsampling is usually implemented by a separate patch-merging layer. Here, too, a separate and deeper downsampling layer is adopted, which increases network depth and reduces the information loss caused by resolution reduction. Specifically, the authors first use a 1x1 convolution to adjust the channel dimension, then form a feed-forward network by residually connecting the input and output of two 1x1 convolutions. They also add a RepViT block in front to further deepen the downsampling layer, a step that improves Top-1 accuracy to 75.4% at a latency of 0.96ms.
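A sketch of such a deeper downsampling layer is below. The stride-2 depthwise 3x3 convolution used for the actual spatial reduction is an assumption of this sketch, as are the hidden width and activation; the 1x1 channel adjustment and the residual feed-forward part follow the description above.

```python
# Sketch of a deeper downsampling layer: spatial reduction, 1x1 channel
# adjustment, and a residual feed-forward network of two 1x1 convolutions.
import torch
import torch.nn as nn

class DownsampleSketch(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Stride-2 depthwise conv halves the resolution (assumed detail).
        self.spatial = nn.Conv2d(in_dim, in_dim, 3, stride=2, padding=1,
                                 groups=in_dim)
        self.proj = nn.Conv2d(in_dim, out_dim, 1)  # adjust channel dimension
        self.ffn = nn.Sequential(                  # residual feed-forward part
            nn.Conv2d(out_dim, out_dim * 2, 1), nn.GELU(),
            nn.Conv2d(out_dim * 2, out_dim, 1))

    def forward(self, x):
        x = self.proj(self.spatial(x))
        return x + self.ffn(x)

y = DownsampleSketch(24, 48)(torch.randn(1, 24, 56, 56))
```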
Simpler classifier
In lightweight ViTs, the classifier usually consists of a global average pooling layer followed by a linear layer. By contrast, MobileNetV3-L uses a more complex classifier. Since the final stage now has more channels, the authors replaced it with this simple classifier, i.e. a global average pooling layer and a linear layer. This step reduced latency to 0.77ms with a Top-1 accuracy of 74.8%.
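The simpler head is just pooling plus a single linear layer; the channel count and number of classes below are illustrative.

```python
# Sketch of the simple classifier head: global average pooling + linear layer.
import torch
import torch.nn as nn

head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),  # global average pooling -> (N, C, 1, 1)
    nn.Flatten(),             # -> (N, C)
    nn.Linear(384, 1000))     # -> (N, num_classes)

logits = head(torch.randn(2, 384, 7, 7))
```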
Overall stage proportion
The stage ratio represents the proportion of blocks across the different stages and thus indicates how computation is distributed among them. The paper chooses a more optimal stage ratio of 1:1:7:1 and then doubles the network depth to 2:2:14:2, achieving a deeper layout. This step raises Top-1 accuracy to 76.9% at a latency of 1.02ms.
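The relationship between the chosen ratio and the final block counts is simply a uniform depth scaling:

```python
# The 1:1:7:1 stage ratio scaled to the final 2:2:14:2 block counts.
stage_ratio = [1, 1, 7, 1]
depth_multiplier = 2
blocks_per_stage = [r * depth_multiplier for r in stage_ratio]
print(blocks_per_stage)  # [2, 2, 14, 2]
```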
Next, RepViT adjusts the lightweight CNN through layer-by-layer micro design, which includes selecting an appropriate convolution kernel size and optimizing the placement of squeeze-and-excitation (SE) layers. Both methods significantly improve model performance.
Selection of convolution kernel size
It is well known that the performance and latency of CNNs are usually affected by the convolution kernel size. For example, to model long-range context dependencies like MHSA does, ConvNeXt uses large convolution kernels, with significant performance gains. However, large kernels are not mobile-friendly because of their computational complexity and memory-access cost. MobileNetV3-L mostly uses 3x3 convolutions, with 5x5 convolutions in some blocks. The authors replaced these with 3x3 convolutions, reducing latency to 1.00ms while maintaining a Top-1 accuracy of 76.9%.
Position of SE layer
One advantage of self-attention modules over convolutions is that they can adjust their weights according to the input, a data-driven property. As a channel-attention module, the SE layer can compensate for convolutions' lack of this data-driven property and thereby improve performance. MobileNetV3-L adds SE layers in some blocks, mainly in the last two stages. However, the lower-resolution stages gain smaller accuracy improvements from SE's global average pooling than the higher-resolution stages do. The authors therefore designed a strategy of using the SE layer in a cross-block manner across all stages, maximizing the accuracy gain for a minimal latency increase. This step raised Top-1 accuracy to 77.4% while reducing latency to 0.87ms. [In fact, Baidu already ran experiments on this point and reached the same conclusion long ago: SE layers are more effective when placed closer to the deep layers.]
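A sketch of an SE layer and the cross-block placement idea is below; the reduction ratio and the "every other block" pattern are illustrative assumptions of how such a cross-block strategy could look.

```python
# Sketch of a squeeze-and-excitation (SE) layer and cross-block placement.
import torch
import torch.nn as nn

class SELayer(nn.Module):
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global context
            nn.Conv2d(dim, dim // reduction, 1), nn.ReLU(),
            nn.Conv2d(dim // reduction, dim, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)                            # excite: reweight channels

# Cross-block usage: enable SE on alternating blocks within every stage,
# rather than only in the last stages (pattern is illustrative).
use_se = [i % 2 == 0 for i in range(14)]  # e.g. for a stage with 14 blocks

y = SELayer(64)(torch.randn(1, 64, 28, 28))
```

Because the SE gate's weights depend on the pooled input, it adds the data-driven reweighting that plain convolutions lack, at a small per-block cost.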
Finally, by integrating the above improvements, we obtain the overall architecture of RepViT, which comes in multiple variants such as RepViT-M1/M2/M3. As usual, the variants are distinguished mainly by the number of channels and blocks per stage.