DeepMind: Who said convolutional networks are inferior to ViT?
This paper evaluates scaled-up NFNets and challenges the idea that ConvNets perform worse than ViTs on large-scale problems
The early success of deep learning can be attributed to the development of convolutional neural networks (ConvNets). ConvNets dominated computer vision benchmarks for nearly a decade. In recent years, however, they have increasingly been replaced by Vision Transformers (ViTs).
Many people believe that ConvNets perform well on small or medium-sized datasets, but cannot compete with ViTs on web-scale datasets.
At the same time, the CV community has moved from evaluating the performance of randomly initialized networks on specific datasets (such as ImageNet) to evaluating networks pre-trained on large, general datasets collected from the web. This raises an important question: do Vision Transformers outperform ConvNet architectures pre-trained with similar computational budgets?
In this article, researchers from Google DeepMind studied this question. By pre-training NFNet models of different scales on the JFT-4B dataset, they obtained ImageNet performance similar to that of ViTs.
Paper link: https://arxiv.org/pdf/2310.16764.pdf
The paper considers pre-training compute budgets between 0.4k and 110k TPU-v4 core hours, training a series of networks of increasing depth and width from the NFNet model family. The researchers found a log-log scaling law between held-out loss and compute budget.
For example, after pre-training NFNets on JFT-4B with budgets ranging from 0.4k to 110k TPU-v4 core hours and then fine-tuning, the largest model achieved 90.4% Top-1 accuracy on ImageNet, competitive with ViT models pre-trained under comparable computational budgets.
In short, by evaluating scaled-up NFNets, this paper challenges the notion that ConvNets perform worse than ViTs on large-scale datasets. Given sufficient data and computation, ConvNets remain competitive; compute and data budgets matter more than the choice of architecture.
After seeing this research, Turing Award winner Yann LeCun commented: "For a given amount of compute, ViTs and ConvNets are computationally equivalent. Although ViTs have achieved impressive success in computer vision, in my opinion there is no strong evidence that a pre-trained ViT outperforms a pre-trained ConvNet when evaluated fairly."
However, one commenter responded to LeCun that ViTs may still hold an advantage in multimodal models.
Researchers from Google DeepMind said that ConvNets will never disappear.
Next let’s look at the specific content of the paper.
Pre-trained NFNets follow scaling laws
The study trained a series of NFNet models of varying depth and width on JFT-4B.
As shown in Figure 2, validation loss is linearly related to the compute budget on a log-log plot, consistent with the log-log scaling laws observed when using Transformers for language modeling. As the compute budget increases, the optimal model size and the optimal epoch budget (for achieving the lowest validation loss) also increase.
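A power-law scaling relation of this kind can be illustrated with a quick curve fit. The numbers below are hypothetical, not taken from the paper; the sketch only shows how a scaling law L(C) = a·C^b becomes a straight line in log-log space:

```python
# Illustrative sketch (hypothetical data, not from the paper): fit a
# power law L(C) = a * C^b to (compute, held-out loss) pairs.
import numpy as np

# Hypothetical compute budgets (TPU core hours) and held-out losses.
compute = np.array([0.4e3, 1.6e3, 6.4e3, 26e3, 110e3])
loss = np.array([2.10, 1.95, 1.82, 1.70, 1.60])

# A power law is linear in log-log space: log L = log a + b * log C,
# so an ordinary least-squares line fit recovers the exponent b.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

print(f"fitted exponent b = {b:.3f}")  # negative: loss falls as compute grows
predicted = a * compute ** b           # points along the fitted line
```

The same fit can then be used to extrapolate the loss expected at larger compute budgets, which is how scaling-law plots like Figure 2 are typically read.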
The chart below shows the optimal learning rate (i.e., the one minimizing validation loss) observed for three models over a range of epoch budgets. The researchers found that for lower epoch budgets, the NFNet family of models all showed a similar optimal learning rate of around 1.6. However, the optimal learning rate decreases as the epoch budget increases, and it decreases faster for larger models. The researchers note that the optimal learning rate can be assumed to fall slowly and monotonically with increasing model size and epoch budget, so it can be tuned efficiently across trials.
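The tuning protocol described above amounts to sweeping a small learning-rate grid per (model, epoch budget) and keeping the value with the lowest validation loss. The sketch below is hypothetical: `train_and_eval` is a toy stand-in for a real training run, built so that the optimal learning rate shrinks as the epoch budget grows, mirroring the trend the paper reports:

```python
# Hypothetical sketch of learning-rate selection; train_and_eval is a toy
# proxy, not the paper's training code.
def train_and_eval(lr, epochs):
    # Toy validation loss whose minimum sits near lr = 1.6 / epochs**0.25,
    # so the best learning rate decreases as the epoch budget increases.
    best_lr = 1.6 / epochs ** 0.25
    return (lr - best_lr) ** 2 + 1.0 / epochs

def pick_optimal_lr(epochs, grid=(0.4, 0.8, 1.2, 1.6, 2.0)):
    # Keep the grid point with the lowest validation loss.
    return min(grid, key=lambda lr: train_and_eval(lr, epochs))

print(pick_optimal_lr(1))   # → 1.6 (small epoch budget, larger optimal LR)
print(pick_optimal_lr(16))  # → 0.8 (large epoch budget, smaller optimal LR)
```

The monotonic trend is what makes this cheap in practice: results from small budgets bound the search range for larger ones.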
It should be noted that some of the pre-trained models in Figure 2 performed worse than expected. The research team believes the reason is that, if a training run is preempted and restarted, the data-loading pipeline cannot guarantee that each training sample is seen exactly once per epoch; a run restarted many times may therefore undersample some training examples.
NFNet vs ViT
Experiments on ImageNet show that, after fine-tuning, NFNets and Vision Transformers perform comparably.
Specifically, the study fine-tuned the pre-trained NFNets on ImageNet and plotted pre-training compute against Top-1 error, as shown in Figure 1 above.
As the compute budget increases, ImageNet Top-1 accuracy continues to improve. The most expensive pre-trained model is NFNet-F7+, pre-trained for 8 epochs, which reaches 90.3% ImageNet Top-1 accuracy; pre-training and fine-tuning require approximately 110k and 1.6k TPU-v4 core hours, respectively. Furthermore, introducing repeated augmentation during fine-tuning pushes Top-1 accuracy to 90.4%. NFNets benefit greatly from large-scale pre-training.
Despite the obvious differences between the NFNet and ViT architectures, pre-trained NFNets and pre-trained ViTs are comparable in performance. For example, after pre-training on JFT-3B for 210k TPU-v3 core hours, ViT-g/14 achieved 90.2% Top-1 accuracy on ImageNet, while after pre-training on JFT-3B for more than 500k TPU-v3 core hours, ViT-G/14 achieved 90.45%.
The paper evaluates the pre-training speed of these models on TPU-v4 and estimates that ViT-g/14 would require 120k TPU-v4 core hours to pre-train, ViT-G/14 would require 280k, and SoViT-400m/14 would require 130k. These estimates are used to compare the pre-training efficiency of ViTs and NFNets in Figure 1. The study notes that NFNets are optimized for TPU-v4 and perform less well when evaluated on other devices.
Finally, the paper notes that the pre-trained checkpoints achieving the lowest validation loss on JFT-4B do not always achieve the highest Top-1 accuracy on ImageNet after fine-tuning. In particular, under a fixed pre-training compute budget, the fine-tuning regime tends to favor a slightly larger model and a slightly smaller epoch budget. Intuitively, larger models have greater capacity and can therefore adapt better to new tasks. In some cases, a slightly larger learning rate during pre-training also leads to better performance after fine-tuning.