
DeepMind: Who said convolutional networks are inferior to ViT?

Nov 02, 2023, 09:13 AM

This paper evaluates scaled-up NFNets and challenges the idea that ConvNets perform worse than ViTs on large-scale problems.

The early success of deep learning can be attributed to the development of convolutional neural networks (ConvNets). ConvNets have dominated computer vision benchmarks for nearly a decade. In recent years, however, they have been increasingly replaced by ViTs (Vision Transformers).

Many people believe that ConvNets perform well on small or medium-sized datasets but cannot compete with ViTs on larger, web-scale datasets.

At the same time, the CV community has moved from evaluating randomly initialized networks on specific datasets (such as ImageNet) to evaluating networks pre-trained on large general datasets collected from the web. This raises an important question: do pre-trained Vision Transformers outperform pre-trained ConvNet architectures under similar computational budgets?

In this paper, researchers from Google DeepMind studied this question. By pre-training NFNet models of various scales on the JFT-4B dataset, they obtained performance on ImageNet comparable to that of ViTs.


Paper link address: https://arxiv.org/pdf/2310.16764.pdf

The paper considers pre-training compute budgets between 0.4k and 110k TPU-v4 core hours and trains a series of networks of increasing depth and width from the NFNet model family. The researchers find a log-log scaling law between held-out loss and compute budget.

For example, for NFNets pre-trained on JFT-4B with budgets scaled from 0.4k to 110k TPU-v4 core hours, the largest model reaches 90.4% ImageNet Top-1 accuracy after fine-tuning, competitive with pre-trained ViT models under the same computational budget.


In short, by evaluating scaled-up NFNets, this paper challenges the notion that ConvNets perform worse than ViTs on large-scale datasets: given sufficient data and compute, ConvNets remain competitive, and the available data and compute matter more than the choice of architecture.

After seeing this research, Turing Award winner Yann LeCun commented: "Compute being equal, ViT and ConvNets are comparable. Although ViTs have achieved impressive success in computer vision, in my opinion there is no strong evidence that pre-trained ViTs outperform pre-trained ConvNets when evaluated fairly."


However, some commenters pushed back on LeCun's remark, arguing that ViTs may still hold a research advantage in multimodal models.

Researchers from Google DeepMind, for their part, said that ConvNets will never disappear.


Next, let's look at the details of the paper.

Pre-trained NFNets follow scaling laws

The authors trained a series of NFNet models of varying depth and width on JFT-4B.

As shown in Figure 2, validation loss falls linearly with the compute budget on log-log axes, consistent with the log-log scaling laws observed when using Transformers for language modeling. As the compute budget increases, the optimal model size and optimal epoch budget (those achieving the lowest validation loss) also increase.
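To make the scaling-law claim concrete, here is a minimal sketch of fitting such a law: on log-log axes the relationship is a straight line, so an ordinary linear regression in log space recovers the exponent. The data points below are made up for illustration; they are not the paper's measurements.

```python
import numpy as np

# Hypothetical (compute, held-out loss) pairs for illustration only --
# these are NOT the paper's measurements. Compute in TPU-v4 core hours.
compute = np.array([4e2, 1.6e3, 6.4e3, 2.56e4, 1.1e5])
loss = np.array([2.10, 1.85, 1.64, 1.45, 1.30])

# A log-log scaling law has the form loss ~= a * C**(-b), i.e. a straight
# line in log space: log(loss) = log(a) - b * log(C). Fit it by linear
# regression on the log-transformed data.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

print(f"fitted law: loss ~= {a:.2f} * C^(-{b:.3f})")
# Extrapolating beyond the fitted range should be done with caution.
print(f"predicted held-out loss at C = 2e5: {a * (2e5) ** (-b):.3f}")
```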


The chart below shows the optimal learning rate (i.e., the one minimizing validation loss) observed for three of the models across a range of epoch budgets. The researchers found that at small epoch budgets, all models in the NFNet family showed a similar optimal learning rate of roughly 1.6. However, the optimal learning rate falls as the epoch budget increases, and falls faster for larger models. They note that since the optimal learning rate decreases slowly and monotonically as model size and epoch budget increase, it can be tuned efficiently across trials, as sketched below.
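This monotonicity is what makes the tuning cheap in practice. The sketch below is a toy illustration of the idea, not the paper's actual procedure: each configuration's sweep is centered on the best learning rate found for the previous, smaller budget (run_config is a hypothetical stand-in that simulates a validation-loss surface).

```python
def run_config(epochs: int, lr: float) -> float:
    """Toy stand-in for a full pre-training run. Returns a fake validation
    loss whose optimal learning rate shrinks as the epoch budget grows,
    mimicking the trend reported in the paper."""
    optimal_lr = 1.6 / (1.0 + 0.2 * epochs)
    return (lr - optimal_lr) ** 2 + 1.0 / epochs

def sweep_learning_rate(evaluate, center_lr, factors=(0.5, 1.0, 2.0)):
    """Try a few learning rates around center_lr; return the best one."""
    results = {center_lr * f: evaluate(center_lr * f) for f in factors}
    return min(results, key=results.get)

# Because the optimum moves slowly and monotonically, each sweep can be
# centered on the best learning rate from the previous (smaller) budget.
best_lr = 1.6  # similar optimum observed across models at small epoch budgets
for epochs in (1, 2, 4, 8):
    best_lr = sweep_learning_rate(lambda lr: run_config(epochs, lr), best_lr)
    print(f"epoch budget {epochs}: best LR ~= {best_lr:.2f}")
```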


It should be noted that some of the pre-trained models in Figure 2 performed worse than expected. The research team attributes this to the fact that, when a training run is preempted and restarted, the data-loading pipeline does not guarantee that each training sample is seen exactly once per epoch; if a run is restarted many times, some training samples may be undersampled.
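One common remedy for this failure mode (a minimal sketch of the general technique, not DeepMind's actual pipeline) is to derive each epoch's shuffle deterministically from a seed and checkpoint a cursor into it, so a restarted run resumes mid-epoch instead of re-drawing samples:

```python
import numpy as np

class ResumableSampler:
    """Deterministic per-epoch shuffle with a resumable cursor.

    The permutation for each epoch depends only on (seed, epoch), so a
    preempted run that restores (epoch, cursor) from a checkpoint sees
    every sample exactly once per epoch, however often it restarts.
    """

    def __init__(self, num_samples: int, seed: int = 0):
        self.num_samples = num_samples
        self.seed = seed
        self.epoch = 0
        self.cursor = 0

    def _permutation(self) -> np.ndarray:
        # Re-derived from (seed, epoch) each time: no state to lose.
        rng = np.random.default_rng((self.seed, self.epoch))
        return rng.permutation(self.num_samples)

    def next_batch(self, batch_size: int) -> np.ndarray:
        perm = self._permutation()
        batch = perm[self.cursor:self.cursor + batch_size]
        self.cursor += len(batch)
        if self.cursor >= self.num_samples:  # epoch finished
            self.epoch, self.cursor = self.epoch + 1, 0
        return batch

    def state_dict(self) -> dict:  # save alongside the model checkpoint
        return {"epoch": self.epoch, "cursor": self.cursor}

    def load_state_dict(self, state: dict) -> None:  # restore after preemption
        self.epoch, self.cursor = state["epoch"], state["cursor"]
```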

NFNet vs ViT

Experiments on ImageNet show that, after fine-tuning, NFNets perform comparably to Vision Transformers.

Specifically, the study fine-tuned the pre-trained NFNets on ImageNet and plotted Top-1 error against pre-training compute, as shown in Figure 1 above.

As the compute budget increases, ImageNet Top-1 accuracy continues to improve. The most expensive model, an NFNet-F7 pre-trained for 8 epochs, reaches 90.3% ImageNet Top-1 accuracy; pre-training and fine-tuning take roughly 110k and 1.6k TPU-v4 core hours, respectively. Adding repeated augmentation during fine-tuning raises Top-1 accuracy to 90.4%. NFNets clearly benefit from large-scale pre-training.
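For readers who want to try something similar at a small scale, here is a minimal fine-tuning sketch using the publicly available NFNet checkpoints in the timm library. This is not the paper's codebase, and dm_nfnet_f0 is a small public model, not the JFT-4B pre-trained F7; all hyperparameters here are illustrative.

```python
import timm
import torch
from torch import nn, optim

# Illustrative only: timm's "dm_nfnet_f0" is a small public NFNet
# checkpoint, not the paper's JFT-4B pre-trained F7 (which is not released).
model = timm.create_model("dm_nfnet_f0", pretrained=True, num_classes=1000)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one fine-tuning step on a batch and return the loss value."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for an ImageNet data loader.
images = torch.randn(2, 3, 192, 192)
labels = torch.randint(0, 1000, (2,))
print(f"loss: {fine_tune_step(images, labels):.3f}")
```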

Despite the obvious differences between the two architectures, pre-trained NFNets and pre-trained ViTs perform comparably. For example, ViT-g/14 reaches 90.2% Top-1 accuracy on ImageNet after pre-training on JFT-3B for 210k TPU-v3 core hours, while ViT-G/14 reaches 90.45% after pre-training on JFT-3B for more than 500k TPU-v3 core hours.

The paper also measures the pre-training speed of these models on TPU-v4 and estimates that ViT-g/14 would need 120k TPU-v4 core hours to pre-train, ViT-G/14 would need 280k, and SoViT-400m/14 would need 130k. These estimates are used to compare the pre-training efficiency of ViT and NFNet in Figure 1. The study notes that NFNets are optimized for TPU-v4 and perform less well when evaluated on other devices.
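Putting the quoted figures side by side makes the efficiency comparison easy to see. This is a back-of-the-envelope sketch using only the numbers reported above; SoViT-400m/14's accuracy is not quoted in this article, so it is left blank.

```python
# Back-of-the-envelope comparison using only the figures quoted above
# (TPU-v4 core hours for pre-training, ImageNet Top-1 after fine-tuning).
models = {
    "NFNet-F7 (8 epochs)":  (110_000, 90.3),
    "ViT-g/14 (est.)":      (120_000, 90.2),
    "SoViT-400m/14 (est.)": (130_000, None),  # accuracy not quoted above
    "ViT-G/14 (est.)":      (280_000, 90.45),
}
for name, (core_hours, top1) in models.items():
    acc = f"{top1:.2f}% Top-1" if top1 is not None else "Top-1 not quoted"
    print(f"{name:22s} {core_hours:>7,d} TPU-v4 core hours, {acc}")
```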

Finally, the paper notes that the pre-trained checkpoints that achieve the lowest validation loss on JFT-4B do not always achieve the highest Top-1 accuracy on ImageNet after fine-tuning. In particular, under a fixed pre-training compute budget, the fine-tuning regime tends to favor a slightly larger model and a slightly smaller epoch budget. Intuitively, larger models have more capacity and therefore adapt better to new tasks. In some cases, a slightly larger learning rate during pre-training also yields better performance after fine-tuning.
