


Does the future of CV hinge on these 68 images? Google Brain takes a deep look at ImageNet: even the top models get them all wrong
For the past decade, ImageNet has been the "barometer" of computer vision: a jump in accuracy on it usually signals that a new technique has arrived.
"Brushing the list" has always been the driving force for model innovation, pushing the model's Top-1 accuracy to 90%, which is higher than humans.
But is the ImageNet dataset really as useful as we think?
Many papers have questioned ImageNet on grounds such as data coverage, bias, and label completeness.
Most importantly, is a model's reported 90% accuracy actually accurate?
Recently, researchers from Google Brain and the University of California, Berkeley re-examined the predictions of several SOTA models and found that the models' true accuracy may have been underestimated!
Paper link: https://arxiv.org/pdf/2205.04596.pdf
The researchers manually reviewed and categorized every mistake made by several top models, in order to understand the long-tail errors that remain on this benchmark dataset.
The main focus is the multi-label subset evaluation of ImageNet, on which the best models already reach roughly 97% Top-1 accuracy.
The analysis shows that nearly half of the supposed prediction errors were not errors at all; for these images the reviewers added new multi-labels, which means that without manual review the performance of these models is being underestimated!
Untrained crowdsourced annotators often mislabel data, which significantly distorts the measured accuracy of models.
To recalibrate the ImageNet evaluation and support sound progress going forward, the researchers provide an updated version of the multi-label evaluation set, and collect 68 examples on which SOTA models make clear-cut errors into a new dataset, ImageNet-Major, so that future CV researchers can tackle these remaining hard cases.
Pay off "technical debt"
The paper's title, "When does dough become a bagel?", already hints that the authors focus on ImageNet's label problem, a long-standing historical issue.
The picture below is a typical example of label ambiguity: the ground-truth label is "dough", while the model predicts "bagel". Is the model wrong?
Arguably the model made no error: the dough is baking and about to become a bagel, so the image is both dough and bagel.
The model effectively predicted that this dough would "become" a bagel, yet under the standard accuracy metric it receives no credit for it.
In fact, when the standard single-label ImageNet classification task is used as the evaluation criterion, problems such as missing multi-labels, label noise, and underspecified classes are unavoidable.
For the crowdsourced annotators tasked with identifying such objects, this is a semantic and even philosophical conundrum that can only be resolved through multi-labeling, which is why most ImageNet-derived evaluation sets focus on fixing the labels.
It has been 16 years since ImageNet was established. The annotators and model developers of that era did not understand the data as richly as we do today, and because ImageNet was an early, large, and comparatively well-annotated dataset, it naturally became the standard CV leaderboard.
But the budget for labeling data is far smaller than the budget for developing models, so fixing the labels has become a form of technical debt.
To find the remaining errors in ImageNet, the researchers used a ViT-3B model with 3 billion parameters (which reaches 89.5% standard accuracy), pre-trained on JFT-3B and fine-tuned on ImageNet-1K.
On the ImageNet2012_multilabel test set, ViT-3B initially achieved 96.3% accuracy, clearly mispredicting 676 images; the researchers then studied these examples in depth.
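To make the multi-label protocol concrete: a prediction counts as correct if the model's Top-1 class falls inside the set of labels judged acceptable for that image. The sketch below only illustrates this idea and is not the authors' code; the dataset interface and field names are assumptions.

```python
# Minimal sketch of multi-label Top-1 evaluation (not the paper's actual code).
# Assumption: `dataset` yields (image_tensor, acceptable_label_ids, image_id), where
# acceptable_label_ids is the set of class ids judged correct for that image.
import torch

def multilabel_top1_accuracy(model, dataset, device="cpu"):
    model.eval()
    correct, total, mistakes = 0, 0, []
    with torch.no_grad():
        for image, acceptable, image_id in dataset:
            logits = model(image.unsqueeze(0).to(device))
            pred = int(logits.argmax(dim=-1))
            total += 1
            if pred in acceptable:                 # credit any acceptable label, not only the original one
                correct += 1
            else:
                mistakes.append((image_id, pred))  # candidates for manual expert review
    return correct / total, mistakes
```

The remaining `mistakes` list corresponds to the kind of residual errors (676 images for ViT-3B) that the authors then reviewed by hand.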
When re-labeling the data, the authors did not rely on crowdsourcing; instead they formed a panel of five expert reviewers, because these kinds of labeling errors are hard for non-specialists to spot.
For example, in image (a) an ordinary annotator might simply write "table", yet the picture contains many other objects, such as a screen, a monitor, and mugs.
The subjects of image (b) are two people, but the label is "picket fence", which is clearly incomplete; plausible additional labels include "bow tie" and "uniform".
Image (c) is another obvious case: labeling it only "African elephant" overlooks the tusks.
Image (d) is labeled "lakeshore", but labeling it "seashore" would be equally defensible.
To speed up annotation, the researchers also built a dedicated review tool that displays the image alongside the model's predicted classes, their prediction scores, and the existing labels.
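As an illustration of what such a review view might look like (this is not the authors' tool), the sketch below places an image next to its existing labels and the model's scored predictions; the function name and input format are hypothetical.

```python
# Sketch of a side-by-side review view (illustrative only, not the paper's tool).
import matplotlib.pyplot as plt

def show_review_panel(image, predictions, current_labels):
    """image: PIL image or array; predictions: list of (class_name, score); current_labels: list of str."""
    fig, (ax_img, ax_txt) = plt.subplots(1, 2, figsize=(9, 4))
    ax_img.imshow(image)
    ax_img.axis("off")
    lines = ["Existing labels:"] + [f"  {label}" for label in current_labels]
    lines += ["", "Model predictions:"] + [f"  {cls}: {score:.2f}" for cls, score in predictions]
    ax_txt.axis("off")
    ax_txt.text(0.0, 1.0, "\n".join(lines), va="top", family="monospace")
    plt.tight_layout()
    plt.show()
```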
In some cases the experts still disagreed about a label; those images were then run through Google image search to assist the decision.
In one example, the model predicted "taxi", yet nothing in the picture marks the car as a taxi apart from a bit of yellow paint.
Through Google image search, the reviewers recognized an iconic bridge in the background, located the city, and retrieved photos of that city's taxis; this confirmed that the picture does contain a taxi rather than an ordinary car, and a comparison of license-plate designs further verified that the model's prediction was correct.
After a preliminary review of the errors found across the several stages of the study, the authors first divided them into two categories by severity:
1. Major: a human can clearly understand the meaning of the label, and the model's prediction is unrelated to it;
2. Minor: the label itself may be wrong or incomplete, making a prediction look like an error; correcting these requires expert review of the data.
For the 155 major errors made by ViT-3B, the researchers brought in three additional models to make predictions on the same images, increasing the diversity of prediction results.
Sixty-eight of these major errors were shared by all four models. The researchers then analyzed every model's predictions on these examples and verified that none of them qualified as a correct new multi-label, i.e., each model's prediction really is a major error.
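The selection step amounts to intersecting the per-model sets of major errors. A minimal sketch, with placeholder model names and image ids standing in for the real data:

```python
# Sketch: find the examples that every model gets wrong (a stand-in for the 68-image slice).
# Assumes `model_errors` maps a model name to the set of image ids it mispredicts with a
# "major" error under expert review; names and ids here are placeholders, not real data.
from functools import reduce

model_errors = {
    "vit_3b":  {"img_001", "img_007", "img_042"},
    "model_b": {"img_001", "img_042", "img_099"},
    "model_c": {"img_001", "img_042"},
    "model_d": {"img_001", "img_042", "img_123"},
}

shared_major_errors = reduce(set.intersection, model_errors.values())
print(sorted(shared_major_errors))  # examples no model solves -> candidates for ImageNet-Major
```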
These 68 examples share several characteristics: SOTA models trained in different ways all fail on them, and expert reviewers agree that the predictions are genuinely unrelated to the correct labels.
A set of 68 images is also small enough for future researchers to evaluate by hand; if these 68 examples are eventually conquered, CV models may reach a new breakthrough.
Analyzing the data, the researchers grouped the prediction errors into four types:
1. Fine-grained errors, where the predicted class is similar to the ground-truth label but not exactly the same;
2. Fine-grained out-of-vocabulary (OOV) errors, where the model identifies an object whose class is correct but does not exist in ImageNet's label vocabulary;
3. Spurious correlations, where the predicted label is read off the context of the image rather than the object itself;
4. Non-prototypical instances, where the labeled object is an atypical example of its class that resembles the predicted class.
After reviewing the original 676 errors, the researchers found that 298 of them should in fact count as correct, or that the original label was wrong or problematic.
Overall, four conclusions can be drawn from the study:
1. When a large, high-accuracy model makes a prediction that other models do not, roughly 50% of these novel predictions are correct new multi-labels;
2. Higher-accuracy models do not show an obvious pattern in the categories and severity of their errors;
3. Today's SOTA models largely match or exceed the performance of the best expert human on the human-evaluated multi-label subset;
4. Noisy training data and underspecified classes may be a factor limiting the effective measurement of improvements in image classification.
Perhaps solving the image labeling problem will have to wait for natural language processing technology?