
ICLR 2024 Spotlight | Negative label mining facilitates CLIP-based out-of-distribution detection tasks


As machine learning models are increasingly used in open-world scenarios, how to effectively identify and process out-of-distribution (OOD) data has become an important research area. The presence of out-of-distribution data can lead to model overconfidence and incorrect predictions, which is particularly dangerous in safety-critical applications such as autonomous driving and medical diagnostics. Therefore, developing an effective OOD detection mechanism is crucial to improving the safety and reliability of the model in practical applications.

Traditional OOD detection methods mainly focus on a single modality, especially image data, and ignore other potentially useful sources of information such as text. With the rise of vision-language models (VLMs), strong performance has been demonstrated in multi-modal learning scenarios, especially in tasks that require jointly understanding images and their text descriptions. However, existing VLM-based OOD detection methods [3, 4, 5] use only the semantic information of the ID labels, ignoring the powerful zero-shot capability of VLMs and the very broad semantic space they can interpret. Based on this, we believe VLMs have huge untapped potential for OOD detection, in particular because they can jointly exploit image and text information to improve detection results.

This article revolves around three questions:

1. Is information from non-ID labels helpful for zero-shot OOD detection?

2. How can we mine information that is beneficial to zero-shot OOD detection?

3. How can the mined information be used for zero-shot OOD detection?

In this work, we propose an innovative approach called NegLabel that utilizes VLMs for OOD detection. NegLabel introduces a "negative label" mechanism: these negative labels are semantically distinct from the known ID category labels. By comparing an image's affinity to the ID labels against its affinity to the negative labels, NegLabel can effectively identify samples that fall outside the training distribution, significantly enhancing the model's ability to detect OOD samples.

NegLabel achieves superior performance on multiple zero-shot OOD detection benchmarks, reaching 94.21% AUROC and 25.40% FPR95 on large-scale datasets such as ImageNet-1k. Compared with existing VLM-based OOD detection methods, NegLabel requires no additional training while delivering superior performance. In addition, NegLabel shows excellent versatility and robustness across different VLM architectures.


Paper link: https://arxiv.org/pdf/2403.20078.pdf

Code link: https://github.com/tmlr-group/NegLabel

Next, we briefly share our recent work on out-of-distribution detection, published at ICLR 2024.

Preliminary knowledge

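As a brief recap of the standard setup (the notation here is ours): CLIP [2] embeds an image and a set of text prompts built from the ID labels into a shared space, and zero-shot classification picks the label whose prompt embedding is most similar to the image embedding:

```latex
% Zero-shot classification with CLIP over ID labels y_1, ..., y_K.
% f(x): image embedding; g(t_i): text embedding of the prompt "a photo of a <y_i>".
\hat{y} = \arg\max_{i \in \{1,\dots,K\}} \cos\big(f(x), g(t_i)\big),
\qquad
p(y_i \mid x) = \frac{\exp\big(\cos(f(x), g(t_i)) / \tau\big)}
                     {\sum_{j=1}^{K} \exp\big(\cos(f(x), g(t_j)) / \tau\big)} .
```

A zero-shot OOD detector [3, 4, 5] then builds a score S(x) from these image-text similarities and flags an input as OOD when its score falls below a threshold.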

Method introduction

The core of NegLabel is the introduction of a "negative label" mechanism. These negative labels are semantically distinct from the known ID category labels. By comparing an image's affinity to the ID labels with its affinity to the negative labels, NegLabel can effectively identify out-of-distribution samples, thereby significantly enhancing the model's ability to detect them.


Figure 1. Overview of NegLabel

1. How to select a negative label?

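Concretely, candidate words are drawn from a large lexical database such as WordNet [6] and ranked by how far their text embeddings lie from the ID label embeddings; the words farthest from all ID labels are kept as negative labels. Below is a minimal Python sketch of this idea, assuming an open_clip-style model and tokenizer; the prompt template, percentile value, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mine_negative_labels(clip_model, tokenizer, id_labels, candidate_words,
                         num_neg=1000, percentile=0.05, device="cuda"):
    """Select candidate words whose text embeddings are far from every ID label.

    A candidate's distance to the ID label set is summarized by a low
    percentile of its distances to the ID embeddings, so a word is kept only
    if it is dissimilar to essentially all ID labels.
    """
    def embed(texts):
        tokens = tokenizer([f"a photo of a {t}" for t in texts]).to(device)
        feats = clip_model.encode_text(tokens)
        return F.normalize(feats.float(), dim=-1)

    id_feats = embed(id_labels)           # (K, d) normalized text embeddings
    cand_feats = embed(candidate_words)   # (M, d); batch this in practice

    sims = cand_feats @ id_feats.T        # (M, K) cosine similarities
    # Low percentile of the distances 1 - sim: closeness to *any* ID label
    # pulls the value down, so near-synonyms of ID classes are filtered out.
    dists = torch.quantile(1.0 - sims, q=percentile, dim=1)

    top = torch.topk(dists, k=min(num_neg, len(candidate_words))).indices
    return [candidate_words[i] for i in top.tolist()]
```

Using a low percentile rather than the bare minimum distance makes the selection less sensitive to a few noisy or overlapping ID labels.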

2. How to use negative labels for OOD detection?

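With negative labels mined, the detection score compares the image's total affinity to the ID labels against its total affinity to the ID plus negative labels in a softmax-style ratio, so an image that resonates mainly with negative labels receives a low score. A minimal sketch under the same assumptions as above (the temperature and the usage threshold are illustrative):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def neglabel_score(clip_model, image, id_text_feats, neg_text_feats, temperature=0.01):
    """Return a score in [0, 1]; higher means more likely in-distribution.

    id_text_feats / neg_text_feats: L2-normalized CLIP text embeddings of the
    ID labels and the mined negative labels, with shapes (K, d) and (M, d).
    """
    img_feat = F.normalize(clip_model.encode_image(image).float(), dim=-1)  # (B, d)

    id_sim = img_feat @ id_text_feats.T     # (B, K)
    neg_sim = img_feat @ neg_text_feats.T   # (B, M)

    # Softmax over ID + negative labels; the score is the probability mass
    # that lands on the ID labels.
    logits = torch.cat([id_sim, neg_sim], dim=-1) / temperature
    probs = F.softmax(logits, dim=-1)
    return probs[:, : id_text_feats.shape[0]].sum(dim=-1)

# Usage sketch: flag an image as OOD when its score falls below a threshold
# chosen on held-out data (the 0.5 here is purely illustrative).
# score = neglabel_score(model, preprocess(img).unsqueeze(0).to(device), id_feats, neg_feats)
# is_ood = score.item() < 0.5
```

The grouping strategy that appears in the ablation below can be layered on top of this (roughly, the negative labels are split into several groups and the per-group scores are averaged); it is omitted here for brevity.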

3. How to understand why negative labels promote zero-shot OOD detection?

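One way to see why negative labels help, in the notation of the sketches above (ours, not necessarily the paper's): an ID image matches at least one ID label well, so almost all of its affinity mass falls on the ID part of the denominator, while an OOD image, whose semantics are not covered by the ID labels, spreads more affinity onto the mined negative labels, pulling its score down.

```latex
S(x) = \frac{\sum_{i=1}^{K} \exp\big(\mathrm{sim}(x, y_i)/\tau\big)}
            {\sum_{i=1}^{K} \exp\big(\mathrm{sim}(x, y_i)/\tau\big)
             + \sum_{j=1}^{M} \exp\big(\mathrm{sim}(x, \tilde{y}_j)/\tau\big)}
```

Because the negative labels are mined to be far from the ID semantics, they act as a proxy for the unknown OOD semantic space, which separates the score distributions of ID and OOD samples.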

Experimental results

Our work provides experimental results along multiple dimensions to help understand the performance and underlying mechanism of the proposed method.

As shown in the table below, compared with many baseline methods and strong state-of-the-art methods, the proposed method achieves better out-of-distribution detection results on large-scale datasets such as ImageNet.

[Table: comparison with baseline OOD detection methods on the ImageNet-1k benchmark]

In addition, as shown in the table below, the proposed method is more robust when the ID data undergoes domain shift.

[Table: OOD detection robustness under ID domain shift]

In the following two tables, we conduct ablation studies on each module of NegLabel and on different VLM architectures. As the left table shows, both the NegMining algorithm and the grouping strategy effectively improve OOD detection performance. The right table shows that the proposed NegLabel algorithm adapts well to VLMs with different architectures.

[Tables: ablation of NegLabel modules (left) and of different VLM architectures (right)]

We also visualize the affinity of different input images to the ID labels and the negative labels. For more detailed experiments and results, please refer to the original paper.

[Figure: affinity of ID and OOD images to ID labels and negative labels]

References

[1] Hendrycks, D. and Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017.

[2] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021.

[3] Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. Zero-shot out-of-distribution detection based on the pre-trained model CLIP. In AAAI, 2022.

[4] Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, and Yixuan Li. Delving into out-of-distribution detection with vision-language representations. In NeurIPS, 2022.

[5] Hualiang Wang, Yi Li, Huifeng Yao, and Xiaomeng Li. CLIPN for zero-shot OOD detection: Teaching CLIP to say no. In ICCV, 2023.

[6] Christiane Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998.

