
ICLR 2024 Oral: Noise correlation learning in long videos, single-card training only takes 1 day

王林 | 2024-03-05
In a talk at the 2024 World Economic Forum, Turing Award winner Yann LeCun proposed that models for processing video should learn to make predictions in an abstract representation space rather than in pixel space [1]. Multimodal video representation learning, aided by textual information, can extract features beneficial to video understanding and content generation, and is a key technology for realizing this goal.

However, the noisy correspondence that widely exists between videos and their text descriptions seriously hinders video representation learning. In this article, the researchers propose a robust long-video learning scheme based on optimal transport theory to address this challenge. The paper was accepted as an Oral at ICLR 2024, a top machine learning conference.


  • Paper title: Multi-granularity Correspondence Learning from Long-term Noisy Videos
  • Paper address: https://openreview.net/pdf?id=9Cu8MRmhq2
  • Project address: https://lin-yijie.github.io/projects/Norton
  • Code address: https://github.com/XLearning-SCU/2024-ICLR-Norton

Background and Challenges

Video representation learning is one of the hottest problems in multimodal research. Large-scale video-language pre-training has achieved remarkable results on a variety of video understanding tasks, such as video retrieval, visual question answering, and segmentation and localization. At present, most video-language pre-training work focuses on clip-level understanding of short videos, ignoring the long-term relations and dependencies present in long videos.

As shown in Figure 1 below, the core difficulty of long video learning lies in encoding the temporal dynamics of the video. Current solutions mainly focus on designing customized video encoders to capture long-term dependencies [2], but this usually incurs a large resource overhead.


Figure 1: Example of long video data [2]. The video contains a complex storyline and rich temporal dynamics. Each sentence can only describe a short fragment, and understanding the entire video requires long-term correlation reasoning capabilities.

Since long videos usually rely on automatic speech recognition (ASR) to obtain their text subtitles, the text paragraph (Paragraph) corresponding to an entire video can be divided into multiple short captions (Caption) according to the ASR timestamps, and the long video (Video) can be divided into multiple video clips (Clip) accordingly. Late fusion, i.e., aligning video clips and captions after encoding them separately, is far more efficient than directly encoding the entire video, and is a practical route to long-term temporal correspondence learning.
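As a concrete, purely hypothetical illustration of this splitting, each ASR caption and its timestamps define one clip-caption pair; the data fields and function names below are illustrative assumptions, not the paper's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class AsrCaption:
    text: str
    start: float  # seconds
    end: float    # seconds

def make_clip_caption_pairs(asr_captions, fps=30):
    """Each ASR caption's timestamp span defines one video clip, yielding
    (frame_range, caption_text) pairs for late-fusion alignment."""
    return [((int(c.start * fps), int(c.end * fps)), c.text)
            for c in asr_captions]

pairs = make_clip_caption_pairs([
    AsrCaption("grease and flour the bundt pan", 12.0, 17.5),
    AsrCaption("sprinkle sugar on it", 95.0, 99.0),
])
print(pairs)  # [((360, 525), '...'), ((2850, 2970), '...')]
```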

However, noisy correspondence (NC) [3-4] widely exists between video clips and text captions, that is, video content and text corpora are incorrectly mapped to each other. As shown in Figure 2 below, the noisy correspondence between video and text occurs at multiple granularities.


Figure 2: Multi-granularity noisy correspondence. In this example, the video is divided into six clips according to the text captions. (Left) A green timeline indicates that a caption can be aligned with the video content, while a red timeline indicates that it cannot be aligned with any content in the video. The green text in t5 marks the part related to the video content v5. (Right) Dotted lines indicate the originally given alignment, with red marking incorrect alignments and green marking the true alignments. Solid lines show the realignment produced by the Dynamic Time Warping algorithm, which also fails to handle the noisy correspondence challenge well.

• Coarse-grained NC (Clip-Caption). Coarse-grained NC falls into two categories, asynchronous and irrelevant, depending on whether a clip or caption has any counterpart at all. "Asynchronous" refers to temporal misalignment between a video clip and its caption, such as t1 in Figure 2: the narrator explains before or after the actions are actually performed, so statements and actions appear out of order. "Irrelevant" refers to meaningless captions that cannot be aligned with any video clip (such as t2 and t6), or to irrelevant video clips. According to a study by the Oxford Visual Geometry Group [5], only about 30% of the clips and captions in the HowTo100M dataset are visually alignable, and only 15% are naturally aligned with their original ASR timestamps;
• Fine-grained NC (Frame-Word). For a given video clip, only part of its caption may be relevant. In Figure 2, the caption t5 "Sprinkle sugar on it" is strongly related to the visual content v5, whereas the phrase "Observe the glaze peeling off" is not. Irrelevant words or video frames can obscure key information and degrade the alignment between clips and captions.

Method

This paper proposes a noise-robust temporal optimal transport method (NOise Robust Temporal Optimal Transport, Norton). Through video-paragraph contrastive learning and clip-caption contrastive learning, Norton learns video representations at multiple granularities in a late-fusion manner, significantly reducing training time.

Figure 3: Illustration of the video-paragraph contrastive learning framework.

1) Video-Paragraph Contrastive Learning. As shown in Figure 3, the researchers adopt a fine-to-coarse strategy for multi-granularity correspondence learning: frame-word correlations are first aggregated into clip-caption correlations, which are in turn aggregated into a video-paragraph correlation, and long-term correlations are finally captured through video-level contrastive learning. The multi-granularity noisy correspondence challenges are addressed as follows:

• For fine-grained NC, the researchers use a log-sum-exp approximation as a soft-maximum operator to identify key words and key frames during frame-to-word and word-to-frame alignment, extracting important information through fine-grained interaction and aggregating it into clip-caption similarities (see the sketch after this list).
• For coarse-grained asynchronous NC, the researchers use the optimal transport distance as the metric between video clips and captions. Given a clip-caption similarity matrix $\mathbf{S} \in \mathbb{R}^{n \times m}$, where $n$ and $m$ denote the numbers of clips and captions, the goal of optimal transport is to maximize the overall alignment similarity, which naturally handles asynchronous and one-to-many alignments (such as t3 corresponding to v4 and v5):

$$\max_{\mathbf{Q} \in \Pi(\boldsymbol{\mu}, \boldsymbol{\nu})} \ \langle \mathbf{Q}, \mathbf{S} \rangle + \varepsilon H(\mathbf{Q})$$

where $\boldsymbol{\mu}$ and $\boldsymbol{\nu}$ are uniform distributions giving equal weight to each clip and caption, $H$ is the entropic regularizer, and $\mathbf{Q}$ is the transport assignment (realignment) matrix, which can be solved with the Sinkhorn algorithm.
• For coarse-grained irrelevant NC, inspired by SuperGlue [6] in feature matching, the researchers design an alignable prompt bucket to filter out irrelevant clips and captions: one extra row and one extra column of a constant value $z$ are appended to the similarity matrix $\mathbf{S}$, where $z$ acts as the similarity threshold deciding whether a clip or caption is alignable. The prompt bucket integrates seamlessly into the Sinkhorn solver of optimal transport (see the sketch after this list):

$$\hat{\mathbf{S}} = \begin{bmatrix} \mathbf{S} & z \mathbf{1}_{n} \\ z \mathbf{1}_{m}^{\top} & z \end{bmatrix}$$
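To make the three components above concrete, here is a minimal NumPy sketch of the core ideas: log-sum-exp pooling for fine-grained alignment, a Sinkhorn solver for the optimal transport realignment, and a prompt-bucket variant for filtering irrelevant content. This is an illustration under stated assumptions (the temperature `alpha`, regularization strength `eps`, threshold `z`, and the bucket's marginal weights are hypothetical choices), not the authors' released implementation; see the official code repository for the exact formulation.

```python
import numpy as np

def logsumexp_pool(sim_fw, alpha=0.1):
    """Fine-grained NC: aggregate a frame-word similarity matrix (F x W) into
    one clip-caption similarity via a log-sum-exp soft-maximum, so that key
    words and key frames dominate. `alpha` is an assumed temperature."""
    word_pooled = alpha * np.log(np.exp(sim_fw / alpha).mean(axis=1))  # (F,)
    return float(alpha * np.log(np.exp(word_pooled / alpha).mean()))

def sinkhorn(S, eps=0.05, n_iters=100):
    """Coarse-grained asynchronous NC: entropic optimal transport. Returns the
    transport (realignment) matrix Q maximizing <Q, S> + eps * H(Q) under
    uniform marginals, via Sinkhorn iterations."""
    n, m = S.shape
    mu = np.full(n, 1.0 / n)
    nu = np.full(m, 1.0 / m)
    K = np.exp(S / eps)
    u = np.ones(n)
    for _ in range(n_iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

def sinkhorn_with_prompt_bucket(S, z=0.0, eps=0.05, n_iters=100):
    """Coarse-grained irrelevant NC: append a constant-valued row and column
    (the alignable prompt bucket) so clips/captions whose similarities fall
    below the threshold z are routed to the bucket instead of being force-
    aligned. The bucket marginal weights follow SuperGlue's dustbin heuristic
    and are an assumption, not necessarily the paper's exact choice."""
    n, m = S.shape
    S_aug = np.full((n + 1, m + 1), z)
    S_aug[:n, :m] = S
    a = np.ones(n + 1); a[-1] = m   # the bucket may absorb up to m captions
    b = np.ones(m + 1); b[-1] = n   # the bucket may absorb up to n clips
    a, b = a / a.sum(), b / b.sum()
    K = np.exp(S_aug / eps)
    u = np.ones(n + 1)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    Q_aug = u[:, None] * K * v[None, :]
    return Q_aug[:n, :m]            # realignment between real clips/captions

# Toy usage: 4 clips x 4 captions, with one caption irrelevant to everything.
S = np.array([[0.9, 0.1, 0.0, -0.5],
              [0.2, 0.8, 0.1, -0.5],
              [0.0, 0.3, 0.7, -0.5],
              [0.1, 0.0, 0.2, -0.5]])
print(sinkhorn_with_prompt_bucket(S, z=0.0).round(3))
```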

Measuring the sequence distance through optimal transport, instead of directly encoding the entire long video, significantly reduces computation. The final video-paragraph loss function is as follows, where $\hat{S}_{ij}$ denotes the optimal-transport similarity between the $i$-th long video and the $j$-th text paragraph in a batch of $N$ long videos, with temperature $\tau$:

$$\mathcal{L}_{\text{video}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp(\hat{S}_{ii}/\tau)}{\sum_{j=1}^{N} \exp(\hat{S}_{ij}/\tau)}$$
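Assuming this standard contrastive form, and with each $\hat{S}_{ij}$ obtainable as $\langle \mathbf{Q}_{ij}, \mathbf{S}_{ij} \rangle$ from the Sinkhorn sketch above, a minimal sketch of the loss would be:

```python
import numpy as np

def video_paragraph_loss(S_hat, tau=0.07):
    """Symmetric contrastive loss over a batch of N videos and N paragraphs.
    S_hat is the N x N matrix of OT-based similarities; diagonal entries are
    the matched pairs. The temperature tau is an assumed hyperparameter."""
    logits = S_hat / tau
    log_p_v2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2v = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return float(-(np.diag(log_p_v2t).mean() + np.diag(log_p_t2v).mean()) / 2)
```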

2) Clip-Caption Contrastive Learning. This loss guarantees the accuracy of clip-caption alignment within the video-paragraph comparison. Since self-supervised contrastive learning can mistakenly optimize semantically similar samples as negatives, the researchers use optimal transport to identify and correct potential false negatives:

$$\mathcal{L}_{\text{clip}} = -\frac{1}{B}\sum_{i=1}^{B}\sum_{j=1}^{B} W_{ij} \log \frac{\exp(S_{ij}/\tau)}{\sum_{k=1}^{B}\exp(S_{ik}/\tau)}, \qquad \mathbf{W} = \alpha\,\mathbf{I} + (1-\alpha)\,\mathbf{Q}$$

where $B$ denotes the number of video clips (and captions) in the training batch, the identity matrix $\mathbf{I}$ is the standard alignment target of the contrastive cross-entropy loss, $\mathbf{W}$ is the realignment target after incorporating the optimal transport correction $\mathbf{Q}$, and $\alpha$ is a weight coefficient.
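A sketch of this rectified contrastive loss, assuming the in-batch similarity matrix `sim` and an OT realignment `Q` as produced by the Sinkhorn sketch above (row-normalizing `Q` so each row is a distribution is an implementation assumption):

```python
import numpy as np

def clip_caption_loss(sim, Q, alpha=0.5, tau=0.07):
    """Contrastive loss over B clip-caption pairs in a batch. The identity
    target is softened by the OT realignment matrix Q, so semantically
    similar pairs (potential false negatives) are no longer pushed apart."""
    B = sim.shape[0]
    Q_norm = Q / Q.sum(axis=1, keepdims=True)        # rows as distributions
    target = alpha * np.eye(B) + (1.0 - alpha) * Q_norm
    log_p = sim / tau - np.log(np.exp(sim / tau).sum(axis=1, keepdims=True))
    return float(-(target * log_p).sum(axis=1).mean())
```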

Experiment

This paper aims to overcome noisy correspondence and thereby improve the model's understanding of long videos. The researchers verify this on downstream tasks including video retrieval, video question answering, and action segmentation. Some experimental results are given below.

1) Long video retrieval

Given a text paragraph, the goal of this task is to retrieve the corresponding long video. On the YouCookII dataset, the researchers test two settings, background kept and background removed, depending on whether text-irrelevant video clips are retained. Three similarity criteria are used: Caption Average, DTW, and OTAM. Caption Average matches each caption in the text paragraph to its optimal video clip and recalls the long video that receives the most matches; DTW and OTAM accumulate the clip-caption distances between video and text paragraph in chronological order. The results are shown in Tables 1 and 2 below.

Tables 1 and 2: Comparison of long video retrieval performance on the YouCookII dataset.
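As an illustration of the Caption Average criterion described above, here is a simplified sketch; the exact evaluation protocol (feature extraction, normalization, tie-breaking) is an assumption, and features are assumed L2-normalized so dot products are cosine similarities:

```python
import numpy as np

def caption_average_retrieve(clip_feats_per_video, caption_feats):
    """For each caption of the query paragraph, find its best-matching clip
    across all candidate videos; retrieve the video that collects the most
    matched captions."""
    owners, stacks = [], []
    for vid, clips in enumerate(clip_feats_per_video):  # clips: (n_i, d)
        stacks.append(clips)
        owners.extend([vid] * len(clips))
    owners = np.asarray(owners)
    all_clips = np.vstack(stacks)                       # (N, d)
    sim = caption_feats @ all_clips.T                   # (m, N)
    votes = owners[sim.argmax(axis=1)]                  # winning video per caption
    counts = np.bincount(votes, minlength=len(clip_feats_per_video))
    return int(counts.argmax())
```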

2) Noisy correspondence robustness analysis

The Oxford Visual Geometry Group manually re-annotated videos from HowTo100M, assigning each text caption a corrected timestamp. The resulting HTM-Align dataset [5] contains 80 videos and 49K texts. Video retrieval on this dataset mainly verifies whether a model overfits the noisy correspondence; the results are shown in Table 9 below.

Table 9: Noisy correspondence analysis on the HTM-Align dataset.

Summary and Outlook

This work is an in-depth continuation of noisy correspondence learning [3][4], i.e., learning from mismatched or incorrectly associated data. It studies the multi-granularity noisy correspondence problem faced by multimodal video-text pre-training, and the proposed long-video learning method can be extended to broader video data at lower resource overhead.

Looking to the future, researchers can further explore correlations among more modalities, since videos often contain visual, text, and audio signals; try combining external large language models (LLMs) or multimodal models (such as BLIP-2) to clean and reorganize the text corpus; and explore using noise as a positive stimulus for model training, rather than merely suppressing its negative impact.

References:
1. This site, "Yann LeCun: Generative models are not suitable for processing videos, AI has to make predictions in abstract space", 2024-01-23.
2. Sun, Y., Xue, H., Song, R., Liu, B., Yang, H., & Fu, J. (2022). Long-form video-language pre-training with multimodal temporal contrastive learning. Advances in Neural Information Processing Systems, 35, 38032-38045.
3. Huang, Z., Niu, G., Liu, X., Ding, W., Xiao, X., Wu, H., & Peng, X. (2021). Learning with noisy correspondence for cross-modal matching. Advances in Neural Information Processing Systems, 34, 29406-29419.
4. Lin, Y., Yang, M., Yu, J., Hu, P., Zhang, C., & Peng, X. (2023). Graph matching with bi-level noisy correspondence. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
5. Han, T., Xie, W., & Zisserman, A. (2022). Temporal alignment networks for long-term video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2906-2916).
6. Sarlin, P. E., DeTone, D., Malisiewicz, T., & Rabinovich, A. (2020). SuperGlue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4938-4947).


Statement:
This article is reproduced from jiqizhixin.com. In case of infringement, please contact admin@php.cn for deletion.