


The most powerful 7B long-video model! LongVA understands more than 1,000 video frames and tops multiple leaderboards

The AIxiv column is where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
The main authors of this article are from the LMMs-Lab team and Nanyang Technological University (NTU), Singapore. Among the co-authors, Zhang Peiyuan is a research assistant at NTU, Zhang Kaichen is a fourth-year undergraduate at NTU, and Li Bo is a third-year PhD student at NTU; their supervisor is Professor Liu Ziwei of MMLab@NTU. LMMs-Lab is a team of students, researchers, and faculty devoted to multimodal model research, focusing on the training and comprehensive evaluation of multimodal models. Its previous work includes the multimodal evaluation framework lmms-eval.
Why is understanding long videos as hard as "finding a needle in a haystack"?
A major challenge existing LMMs face when processing long videos is the sheer number of visual tokens. For example, LLaVA-1.6 generates 576 to 2,880 visual tokens for a single image, and the more frames a video contains, the more tokens it produces. Although works such as BLIP-2, LLaMA-VID, and Chat-UniVi reduce the number of visual tokens by redesigning the connector between the ViT and the language model, they still cannot handle a very large number of frames.
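To put the numbers in perspective, here is a back-of-the-envelope sketch in Python. The per-frame token count of 144 is the figure LongVA reports later in this article; the context lengths used below are illustrative assumptions, not quoted from the paper.

```python
# Back-of-the-envelope token budget (context lengths here are illustrative).
TOKENS_PER_FRAME = 144   # LongVA's per-frame cost, reported later in this article

def frames_that_fit(context_length: int) -> int:
    """How many 1-fps video frames fit into a given language-model context."""
    return context_length // TOKENS_PER_FRAME

print(frames_that_fit(32_768))    # 227 frames -> under 4 minutes of video at 1 fps
print(frames_that_fit(224_000))   # 1555 frames -> roughly 26 minutes at 1 fps
```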
In addition, the lack of high-quality long-video datasets is another major bottleneck. Most existing training datasets consist of videos shorter than one minute, and even when long videos are available, the annotated text pairs cover only a few frames, providing no dense supervision signal.
Recently, a research team from LMMs-Lab, Nanyang Technological University, and other institutions released LongVA, a long-video model that can understand more than a thousand frames of video, surpassing the performance of current open-source video multimodal models.
Paper link: https://arxiv.org/abs/2406.16852
Demo address: https://longva-demo.lmms-lab.com/
Code address: https://github.com/EvolvingLMMs-Lab/LongVA
The team is the first to bring Long Context Transfer to the multimodal field. This technique enables large multimodal models (LMMs) to process and understand extremely long videos without any training on long videos. Their new model, LongVA, can process 2,000 frames or more than 200,000 visual tokens, achieving 7B-scale state-of-the-art results on the Video-MME video-understanding leaderboard. On the latest MLVU long-video leaderboard, LongVA is the strongest model after GPT-4o.
The LongVA authors summarize the situation in the figure below: current large multimodal models fall short on long-video understanding because the number of frames they can process is limited. To handle more frames, works such as LLaMA-VID have to drastically compress the number of tokens per frame.
Long Context Transfer
To address the challenges of processing long videos, the research team proposed a new idea: long context transfer. They argue that the multi-frame bottleneck of current long-video models lies not in how to extract compressed features from the vision encoder (Figure (a) below), but in extending the long-context capability of the language model.
They found that simply extending the context length of the language model on text transfers this capability to the visual modality, without any long-video training. Concretely, they first train the language model on long text data and then use short image data for modality alignment. A model trained this way can directly understand multi-frame videos at test time, eliminating the need for long-video training.
For long-context language model training, the team used Qwen2-7B-Instruct as the base model and extended its text context length to 224K tokens. During training, optimization strategies such as FlashAttention-2, Ring Attention, activation checkpointing, and parameter offloading were used to improve training efficiency and memory utilization.
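As a rough illustration only, and not the authors' exact recipe, the sketch below shows how such a long-context base model might be configured with the Hugging Face transformers library. The rope_theta value is an assumption, and Ring Attention and parameter offload are left to the distributed training framework rather than shown here.

```python
# Minimal sketch: configure Qwen2-7B-Instruct for a much longer text context.
# The rope_theta value and the 224K target length are illustrative assumptions.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen2-7B-Instruct")
config.max_position_embeddings = 224_000    # target long-context length
config.rope_theta = 1_000_000_000           # enlarged RoPE base frequency (assumed)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B-Instruct",
    config=config,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # FlashAttention-2 kernels
)
model.gradient_checkpointing_enable()         # activation checkpointing

# Ring Attention (sequence parallelism) and parameter offload are handled by
# the distributed training setup (e.g. DeepSpeed ZeRO), not shown in this sketch.
```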
In the modality-alignment stage, the authors designed a unified encoding scheme called UniRes for processing images and videos together. UniRes is similar to the AnyRes encoding scheme of LLaVA-1.6, but removes the base image, flattens each grid into one dimension, and applies 2x2 feature pooling within each grid. This keeps the representation consistent when extending from image data to video.
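The following is a minimal sketch of that idea under stated assumptions: the vision_encoder interface and the 24x24 patch-feature layout are hypothetical, chosen so that 2x2 pooling yields the 144 tokens per frame quoted later in this article.

```python
# Hypothetical sketch of UniRes-style encoding: each image grid / video frame
# is encoded by a ViT, 2x2 average-pooled, then flattened into a 1-D token row.
import torch
import torch.nn.functional as F

def encode_grid(vision_encoder, grid: torch.Tensor) -> torch.Tensor:
    """grid: (3, H, W) image grid or video frame -> (num_tokens, dim)."""
    feats = vision_encoder(grid.unsqueeze(0))        # assumed layout (1, 24, 24, dim)
    feats = feats.permute(0, 3, 1, 2)                # (1, dim, 24, 24)
    feats = F.avg_pool2d(feats, kernel_size=2)       # 2x2 pooling -> (1, dim, 12, 12)
    return feats.flatten(2).transpose(1, 2).squeeze(0)  # (144, dim) token sequence

def encode_video(vision_encoder, frames: list[torch.Tensor]) -> torch.Tensor:
    """A video is treated as a sequence of grids, one grid per frame."""
    return torch.cat([encode_grid(vision_encoder, f) for f in frames], dim=0)
```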
LongVA adopts a "train on short contexts, test on long contexts" strategy: the model uses only image-text data during the modality-alignment stage, and then directly processes long videos at test time. This strategy cleanly demonstrates the long-context-transfer phenomenon, allowing the model to understand and process long videos without any long-video training.
LongVA's Superior Performance
There is currently no benchmark for evaluating the visual context length of long-video LMMs. To fill this gap, the LongVA team extended the needle-in-a-haystack test from text to vision and proposed the Visual Needle-In-A-Haystack (V-NIAH) benchmark.
For V-NIAH, the team designed five image question-answer pairs, inserted each question as a single frame into a multi-hour movie, and sampled the video at 1 frame per second as the visual input. These "needle" images come from existing visual question-answering datasets or are AI-generated, ensuring that the model cannot answer from language knowledge alone. Each question includes a localization hint so that a correct system, or a human, can locate the "needle" frame in the video and answer the question.
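A minimal sketch of how one such sample could be assembled is shown below. The frame-loading interface and insertion-depth logic are hypothetical; only the single-needle-frame design and 1-fps haystack follow the description above.

```python
# Hypothetical construction of a single V-NIAH sample: hide one "needle"
# question frame inside a long sequence of 1-fps "haystack" frames.
from PIL import Image

def build_vniah_sample(haystack_frames: list[Image.Image],
                       needle_frame: Image.Image,
                       depth: float) -> tuple[list[Image.Image], int]:
    """depth in [0, 1]: relative position of the needle within the video."""
    insert_at = int(depth * len(haystack_frames))
    frames = haystack_frames[:insert_at] + [needle_frame] + haystack_frames[insert_at:]
    return frames, insert_at

# Sweeping needle depths and haystack lengths probes the model's effective
# visual context, analogous to the text needle-in-a-haystack test.
depths = [i / 10 for i in range(11)]
```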
The V-NIAH results show that LongVA's visual needle-in-a-haystack retrieval is nearly perfect within 2,000 frames (144 tokens per frame) and maintains good accuracy even at 3,000 frames. Interestingly, like language models, LongVA also exhibits a degree of the lost-in-the-middle phenomenon on V-NIAH.
On the recent Video-MME leaderboard proposed by Tencent, the University of Science and Technology of China, and other institutions, LongVA ranks seventh overall and is the state of the art among 7B models. Leaderboard: https://video-mme.github.io/home_page.html#leaderboard
The author team also includes several qualitative demonstrations of the model's results in the paper.