


The strongest 7B long video model! LongVA understands more than 1,000 video frames and tops multiple leaderboards

The AIxiv column is a section where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
The main authors of this article are from the LMMs-Lab team and Nanyang Technological University, Singapore. In this joint work, Zhang Peiyuan is a research assistant at Nanyang Technological University, Zhang Kaichen is a fourth-year undergraduate there, and Li Bo is a third-year PhD student there; their supervisor is Professor Liu Ziwei of MMLab@NTU. LMMs-Lab is a team of students, researchers and faculty dedicated to research on multi-modal models, focusing mainly on the training and comprehensive evaluation of multi-modal models. Its previous work includes the multi-modal evaluation framework lmms-eval, among others.
Why is understanding long videos as difficult as “finding a needle in a haystack”?
A major challenge for existing LMMs when processing long videos is the excessive number of visual tokens. For example, LLaVA-1.6 generates 576 to 2,880 visual tokens for a single image, and the more frames a video has, the more tokens are needed. Although works such as BLIP-2, LLaMA-VID and Chat-UniVi reduce the number of visual tokens by changing the connector between the ViT and the language model, they still cannot handle a very large number of frames.
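To make the scale concrete, here is a quick back-of-envelope sketch using LLaVA-1.6's lower bound of 576 tokens per image cited above; the context-window sizes are assumed typical values, not figures from the paper:

```python
# Back-of-envelope sketch: how many video frames fit into an LLM context window.
# 576 tokens/frame is LLaVA-1.6's per-image lower bound cited above; the
# context sizes below are assumed typical values, not numbers from the paper.
TOKENS_PER_FRAME = 576
CONTEXT_WINDOWS = {"4K": 4_096, "32K": 32_768, "128K": 131_072}

for name, limit in CONTEXT_WINDOWS.items():
    frames = limit // TOKENS_PER_FRAME
    print(f"{name} context: ~{frames} frames (~{frames} seconds of video at 1 fps)")
# 4K context:   ~7 frames
# 32K context:  ~56 frames
# 128K context: ~227 frames
```

Even a 128K-token context holds only a few minutes of video at this token rate, which is why reducing tokens per frame alone does not solve long-video understanding.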
In addition, the lack of high-quality long video datasets is another major bottleneck. Existing training datasets mostly consist of short videos under one minute; even when long videos are included, the annotated text pairs cover only a few frames, lacking dense supervision signals.
Recently, research teams from LMMs-Lab, Nanyang Technological University and other institutions launched the LongVA long video model, which can understand more than a thousand frames of video data, surpassing the performance of current open source video multi-modal models!
Paper link: https://arxiv.org/abs/2406.16852
Demo address: https://longva-demo.lmms-lab.com/
Code address: https://github.com/EvolvingLMMs-Lab/LongVA
The author team is the first to propose Long Context Transfer in the multi-modal field. This technique enables multi-modal large models (LMMs) to process and understand extremely long videos without any long-video training. Their new model, LongVA, can process 2,000 frames or more than 200,000 visual tokens, achieving SoTA at the 7B scale on the Video-MME video understanding leaderboard. On the latest long-video MLVU leaderboard, LongVA is the strongest model after GPT-4o!
The LongVA authors summarize the situation in the figure below: current multi-modal large models remain unsatisfactory at understanding long videos, because the number of frames they can process limits long-video understanding. To handle more frames, works such as LLaMA-VID have to drastically compress the number of tokens per frame.
Long Context Transfer
In response to the challenges of processing long videos, the research team proposed the new idea of "long context transfer". They argue that the multi-frame bottleneck of current long video models lies not in how to extract compressed features from the vision encoder (Figure (a) below), but in extending the long-context capability of the language model.
They found that by simply extending the context length of the language model on text, this capability can be transferred to the visual modality without any long-video training. Specifically, the language model is first trained on long text data, and then short image data is used for modality alignment. A model trained in this way can directly understand multi-frame videos at test time, eliminating the need for long-video training.
For the long-context language model training, the team used Qwen2-7B-Instruct as the base model and extended its text context length to 224K through long-context training. During training, optimization strategies such as FlashAttention-2, Ring Attention, activation checkpointing and parameter offloading were used to improve training efficiency and memory utilization.
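A minimal sketch of this training recipe, written as a plain configuration dictionary: the base model, the 224K target length and the four listed techniques come from the description above, while every field name and the remaining values are hypothetical placeholders rather than the released code.

```python
# Hypothetical configuration sketch for the long-context text-training stage.
# Only the base model, the 224K target length and the listed efficiency
# techniques are taken from the article; the rest is assumed for illustration.
long_context_stage = {
    "base_model": "Qwen2-7B-Instruct",
    "target_context_length": 224_000,          # extended text context
    "attention_kernel": "flash_attention_2",   # exact attention with lower memory
    "sequence_parallelism": "ring_attention",  # shard very long sequences across GPUs
    "activation_checkpointing": True,          # recompute activations to save memory
    "parameter_offload": True,                 # offload states to CPU memory
    "training_data": "long plain-text corpus", # text only; no long-video data
}
```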
In the modality alignment stage, the authors designed a unified encoding scheme called "UniRes" to handle images and videos with the same representation. UniRes is similar to LLaVA-1.6's AnyRes encoding scheme, but removes the base image, arranges the grids one-dimensionally, and applies 2x2 feature pooling within each grid. This approach keeps the representation consistent when extending from image data to video.
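A minimal PyTorch sketch of this frame-encoding idea follows; the tensor shapes (a 24x24 patch grid per frame) and the function name are assumptions for illustration rather than the released LongVA code, but the 2x2 pooling and the one-dimensional concatenation follow the description above.

```python
import torch
import torch.nn.functional as F

def unires_encode(frame_features: torch.Tensor) -> torch.Tensor:
    """Rough sketch of UniRes-style frame encoding (assumed tensor shapes;
    not the released implementation).

    frame_features: (T, H, W, C) ViT patch features per frame, e.g. a
                    24x24 patch grid per frame.
    returns:        (T * (H//2) * (W//2), C) visual tokens, i.e. 144 tokens
                    per frame for a 24x24 grid after 2x2 pooling.
    """
    t, h, w, c = frame_features.shape
    x = frame_features.permute(0, 3, 1, 2)        # (T, C, H, W)
    x = F.avg_pool2d(x, kernel_size=2, stride=2)  # 2x2 feature pooling per frame
    x = x.permute(0, 2, 3, 1).reshape(-1, c)      # frames concatenated 1-D, no base image
    return x

# Example: 8 frames with a 24x24 patch grid -> 8 * 144 = 1,152 visual tokens
tokens = unires_encode(torch.randn(8, 24, 24, 1024))
print(tokens.shape)  # torch.Size([1152, 1024])
```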
LongVA adopts a "short-context training, long-context testing" strategy: the model is trained only on image-text data during the modality alignment stage, and is then evaluated directly on long videos at test time. This strategy effectively demonstrates the long context transfer phenomenon, allowing the model to understand and process long videos without any long-video training.
LongVA's outstanding performance
There is currently no benchmark for evaluating the visual context length of long-video LMMs. To address this, the LongVA team extended the needle-in-a-haystack test from text to vision and proposed the Visual Needle-In-A-Haystack (V-NIAH) benchmark.
For the V-NIAH test, the team designed five image question-answering problems, inserted each question as a single frame into an hours-long movie, and sampled the video at 1 frame per second as the visual input. The "needle" images come from existing visual question-answering datasets or AI-generated images, ensuring the model cannot answer the questions from language knowledge alone. Each question includes a "localization hint" that allows a correct system (or a human) to locate the needle frame in the video and answer the question.
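As a rough illustration of how such a test case might be assembled, the helper below is a hypothetical, simplified sketch rather than the benchmark's actual code; the function name, data layout and example question are all assumptions.

```python
import random

def build_vniah_sample(haystack_frames, needle_frame, question, hint, answer):
    """Hide one 'needle' frame inside a long haystack of movie frames sampled
    at 1 fps. Simplified, hypothetical sketch of a V-NIAH test case."""
    position = random.randint(0, len(haystack_frames))  # needle depth in the video
    frames = (haystack_frames[:position]
              + [needle_frame]
              + haystack_frames[position:])
    # The localization hint lets a correct system (or a human) find the needle
    # frame and answer from its content alone.
    prompt = f"{hint} {question}"
    return {"frames": frames, "prompt": prompt,
            "answer": answer, "needle_index": position}

# Usage sketch: a 3-hour movie sampled at 1 fps gives 10,800 haystack frames.
# sample = build_vniah_sample(movie_frames_1fps, needle_image,
#                             "What fruit is shown on the plate?",
#                             "Find the single inserted photo frame.",
#                             "An apple")
```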
The V-NIAH results show that LongVA is nearly perfect on the visual needle-in-a-haystack test within 2,000 frames (144 tokens per frame) and maintains good accuracy even at the 3,000-frame scale. Interestingly, like language models, LongVA also exhibits some degree of the Lost-In-The-Middle phenomenon on V-NIAH.
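Putting these figures together with the 224K text context from the training stage gives a sense of the scale involved (illustrative arithmetic only, based on the numbers quoted in this article):

```python
# Illustrative arithmetic from figures quoted in the article.
TOKENS_PER_FRAME = 144          # visual tokens per frame after UniRes pooling
TRAINED_TEXT_CONTEXT = 224_000  # context length reached in the text-only stage

print(TRAINED_TEXT_CONTEXT // TOKENS_PER_FRAME)  # ~1,555 frames fit in 224K tokens
print(2_000 * TOKENS_PER_FRAME)                  # 288,000 tokens at 2,000 frames
print(3_000 * TOKENS_PER_FRAME)                  # 432,000 tokens at 3,000 frames
# The near-perfect 2,000-frame result thus already exceeds the token length
# used during text-only context training.
```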
On the recent Video-MME leaderboard proposed by Tencent, the University of Science and Technology of China and other institutions, LongVA ranks seventh overall and reaches SoTA among 7B models. https://video-mme.github.io/home_page.html#leaderboard
The author team also included several qualitative demonstrations in the paper.