Supporting 1024 frames with nearly 100% accuracy, NVIDIA's "LongVILA" takes on long videos

Long-context visual language models (VLMs) now have a new full-stack solution: LongVILA, which integrates system design, model training, and dataset development.


At this stage, combining a model's multi-modal understanding with long-context capability is very important. A base model that supports more modalities can accept more flexible input signals, giving people more diverse ways to interact with it, while a longer context lets the model process more information, such as long documents and long videos. Together, these capabilities provide the functionality needed for more real-world applications.

However, while some existing work has enabled long-context visual language models (VLMs), it typically takes a simplified approach rather than offering a comprehensive solution.

Full-stack design is crucial for long-context visual language models. Training large models is usually a complex and systematic task that requires data engineering and system software co-design. Unlike text-only LLMs, VLMs (e.g., LLaVA) often require unique model architectures and flexible distributed training strategies.

In addition, long-context modeling requires not only long-context data but also an infrastructure that can support memory-intensive long-context training. Therefore, a well-planned full-stack design (covering system, data, and pipeline) is essential for long-context VLMs.

In this article, researchers from NVIDIA, MIT, UC Berkeley, and the University of Texas at Austin introduce LongVILA, a full-stack solution for training and deploying long-context visual language models, covering system design, model training strategy, and dataset construction.
  • Paper address: https://arxiv.org/pdf/2408.10188
  • Code address: https://github.com/NVlabs/VILA/blob/main/LongVILA.md
  • Title of the paper: LongVILA: Scaling Long-Context Visual Language Models for Long Videos

For the training infrastructure, the study establishes an efficient and user-friendly framework, Multi-Modal Sequence Parallelism (MM-SP), which supports training memory-intensive long-context VLMs.

For the training pipeline, the researchers implement a five-stage training process, as shown in Figure 1: (1) multi-modal alignment, (2) large-scale pre-training, (3) supervised fine-tuning on short data, (4) context extension of the LLM, and (5) supervised fine-tuning on long data.

For inference, MM-SP addresses the challenge of KV-cache memory usage, which can become a bottleneck when processing very long sequences.
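To see the scale of the problem, here is a rough back-of-the-envelope estimate of KV-cache size. The model dimensions are hypothetical 8B-class values (32 layers, 8 KV heads of dimension 128 under grouped-query attention), not figures taken from the paper:

```python
# Rough KV-cache size for one long sequence (hypothetical 8B-class dims).
layers, kv_heads, head_dim = 32, 8, 128   # assumed GQA configuration
bytes_per_elem = 2                        # fp16 / bf16
seq_len = 1_000_000                       # a 1M-token sequence

kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem  # K and V
print(f"{kv_bytes / 2**30:.0f} GiB")      # ~122 GiB -- far beyond a single GPU
```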

Experimental results show that as LongVILA increases the number of video frames, performance continues to improve on VideoMME and long-video captioning tasks (Figure 2). The LongVILA model trained on 1024 frames achieves 99.5% accuracy in a needle-in-a-haystack experiment with 1400 frames, equivalent to a context length of 274k tokens. In addition, the MM-SP system can efficiently extend the context length to 2 million tokens without gradient checkpointing, achieving a 2.1x to 5.7x speedup over ring sequence parallelism and a 1.1x to 1.4x speedup over Megatron context parallelism + tensor parallelism.
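As a quick sanity check, the reported numbers imply roughly 196 visual tokens per frame, which is also consistent with the ~200K-token, 1024-frame training sequences discussed later:

```python
print(274_000 / 1400)   # ~196 visual tokens per frame (needle-in-a-haystack setting)
print(1024 * 196)       # 200,704 -- about 200K tokens for a 1024-frame sequence
```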
The picture below shows LongVILA on long-video captioning: at the start of the caption, the 8-frame baseline model describes only a static image and two cars. In comparison, the 256-frame LongVILA depicts a car on snow, including front, rear, and side views of the vehicle. In terms of detail, the 256-frame LongVILA also describes close-ups of the ignition button, gear lever, and instrument cluster, all of which are missing from the 8-frame baseline.
Multi-modal sequence parallelism

Training long-context visual language models (VLMs) creates significant memory demands. For example, in the Stage-5 long-video training shown in Figure 1 below, a single sequence of 1024 video frames yields about 200K tokens, which exceeds the memory capacity of a single GPU.

The researchers developed a customized system based on sequence parallelism, a technique commonly used in current foundation-model systems to optimize text-only LLM training. However, they found that existing systems are neither efficient nor scalable enough to handle long-context VLM workloads.
After identifying the limitations of existing systems, the researchers concluded that an ideal multi-modal sequence-parallel approach should prioritize efficiency and scalability by addressing modality and network heterogeneity, and that its scalability should not be limited by the number of attention heads.

MM-SP workflow. To address the challenge of modality heterogeneity, the researchers propose a two-stage sharding strategy that optimizes the computational workload of the image-encoding and language-modeling stages.

As shown in Figure 4 below, the first stage evenly distributes images (such as video frames) among the devices within the sequence-parallel process group, achieving load balancing during image encoding. In the second stage, the researchers gather the global visual and textual inputs and re-shard them at the token level, as sketched below.
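A minimal sketch of this two-stage idea follows. It is illustrative only: the helper and its signature are hypothetical, not the authors' implementation.

```python
import torch
import torch.distributed as dist

def mm_sp_shard(frames, text_tokens, vision_encoder, group):
    """Illustrative two-stage sharding (hypothetical helper, not the
    authors' API): balance image encoding, then re-shard token-wise."""
    rank = dist.get_rank(group)
    world = dist.get_world_size(group)

    # Stage 1: round-robin frames across the SP group so every device
    # encodes roughly the same number of images (load-balanced encoding).
    local_feats = [vision_encoder(f) for f in frames[rank::world]]

    # Gather all visual tokens so the global multimodal sequence can be
    # rebuilt in the original frame order on every rank.
    gathered = [None] * world
    dist.all_gather_object(gathered, local_feats, group=group)
    ordered = [None] * len(frames)
    for r, feats in enumerate(gathered):
        for i, f in enumerate(feats):
            ordered[r + i * world] = f      # undo the round-robin layout

    # Stage 2: concatenate visual and text tokens, then shard the unified
    # sequence token-wise so the language-modeling stage is balanced too.
    seq = torch.cat(ordered + [text_tokens], dim=0)
    chunk = (seq.shape[0] + world - 1) // world
    return seq[rank * chunk : (rank + 1) * chunk]
```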
2D attention parallelism. To address network heterogeneity and achieve scalability, the researchers combine the advantages of Ring sequence parallelism and Ulysses sequence parallelism.

Specifically, they regard parallelism along either the sequence dimension or the attention-head dimension as "1D SP". Their method scales by computing in parallel across both attention heads and the sequence dimension, converting 1D SP into a 2D grid composed of independent Ring (P2P) and Ulysses (A2A) process groups.

As shown on the left of Figure 3 below, to achieve 8-way sequence parallelism across 2 nodes, the researchers use 2D-SP to build a 4×2 communication grid.
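Below is a hedged sketch of how such a grid could be constructed with torch.distributed. The helper name is hypothetical, and the assumption that Ulysses groups stay intra-node while Ring groups span nodes is one plausible layout, not necessarily the paper's exact mapping.

```python
import torch.distributed as dist

def build_2d_sp_groups(world_size=8, ring_degree=4, ulysses_degree=2):
    """Arrange world_size ranks into a ring_degree x ulysses_degree grid.
    Each rank joins one Ring (P2P) group and one Ulysses (A2A) group.
    Hypothetical helper for illustration only."""
    assert world_size == ring_degree * ulysses_degree
    ulysses_groups, ring_groups = [], []

    # Ulysses (all-to-all) groups: contiguous ranks, assumed to sit on the
    # same node so A2A traffic stays on fast intra-node links (e.g. NVLink).
    for i in range(ring_degree):
        ranks = list(range(i * ulysses_degree, (i + 1) * ulysses_degree))
        ulysses_groups.append(dist.new_group(ranks))

    # Ring (P2P) groups: strided ranks spanning nodes, where P2P transfers
    # are easier to overlap with the slower inter-node network.
    for j in range(ulysses_degree):
        ranks = list(range(j, world_size, ulysses_degree))
        ring_groups.append(dist.new_group(ranks))

    return ring_groups, ulysses_groups
```

With the defaults, ranks {0,1}, {2,3}, {4,5}, {6,7} form the Ulysses groups, while {0,2,4,6} and {1,3,5,7} form the Ring groups, so each rank participates in exactly one group of each kind.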
In addition, in Figure 5 below, to further illustrate how ZIGZAG-RINGATTN balances computation and how the 2D-Attention mechanism operates, the researchers lay out the attention computation schedule under the different methods.
Compared with HuggingFace's native pipeline-parallel strategy, this article's inference mode is more efficient because all devices participate in the computation simultaneously, accelerating the process in proportion to the number of machines, as shown in Figure 6 below. It is also scalable: memory is evenly distributed across devices, so adding machines supports longer sequences.
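Continuing the earlier hypothetical KV-cache estimate, sharding the sequence across N devices cuts per-device memory roughly by 1/N, which is why more machines can serve longer sequences:

```python
# Per-device KV-cache memory when the sequence is sharded over N devices.
# Same hypothetical 8B-class dims as before (32 layers, 8 KV heads, dim 128, fp16).
kv_bytes = 2 * 32 * 8 * 128 * 2_000_000 * 2      # a ~2M-token sequence
for n in (8, 32, 256):
    print(n, f"{kv_bytes / n / 2**30:.1f} GiB/device")
# 8 -> 30.5, 32 -> 7.6, 256 -> 1.0 GiB per device
```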
LongVILA training process

As mentioned above, the training process of LongVILA is completed in 5 stages. The main tasks of each stage are as follows:

In Stage 1, only the multi-modal mapper is trained, while all other parameters are frozen.

In Stage 2, the researchers freeze the visual encoder and train the LLM and the multi-modal mapper.

In Stage 3, the researchers fully fine-tune the model on short-data instruction-following tasks, for example using image and short-video datasets.

In Stage 4, the researchers use text-only datasets to extend the LLM's context length via continued pre-training.

In Stage 5, the researchers use supervised fine-tuning on long videos to enhance instruction-following ability. Notably, all parameters are trainable in this stage; a sketch of the full freezing schedule follows this list.
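One way to picture this schedule is as a per-stage trainability table. The sketch below is an illustration, not the repository's actual configuration: the module names are placeholders, and the exact Stage 3 and Stage 4 settings are assumptions inferred from the description above.

```python
# Which components train in each LongVILA stage (True = trainable).
# Module names are illustrative placeholders; Stage 3/4 entries are assumed.
STAGE_TRAINABLE = {
    1: {"vision_encoder": False, "projector": True,  "llm": False},  # alignment
    2: {"vision_encoder": False, "projector": True,  "llm": True},   # pre-training
    3: {"vision_encoder": True,  "projector": True,  "llm": True},   # short SFT
    4: {"vision_encoder": False, "projector": False, "llm": True},   # context extension
    5: {"vision_encoder": True,  "projector": True,  "llm": True},   # long SFT (all trainable)
}

def apply_stage(model, stage):
    """Freeze or unfreeze each component according to the stage table."""
    for name, trainable in STAGE_TRAINABLE[stage].items():
        for p in getattr(model, name).parameters():
            p.requires_grad = trainable
```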

Experimental results

The researchers evaluated the full-stack solution from two aspects: system and modeling. They first present training and inference results, illustrating the efficiency and scalability of a system capable of supporting long-context training and inference, and then evaluate the long-context model's performance on captioning and instruction-following tasks.

Training and inference system

This study provides a quantitative evaluation of the throughput of the training system, the latency of the inference system, and the maximum sequence length supported.

Table 2 shows the throughput results. Compared with ZIGZAG-RINGATTN, this system achieves a 2.1x to 5.7x speedup, with performance comparable to DeepSpeed-Ulysses. Compared with the more optimized ring sequence-parallel implementation in Megatron-LM CP, it achieves a 3.1x to 4.3x speedup.
This study evaluates the maximum sequence length supported by a fixed number of GPUs by gradually increasing the sequence length from 1k to 10k until an out-of-memory error occurs. The results are summarized in Figure 9.

When scaled to 256 GPUs, the proposed method supports roughly 8x the context length. Furthermore, the proposed system achieves context-length scaling similar to ZIGZAG-RINGATTN, supporting a context length of more than 2 million tokens on 256 GPUs.
Table 3 compares the maximum supported sequence lengths: the method proposed in this study supports sequences 2.9x longer than those supported by the HuggingFace Pipeline.
Figure 11 shows the results of the long-video needle-in-a-haystack experiment. By contrast, the LongVILA model (right) shows improved performance across different frame counts and depths.
Table 5 lists the performance of various models on the VideoMME benchmark, comparing their effectiveness on short, medium, and long videos as well as their overall performance. LongVILA-8B uses 256 frames and achieves an overall score of 50.5.
The researchers also conducted an ablation study on the effects of Stages 3 and 4, shown in Table 6.
Table 7 shows the performance metrics of LongVILA models trained and evaluated with different frame counts (8, 128, and 256). As the number of frames increases, model performance improves significantly: the average score rises from 2.00 to 3.26, highlighting the model's ability to generate accurate and rich captions with more frames.
