In addition to CNN, Transformer, and Uniformer, we finally have more efficient video understanding technology

The core goal of video understanding is to learn accurate spatiotemporal representations, but it faces two main challenges: the large amount of spatiotemporal redundancy in short video clips, and complex spatiotemporal dependencies. Three-dimensional convolutional neural networks (CNNs) and video transformers each handle one of these challenges well, but fall short when both must be addressed at once. UniFormer attempts to combine the advantages of both approaches, but runs into difficulty when modeling long videos.

The emergence of low-cost alternatives such as S4, RWKV, and RetNet in natural language processing has opened up new avenues for vision models. Mamba stands out with its selective state space model (SSM), which maintains linear complexity while still enabling long-range dynamic modeling. This innovation has driven its adoption in vision tasks, as demonstrated by Vision Mamba and VMamba, which exploit multi-directional SSMs to enhance 2D image processing. These models match attention-based architectures in performance while significantly reducing memory usage.
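
As a point of reference for why linear complexity matters here, the following is a minimal sketch of a plain (non-selective) state-space recurrence in PyTorch: each token costs a constant amount of work, so the total cost grows linearly with sequence length, unlike quadratic self-attention. The shapes and parameter names are illustrative only; Mamba's actual selective SSM makes its parameters input-dependent and uses a hardware-aware parallel scan rather than a Python loop.

```python
import torch

def ssm_scan(x, A, B, C):
    """Plain (non-selective) state-space recurrence, one step per token.

    x: (L, D) token sequence, A: (N,) state decay, B: (N, D), C: (D, N).
    Each of the L tokens costs a constant amount of work, so the total
    cost grows linearly with sequence length, unlike quadratic attention.
    """
    L = x.shape[0]
    h = torch.zeros(A.shape[0])        # hidden state carried along the sequence
    ys = []
    for t in range(L):
        h = A * h + B @ x[t]           # state update: h_t = A * h_{t-1} + B x_t
        ys.append(C @ h)               # readout:      y_t = C h_t
    return torch.stack(ys)             # (L, D)

# toy usage: 1,000 tokens of dimension 64 with a 16-dimensional state
x = torch.randn(1000, 64)
y = ssm_scan(x, torch.rand(16) * 0.9, torch.randn(16, 64) * 0.1, torch.randn(64, 16) * 0.1)
print(y.shape)  # torch.Size([1000, 64])
```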

Given that the sequences produced by videos are inherently longer, a natural question is: does Mamba work well for video understanding?

Inspired by Mamba, this article introduces VideoMamba, an SSM (selective state space model) tailored specifically for video understanding. VideoMamba follows the design philosophy of vanilla ViT while combining the advantages of convolution and attention. It offers a linear-complexity approach to dynamic spatiotemporal context modeling, making it especially suitable for high-resolution long videos. The evaluation focuses on four key capabilities of VideoMamba:

Scalability in the visual domain: This article evaluates the scalability of VideoMamba and finds that the pure Mamba model tends to overfit as it scales up. A simple yet effective self-distillation strategy is therefore introduced, so that as the model and input sizes grow, VideoMamba achieves significant performance gains without requiring large-scale dataset pre-training.

Sensitivity to short-term action recognition: The analysis is extended to evaluate VideoMamba's ability to accurately distinguish short-term actions, especially those with subtle motion differences, such as opening versus closing. The results show that VideoMamba outperforms existing attention-based models. More importantly, it is also well suited to masked modeling, which further enhances its temporal sensitivity.

Superiority in long video understanding: This article evaluates VideoMamba’s ability to interpret long videos. With end-to-end training, it demonstrates significant advantages over traditional feature-based methods. Notably, VideoMamba runs 6x faster than TimeSformer on 64-frame video and requires 40x less GPU memory (shown in Figure 1).

Compatibility with other modalities: Finally, this article evaluates how well VideoMamba adapts to other modalities. Results on video-text retrieval show improved performance over ViT, especially on long videos with complex scenarios. This highlights its robustness and multimodal integration capabilities.

In-depth experiments in this study reveal VideoMamba’s great potential for short-term (K400 and SthSthV2) and long-term (Breakfast, COIN and LVU) video content understanding. VideoMamba demonstrates high efficiency and accuracy, indicating that it will become a key component in the field of long video understanding. To facilitate future research, all code and models have been made open source.

  • Paper address: https://arxiv.org/pdf/2403.06977.pdf
  • Project address: https://github.com/OpenGVLab/VideoMamba
  • Paper title: VideoMamba: State Space Model for Efficient Video Understanding

Method overview

Figure 2a below shows the details of the Mamba module.

Figure 3 illustrates the overall framework of VideoMamba. The input video Xv ∈ R^(3×T×H×W) is first projected by a 3D convolution (kernel size 1×16×16) into L non-overlapping spatiotemporal patches Xp ∈ R^(L×C), where L = t×h×w (t = T, h = H/16, w = W/16). The resulting token sequence, prepended with a learnable classification token and augmented with spatial and temporal position embeddings, is then fed into the VideoMamba encoder.
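
A minimal sketch of this patch-embedding step, assuming a PyTorch implementation, is shown below; the class name, default embedding dimension, and usage are placeholders rather than the released code.

```python
import torch
import torch.nn as nn

class VideoPatchEmbed(nn.Module):
    """Project a video (B, 3, T, H, W) into L = t*h*w spatio-temporal tokens
    with a 3D convolution whose kernel and stride are 1x16x16, as described
    above. The class name and embedding dimension are placeholders."""

    def __init__(self, in_chans=3, embed_dim=192, patch_size=16):
        super().__init__()
        self.proj = nn.Conv3d(
            in_chans, embed_dim,
            kernel_size=(1, patch_size, patch_size),
            stride=(1, patch_size, patch_size),
        )

    def forward(self, x):                    # x: (B, 3, T, H, W)
        x = self.proj(x)                     # (B, C, t, h, w), t=T, h=H/16, w=W/16
        return x.flatten(2).transpose(1, 2)  # (B, L, C), tokens in (t, h, w) order

tokens = VideoPatchEmbed()(torch.randn(2, 3, 8, 224, 224))
print(tokens.shape)  # torch.Size([2, 1568, 192]), since 8 * 14 * 14 = 1568
```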

Spatiotemporal scanning: To apply the B-Mamba layer to spatiotemporal input, Figure 4 extends the original 2D scan into several bidirectional 3D scans:

(a) Spatial-first: spatial tokens are organized by position within each frame and then stacked frame by frame;

(b) Temporal-first: tokens at each spatial position are arranged across frames and then stacked along the spatial dimension;

(c) Spatiotemporal-hybrid: combines the spatial-first and temporal-first scans, where v1 performs half of each and v2 performs all of them (doubling the computation).

The experiments in Figure 7a show that the spatial-first bidirectional scan is the simplest yet most effective. Thanks to Mamba's linear complexity, VideoMamba can efficiently process long, high-resolution videos.
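
The difference between the scan orders boils down to how the (t, h, w) token grid is flattened into a 1D sequence before the bidirectional SSM runs over it. The sketch below illustrates the two basic orders; the function names and shapes are assumptions, not the paper's code.

```python
import torch

def spatial_first(tokens):
    """(a) Spatial-first: flatten the h x w positions of frame 0, then frame 1, ..."""
    t, h, w, C = tokens.shape
    return tokens.reshape(t * h * w, C)

def temporal_first(tokens):
    """(b) Temporal-first: for each spatial position, list its tokens across
    all frames, then move to the next position."""
    t, h, w, C = tokens.shape
    return tokens.permute(1, 2, 0, 3).reshape(h * w * t, C)

grid = torch.randn(8, 14, 14, 192)   # t=8 frames of 14x14 patch tokens
seq_sf = spatial_first(grid)         # frame-by-frame ordering
seq_tf = temporal_first(grid)        # position-by-position ordering
print(seq_sf.shape, seq_tf.shape)    # both torch.Size([1568, 192])
# A bidirectional scan would additionally process each sequence in reverse,
# e.g. torch.flip(seq_sf, dims=[0]), and merge the two directions' outputs.
```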

For the SSM in the B-Mamba layer, this article adopts the same default hyperparameters as Mamba, setting the state dimension to 16 and the expansion ratio to 2. Following ViT, the depth and embedding dimension are adjusted to create models of comparable size to those in Table 1, namely VideoMamba-Ti, VideoMamba-S, and VideoMamba-M. However, experiments show that larger VideoMamba models tend to overfit, leading to the suboptimal results in Figure 6a. This overfitting problem is not unique to the model proposed here; it also appears in VMamba, where VMamba-B reaches its best performance at three-quarters of the total training schedule. To counter the overfitting of larger Mamba models, this paper introduces an effective self-distillation strategy that uses a smaller, well-trained model as the "teacher" to guide the training of the larger "student" model. As shown in Figure 6a, this strategy leads to the expected better convergence.
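
The self-distillation idea can be pictured as a training step in which the smaller, well-trained model serves as a frozen teacher whose outputs regularize the larger student. The following is a hedged sketch of that idea; the loss formulation, the MSE matching, and the 0.5 weight are assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, videos, labels, optimizer, alpha=0.5):
    """One training step of the self-distillation idea: a smaller, already
    trained model (the frozen teacher) guides the larger student alongside
    the usual supervised loss. The loss mix and weight are assumptions."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(videos)             # guidance from the small model
    student_logits = student(videos)
    ce = F.cross_entropy(student_logits, labels)     # standard supervised loss
    kd = F.mse_loss(student_logits, teacher_logits)  # match the teacher's outputs
    loss = ce + alpha * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```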

Regarding the masking strategy, this article proposes different row masking techniques, shown in Figure 5, tailored to the B-Mamba block's preference for consecutive tokens.
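
One way to read row masking is that whole rows of the h×w patch grid are dropped per frame rather than scattered individual patches, so the visible tokens remain consecutive, which suits the B-Mamba block. The sketch below illustrates this interpretation; the mask ratio and the per-frame random row choice are assumptions.

```python
import torch

def random_row_mask(t, h, w, mask_ratio=0.5):
    """Mask whole rows of the h x w patch grid in each frame (True = masked),
    so the visible tokens stay contiguous along the spatial-first scan."""
    keep_rows = int(round(h * (1 - mask_ratio)))
    mask = torch.ones(t, h, w, dtype=torch.bool)
    for frame in range(t):
        kept = torch.randperm(h)[:keep_rows]      # rows kept visible in this frame
        mask[frame, kept, :] = False
    return mask.flatten()                          # (t*h*w,) token-level mask

mask = random_row_mask(t=8, h=14, w=14, mask_ratio=0.5)
print(mask.shape, mask.float().mean().item())      # 1568 tokens, ~50% masked
```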

Experiments

Table 2 shows the results on the ImageNet-1K dataset. Notably, VideoMamba-M significantly outperforms other isotropic architectures, improving over ConvNeXt-B by 0.8% and over DeiT-B by 2.0% while using fewer parameters. VideoMamba-M also compares well against non-isotropic backbones that exploit hierarchical features for stronger performance. Given Mamba's efficiency in processing long sequences, this paper further improves performance by increasing the resolution, reaching 84.0% top-1 accuracy with only 74M parameters.

Table 3 and Table 4 list the results on short-term video datasets. (a) Supervised learning: compared with pure attention methods, the SSM-based VideoMamba-M gains a clear advantage, outperforming ViViT-L by 2.0% on the scene-related K400 dataset and by 3.0% on the temporally demanding Sth-SthV2 dataset. This improvement comes with significantly reduced computational requirements and less pre-training data. VideoMamba-M's results are on par with SOTA UniFormer, which cleverly integrates convolution and attention in a non-isotropic architecture. (b) Self-supervised learning: with masked pre-training, VideoMamba outperforms VideoMAE, which is known for its fine-grained action modeling. This highlights the potential of the pure SSM-based model to understand short-term videos efficiently and effectively, underscoring its suitability for both supervised and self-supervised learning paradigms.

As shown in Figure 1, VideoMamba's linear complexity makes it well suited to end-to-end training on long videos. The comparisons in Tables 6 and 7 highlight VideoMamba's simplicity and effectiveness over traditional feature-based methods on these tasks: it brings significant performance improvements and reaches SOTA results even at smaller model sizes. VideoMamba-Ti shows a notable 6.1% improvement over ViS4mer using Swin-B features, and a 3.0% improvement over Turbo's multimodal alignment method. Notably, the results highlight the positive impact of scaling the model size and the number of frames for long-term tasks. On the nine diverse and challenging tasks of LVU, this paper fine-tunes VideoMamba-Ti end to end and achieves results comparable or superior to current SOTA methods. These results not only highlight the effectiveness of VideoMamba, but also demonstrate its great potential for future long video understanding.

As shown in Table 8, with the same pre-training corpus and a similar training strategy, VideoMamba outperforms the ViT-based UMT in zero-shot video retrieval. This highlights Mamba's efficiency and scalability relative to ViT on multimodal video tasks. Notably, VideoMamba shows significant improvements on datasets with longer videos (e.g., ANet and DiDeMo) and more complex scenarios (e.g., LSMDC). This demonstrates Mamba's capabilities in challenging multimodal environments, even when cross-modal alignment is required.

For more research details, please refer to the original paper.
