Add fast and slow eyes to the video model: Apple's new training-free method outperforms all SOTA

Since the release of Sora, the field of AI video generation has become much livelier. In the past few months we have watched Jimeng, Runway Gen-3, Luma AI, and Kuaishou Keling take the spotlight in turn.

Unlike earlier models, whose outputs could be identified as AI-generated at a glance, this batch of large video models may be the best we have ever seen.

However, behind the impressive performance of video large language models (LLMs) lie huge, finely annotated video datasets that are extremely costly to build. Recently, a number of innovative training-free methods have emerged: they use pretrained image LLMs to handle video tasks directly, bypassing the expensive training process.

In addition, most existing video LLMs suffer from two major shortcomings: (1) they can only handle video input with a limited number of frames, which makes it difficult to capture the fine-grained spatial and temporal content of the video; and (2) they lack a temporal modeling design, simply feeding video features into the LLM and relying entirely on the LLM's ability to model motion.

In response to these problems, Apple researchers proposed SlowFast-LLaVA (SF-LLaVA for short). The model is built on the LLaVA-NeXT architecture developed by the ByteDance team, requires no additional fine-tuning, and works out of the box. Inspired by the success of two-stream networks in action recognition, the team designed a novel SlowFast input mechanism for video LLMs.

Simply put, SF-LLaVA observes a video at two different speeds (Slow and Fast) to understand both the details and the motion in it.

  • Slow pathway: extracts features at a low frame rate while retaining as much spatial detail as possible (e.g., keeping 24×24 tokens for each of 8 sampled frames)
  • Fast pathway: runs at a high frame rate but uses a larger spatial pooling stride to lower the resolution, simulating a larger temporal context and focusing on the coherence of actions

This is equivalent to giving the model two "eyes": one looks slowly and attends to details; the other looks quickly and attends to motion. This addresses the pain points of most existing video LLMs, capturing detailed spatial semantics and a longer temporal context at the same time.
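To make the trade-off concrete, here is a minimal back-of-the-envelope sketch of the token budget of the two pathways. The slow-path numbers (8 frames at 24×24 tokens) come from the example above; the fast-path frame count and the 4×4 pooling stride are illustrative assumptions, not the paper's exact settings.

```python
# Token budget of the two pathways.
# Slow-path numbers follow the example above; fast-path values are assumed.
H = W = 24            # spatial tokens per frame from the image encoder
slow_frames = 8       # slow path: few frames at full resolution
fast_frames = 64      # fast path: many frames (hypothetical count)
fast_pool = 4         # fast path: aggressive spatial pooling stride (assumed)

slow_tokens = slow_frames * H * W                                # 8 * 576 = 4608
fast_tokens = fast_frames * (H // fast_pool) * (W // fast_pool)  # 64 * 36 = 2304
print(slow_tokens, fast_tokens, slow_tokens + fast_tokens)       # 4608 2304 6912
```

The point of the split is that the fast path can afford many more frames than the slow path while contributing fewer tokens overall.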


Paper link: https://arxiv.org/pdf/2407.15841

Experimental results show that SF-LLaVA surpasses existing training-free methods by a significant margin on all benchmarks, and matches or even beats carefully fine-tuned SFT models.


Model architecture

As shown in the figure below, SF-LLaVA follows the standard training-free video LLM pipeline. It takes a video V and a question Q as input and outputs the corresponding answer A.

[Figure: overview of the SF-LLaVA pipeline]

For the input, N frames are sampled uniformly from each video, regardless of its size and length: I = {I_1, I_2, ..., I_N}. No special combination or ordering of the selected frames is needed. Video features are then extracted independently, frame by frame, as F_v ∈ R^{N×H×W}, where H and W are the height and width of the frame features.
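As a rough illustration of this sampling step, the sketch below picks N uniformly spaced frame indices from a video of arbitrary length. The function name and the use of NumPy are assumptions for illustration, not code from the paper.

```python
import numpy as np

def uniform_sample_indices(num_video_frames: int, n: int) -> np.ndarray:
    # Pick n frame indices spread evenly across the whole video.
    return np.linspace(0, num_video_frames - 1, n).round().astype(int)

print(uniform_sample_indices(1000, 8))  # [  0 143 285 428 571 714 856 999]
```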

Next, F_v is processed further along the slow and fast pathways, and the two results are combined into an effective video representation. The slow pathway uniformly samples N^{slow} frame features from F_v, where N^{slow} ≤ N.

Prior work has found that appropriate pooling over the spatial dimension improves efficiency and robustness. The team therefore applies pooling with stride σ_h×σ_w to the sampled features, obtaining the final feature F^{slow} ∈ R^{N^{slow}×H^{s}×W^{s}}, where H^{s} = H/σ_h and W^{s} = W/σ_w. The whole slow pathway is given in Equation 2.

F^{slow} = Pool_{σ_h×σ_w}(Sample(F_v, N^{slow}))    (2)
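A minimal PyTorch sketch of this slow pathway is given below: uniform temporal sampling followed by spatial pooling. The choice of average pooling and the helper names are assumptions; the paper's description only fixes the sampling and the σ_h×σ_w stride.

```python
import torch
import torch.nn.functional as F

def slow_path(f_v: torch.Tensor, n_slow: int, stride=(1, 1)) -> torch.Tensor:
    # f_v: (N, C, H, W) per-frame visual features.
    # Uniformly sample n_slow frames, then pool spatially with the given stride.
    idx = torch.linspace(0, f_v.shape[0] - 1, n_slow).round().long()
    return F.avg_pool2d(f_v[idx], kernel_size=stride, stride=stride)
```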

The fast pathway keeps all the frame features in F_v so as to capture as much of the video's long-range temporal context as possible. Specifically, the team aggressively downsamples F_v with a spatial pooling stride σ'_h×σ'_w, obtaining the final feature F^{fast} ∈ R^{N×H^{f}×W^{f}}, where H^{f} = H/σ'_h and W^{f} = W/σ'_w. The fast-pathway stride is set much larger than the slow-pathway stride, so that this path focuses on modeling temporal context and motion cues. The whole fast pathway is given in Equation 3.

F^{fast} = Pool_{σ'_h×σ'_w}(F_v)    (3)
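Reusing the imports and conventions of the slow-path sketch above, the fast pathway reduces to a single pooling call over all N frames; the 4×4 default stride is an assumed value for illustration.

```python
def fast_path(f_v: torch.Tensor, stride=(4, 4)) -> torch.Tensor:
    # Keep all N frames; aggressively downsample each frame's spatial grid
    # so this path carries temporal context rather than spatial detail.
    return F.avg_pool2d(f_v, kernel_size=stride, stride=stride)
```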

Finally, the aggregated video feature is obtained as F^{aggr} = [flat(F^{slow}), flat(F^{fast})], where flat and [,] denote the flattening and concatenation operations, respectively. As the expression shows, F^{aggr} needs no special tokens to separate the slow and fast pathways. In total, SF-LLaVA uses N^{slow}×H^{s}×W^{s} + N×H^{f}×W^{f} video tokens. The visual feature F^{aggr} is then combined with the text input (e.g., the user's question) and fed into the LLM.

The full SlowFast pipeline is given in Equation 4.

F^{aggr} = [flat(F^{slow}), flat(F^{fast})]    (4)
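Putting Equations 2-4 together, a self-contained sketch of the whole SlowFast front end might look as follows. All concrete values (8 slow frames, 1×1 slow stride, 4×4 fast stride, 64 input frames, 1024 channels) are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def slowfast_tokens(f_v, n_slow=8, slow_stride=(1, 1), fast_stride=(4, 4)):
    # f_v: (N, C, H, W) per-frame features from the image encoder.
    idx = torch.linspace(0, f_v.shape[0] - 1, n_slow).round().long()
    f_slow = F.avg_pool2d(f_v[idx], slow_stride, slow_stride)      # Eq. (2)
    f_fast = F.avg_pool2d(f_v, fast_stride, fast_stride)           # Eq. (3)

    def flat(t):  # (n, C, h, w) -> (n*h*w, C): one token per spatial position
        return t.flatten(2).transpose(1, 2).reshape(-1, t.shape[1])

    return torch.cat([flat(f_slow), flat(f_fast)], dim=0)          # Eq. (4)

tokens = slowfast_tokens(torch.randn(64, 1024, 24, 24))
print(tokens.shape)  # torch.Size([6912, 1024]): 8*24*24 + 64*6*6 tokens
```

The resulting token sequence would then be concatenated with the tokenized question and passed to the LLM, as described above.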

Experimental results

The team conducted a comprehensive evaluation of SF-LLaVA, comparing it with current SOTA training-free methods (such as IG-VLM and LLoVi) on multiple video question-answering tasks. They also compared it with video LLMs that were supervised fine-tuned (SFT) on video datasets, such as VideoLLaVA and PLLaVA.

Open-ended video question answering

As the table below shows, SF-LLaVA outperforms all existing training-free methods on every open-ended video QA benchmark. Specifically, with LLMs of 7B and 34B parameters, SF-LLaVA beats IG-VLM by 2.1% and 5.0% on MSRVTT-QA, by 5.7% and 1.5% on TGIF-QA, and by 2.0% and 0.8% on ActivityNet-QA, respectively.

Even against fine-tuned SFT methods, SF-LLaVA delivers comparable performance on most benchmarks; only on ActivityNet-QA do PLLaVA and LLaVA-NeXT-VideoDPO come out slightly ahead.

[Table: results on open-ended video QA benchmarks]

Multiple-choice video question answering

As the table below shows, SF-LLaVA outperforms the other training-free methods on all multiple-choice video QA benchmarks. On EgoSchema, a dataset that demands complex long-range temporal reasoning, the 7B and 34B versions of SF-LLaVA score 11.4% and 2.2% higher than IG-VLM, respectively.

VideoTree still leads on this benchmark, but it is built on GPT-4, a proprietary model whose performance far exceeds that of open-source LLMs. Compared with SFT methods, SF-LLaVA-34B also achieves better results on EgoSchema, confirming the strength of the SlowFast design on long videos.

[Table: results on multiple-choice video QA benchmarks]

Text generation

As Table 3 shows, SF-LLaVA also has advantages on video-based text generation. SF-LLaVA-34B surpasses all training-free baselines in overall performance, although it trails LLaVA-NeXT-Image slightly on detail orientation. Thanks to the SlowFast design, SF-LLaVA covers a longer temporal context with fewer visual tokens and therefore does particularly well on temporal understanding.

In addition, SF-LLaVA-34B also outperforms most SFT methods on text generation.

[Table 3: results on video-based text generation benchmarks]

For more details, please refer to the original paper.
