
Adding fast and slow "eyes" to video models: Apple's new training-free method beats SOTA across the board

WBOY · Original · 2024-08-11 16:02:31
Since the release of Sora, the field of AI video generation has only gotten busier. Over the past few months we have watched Jimeng, Runway Gen-3, Luma AI, and Kuaishou Keling make a splash one after another.

Unlike earlier models, whose output could be spotted as AI-generated at a glance, this batch of large video models may be the best we have ever seen.

Behind the impressive performance of video large language models (LLMs), however, are huge, finely annotated video datasets that are very expensive to build. Recently, a number of innovative methods that require no additional training have emerged: they use pretrained image LLMs to handle video tasks directly, bypassing the costly training process.

In addition, most existing video LLMs suffer from two major drawbacks: (1) they can only handle video input with a limited number of frames, which makes it hard for the model to capture fine-grained spatial and temporal content; (2) they lack a temporal modeling design and simply feed video features into the LLM, relying entirely on the LLM's ability to model motion.

To address these problems, Apple researchers proposed SlowFast-LLaVA (SF-LLaVA for short). The model is built on the LLaVA-NeXT architecture developed by the Byte team, requires no additional fine-tuning, and works out of the box. Inspired by the two-stream networks that have proved successful in action recognition, the team designed a novel SlowFast input mechanism for video LLMs.

Simply put, SF-LLaVA understands the details and motion in a video by observing it at two different speeds (Slow and Fast).

  • Slow path: extracts features at a low frame rate while retaining as much spatial detail as possible (e.g., keeping 24×24 tokens from every 8th frame)
  • Fast path: runs at a high frame rate but uses a larger spatial pooling stride to reduce each frame's resolution, simulating a larger temporal context and focusing on the coherence of actions

The model in effect has two "eyes": one looks slowly and attends to detail; the other looks quickly and attends to motion. This resolves the pain points of most existing video LLMs, capturing detailed spatial semantics and a longer temporal context at the same time.
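To make the token budget behind this design concrete, here is a rough back-of-the-envelope sketch in Python. The 24×24 tokens-per-frame figure and the one-in-eight slow sampling come from the example above; the total frame count and the 4×4 fast-path pooling stride are illustrative assumptions, not the paper's exact settings.

```python
# Rough token-budget sketch for the two pathways (illustrative numbers only).
n_frames = 64                      # assumed number of sampled frames N
tokens_per_frame = 24 * 24         # 24x24 tokens per frame, as in the example above

# Slow path: one frame in every eight, full spatial detail.
slow_frames = n_frames // 8
slow_tokens = slow_frames * tokens_per_frame          # 8 * 576 = 4608

# Fast path: every frame, but pooled with an (assumed) 4x4 stride.
fast_tokens = n_frames * (24 // 4) * (24 // 4)        # 64 * 36 = 2304

total = slow_tokens + fast_tokens
print(total, "video tokens instead of", n_frames * tokens_per_frame)  # 6912 vs 36864
```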


Paper link: https://arxiv.org/pdf/2407.15841

Experimental results show that SF-LLaVA outperforms existing training-free methods by a significant margin on all benchmarks. Compared with carefully fine-tuned SFT models, SF-LLaVA achieves comparable or even better performance.


Model architecture

As shown in the figure below, SF-LLaVA follows the standard training-free video LLM process. It takes a video V and a question Q as input and outputs the corresponding answer A.

[Figure: the SF-LLaVA pipeline]

For the input, N frames are uniformly sampled from each video, regardless of its size or length, giving I = {I_1, I_2, ..., I_N}; no special combination or ordering of the selected frames is needed. Video features are then extracted independently, frame by frame, as F_v ∈ R^(N×H×W), where H and W are the height and width of each frame's feature map.
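A minimal sketch of this input step, using a dummy video tensor and a stand-in image encoder (the frame count N = 50, the 336×336 frame size, and the 1024-dimensional 24×24 token grid are illustrative assumptions, not the paper's settings):

```python
import torch

def uniform_sample(num_total_frames: int, n: int) -> list[int]:
    """Indices of n frames spread evenly over the whole video."""
    step = num_total_frames / n
    return [int(step * i + step / 2) for i in range(n)]

# Dummy stand-ins: a "video" of 64 frames and a fake encoder that maps each
# frame to a 24x24 grid of 1024-dimensional tokens.
video = torch.randn(64, 3, 336, 336)
def encode_frame(frame: torch.Tensor) -> torch.Tensor:
    return torch.randn(24, 24, 1024)

idx = uniform_sample(video.shape[0], n=50)                 # N = 50 is illustrative
F_v = torch.stack([encode_frame(video[i]) for i in idx])   # (N, H, W, D)
print(F_v.shape)                                           # torch.Size([50, 24, 24, 1024])
```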

Next, F_v is processed further along two paths, slow and fast, and the two are combined into an effective video representation. The slow path uniformly samples N_slow frame features from F_v, giving F_v^slow ∈ R^(N_slow×H×W), where N_slow ≤ N.

Prior work has found that appropriate pooling along the spatial dimension improves both efficiency and robustness. The team therefore applies pooling with stride σ_h×σ_w to the sampled slow-path features, obtaining the final slow feature F^slow ∈ R^(N_slow×H'×W'), where H' = H/σ_h and W' = W/σ_w. The whole slow path corresponds to Equation 2 in the paper.
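A hedged sketch of the slow path under these definitions (N_slow = 10 and the 1×1 pooling stride are illustrative choices, not the paper's exact values):

```python
import torch
import torch.nn.functional as F

F_v = torch.randn(50, 24, 24, 1024)            # (N, H, W, D), as in the sketch above
N = F_v.shape[0]
N_slow, sigma_h, sigma_w = 10, 1, 1            # illustrative settings

# Uniformly sample N_slow of the N frames.
idx = torch.linspace(0, N - 1, N_slow).long()
slow = F_v[idx]                                 # (N_slow, H, W, D)

# Spatial pooling with stride (sigma_h, sigma_w); avg_pool2d wants (B, C, H, W).
slow = F.avg_pool2d(slow.permute(0, 3, 1, 2), kernel_size=(sigma_h, sigma_w))
slow = slow.permute(0, 2, 3, 1)                 # (N_slow, H/sigma_h, W/sigma_w, D)
print(slow.shape)
```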


The fast path keeps all of the frame features in F_v so as to capture as much of the video's long-range temporal context as possible. Concretely, the team aggressively downsamples F_v with a spatial pooling stride σ'_h×σ'_w, obtaining the final fast feature F^fast ∈ R^(N×H''×W''). These strides are set much larger than those of the slow path, so the fast path focuses on modeling temporal context and motion cues. The whole fast path corresponds to Equation 3 in the paper.
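And a matching sketch of the fast path; the 4×4 pooling stride is an assumption chosen only to be much larger than the slow-path stride above:

```python
import torch
import torch.nn.functional as F

F_v = torch.randn(50, 24, 24, 1024)             # (N, H, W, D)
stride = (4, 4)                                  # assumed sigma'_h x sigma'_w

# Keep every frame, but aggressively pool each frame's token grid.
fast = F.avg_pool2d(F_v.permute(0, 3, 1, 2), kernel_size=stride)
fast = fast.permute(0, 2, 3, 1)                  # (N, H'', W'', D)
print(fast.shape)                                # torch.Size([50, 6, 6, 1024])
```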


Finally, the aggregated video feature is obtained as F_aggr = [flat(F^slow), flat(F^fast)], where flat and [ , ] denote the flatten and concatenation operations, respectively. As the expression shows, F_aggr needs no special tokens to separate the slow and fast paths; in total SF-LLaVA uses N_slow×H'×W' + N×H''×W'' video tokens. The visual features F_aggr are then combined with the text information (for example, the user's question) and fed into the large language model (LLM) as input.
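A sketch of this aggregation step, reusing the illustrative shapes from the previous snippets:

```python
import torch

slow = torch.randn(10, 24, 24, 1024)    # (N_slow, H', W', D) from the slow path
fast = torch.randn(50, 6, 6, 1024)      # (N, H'', W'', D) from the fast path

def flat(x: torch.Tensor) -> torch.Tensor:
    return x.reshape(-1, x.shape[-1])    # (num_tokens, D)

# Concatenate the flattened pathways; no separator tokens are inserted.
F_aggr = torch.cat([flat(slow), flat(fast)], dim=0)
print(F_aggr.shape)                      # 10*576 + 50*36 = 7560 video tokens
# F_aggr is then placed alongside the tokenized question and sent to the LLM.
```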

The overall SlowFast process corresponds to Equation 4 in the paper.


Experimental Results

The research team conducted a comprehensive evaluation of SF-LLaVA, comparing it with current SOTA training-free models (such as IG-VLM and LLoVi) on multiple video question-answering tasks. They also compared it with video LLMs that were supervised fine-tuned (SFT) on video datasets, such as VideoLLaVA and PLLaVA.

Open-Ended Video Question Answering

As the table below shows, SF-LLaVA outperforms all existing training-free methods on every open-ended video QA benchmark. Specifically, with 7B and 34B LLMs respectively, SF-LLaVA beats IG-VLM by 2.1% and 5.0% on MSRVTT-QA, by 5.7% and 1.5% on TGIF-QA, and by 2.0% and 0.8% on ActivityNet-QA.

Even compared with fine-tuned SFT methods, SF-LLaVA delivers comparable performance on most benchmarks; only on ActivityNet-QA do PLLaVA and LLaVA-NeXT-VideoDPO come out slightly ahead.

[Table: results on open-ended video question answering benchmarks]

Multiple-Choice Video Question Answering

As the table below shows, SF-LLaVA outperforms the other training-free methods on all multiple-choice video QA benchmarks. On EgoSchema, a dataset that requires complex long-range temporal reasoning, the 7B and 34B versions of SF-LLaVA score 11.4% and 2.2% higher than IG-VLM, respectively.

VideoTree does lead on these benchmarks, but it is built on GPT-4, a proprietary model whose performance is far above that of open-source LLMs. Compared with SFT methods, the SF-LLaVA-34B model also achieves better results on EgoSchema, confirming the strength of the SlowFast design in handling long videos.

[Table: results on multiple-choice video question answering benchmarks]

Text Generation

As shown in Table 3, SF-LLaVA also shows advantages on the text generation benchmarks. SF-LLaVA-34B surpasses all training-free baselines in overall performance, although it trails LLaVA-NeXT-Image slightly on detail orientation. Thanks to the SlowFast design, SF-LLaVA can cover a longer temporal context with fewer visual tokens, so it does particularly well on temporal understanding.

In addition, SF-LLaVA-34B also outperforms most SFT methods on text generation.

[Table 3: results on text generation benchmarks]

For more details, please refer to the original paper.

