
Video can be reconstructed in 14 seconds and characters can be changed. Meta speeds up video synthesis by 44 times.

王林 (forward) · 2023-12-27 18:35:16

Meta’s new video synthesis framework has brought us some surprises


At today's level of artificial intelligence development, tasks such as text-to-image generation, image-to-video generation, and image/video style transfer are no longer difficult.

Generative AI is gifted with the ability to effortlessly create or modify content. Image editing, in particular, has experienced significant growth, driven by text-to-image diffusion models pre-trained on billion-scale datasets. This wave has spawned a plethora of image editing and content creation apps.

Building on the achievements of image-based generative models, the next challenge is to add a "time dimension" to them, enabling effortless and creative video editing.

A straightforward strategy is to apply an image model to the video frame by frame. However, generative image editing is inherently highly variable: even with the same text prompt, there are countless ways to edit a given image. If each frame is edited independently, it is difficult to maintain temporal consistency.

In a recent paper, researchers from Meta's GenAI team proposed Fairy, a "simple adaptation" of image editing diffusion models that greatly improves the performance of AI in video editing.

The following shows Fairy's video editing results:

[Demo videos: Fairy's editing results]

Fairy generates a 120-frame 512×384 video (4 seconds at 30 FPS) in just 14 seconds, at least 44 times faster than previous methods. A comprehensive user study involving 1,000 generated samples confirmed that the proposed method produces high-quality results and significantly outperforms existing methods.

How does it work?

According to the paper, Fairy is built on the concept of anchor-based cross-frame attention. This mechanism implicitly propagates diffusion features across frames, ensuring temporal consistency and high-fidelity synthesis. Fairy not only overcomes the memory and processing-speed limitations of previous models, but also improves temporal consistency through a unique data augmentation strategy that makes the model equivariant to affine transformations of the source and target images.
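To make the augmentation idea concrete, here is a minimal sketch, assuming a PyTorch/torchvision setup, of applying one shared random affine transform to both images of a source–target editing pair. The parameter ranges and function names are illustrative assumptions, not the authors' actual training code.

```python
# Illustrative sketch (not the authors' code): apply one shared random affine
# transform to both the source and target image of an editing training pair,
# so the model sees consistently warped pairs and learns to behave
# equivariantly under such transforms.
import random
import torchvision.transforms.functional as TF

def shared_affine_augment(source, target):
    """source/target: (C, H, W) tensors forming one editing training pair."""
    angle = random.uniform(-15, 15)           # rotation in degrees (assumed range)
    translate = [random.randint(-20, 20),     # pixel shifts (assumed range)
                 random.randint(-20, 20)]
    scale = random.uniform(0.9, 1.1)
    shear = random.uniform(-5, 5)

    # The key point: identical parameters are applied to both images.
    source_aug = TF.affine(source, angle=angle, translate=translate,
                           scale=scale, shear=shear)
    target_aug = TF.affine(target, angle=angle, translate=translate,
                           scale=scale, shear=shear)
    return source_aug, target_aug
```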


  • Paper address: https://arxiv.org/pdf/2312.13834.pdf
  • Project homepage: https://fairy-video2video.github.io/

Method

Fairy revisits the earlier tracking-and-propagation paradigm in the context of diffusion model features. In particular, the study bridges cross-frame attention with correspondence estimation, allowing the model to track and propagate intermediate features within the diffusion model.

The cross-frame attention map can be interpreted as a similarity measure that evaluates the correspondence between tokens across frames: features of a semantic region are assigned higher attention to similar semantic regions in other frames, as shown in Figure 3 below.

The current frame's feature representation is therefore refined and propagated as an attention-weighted sum over similar regions in other frames, effectively minimizing feature discrepancies between frames.

[Figure 3]
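As a rough illustration of this mechanism, the sketch below (with assumed tensor shapes and the learned Q/K/V projections omitted, not the paper's implementation) shows cross-frame attention in which queries come from the current frame while keys and values come from anchor-frame features, so each token receives a softmax-weighted sum of semantically similar anchor tokens.

```python
import torch

def cross_frame_attention(q_frame, kv_anchors, num_heads=8):
    """
    q_frame:    (N, D) diffusion-feature tokens of the frame being edited.
    kv_anchors: (M, D) cached tokens of the anchor frames, concatenated.
    Learned query/key/value projections are omitted for brevity.
    """
    N, D = q_frame.shape
    M = kv_anchors.shape[0]
    d = D // num_heads

    q = q_frame.view(N, num_heads, d).transpose(0, 1)      # (H, N, d)
    k = kv_anchors.view(M, num_heads, d).transpose(0, 1)   # (H, M, d)
    v = kv_anchors.view(M, num_heads, d).transpose(0, 1)   # (H, M, d)

    # The softmax map acts as a soft correspondence between current-frame
    # tokens and anchor tokens; the output is a weighted sum of anchor features.
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)  # (H, N, M)
    out = attn @ v                                           # (H, N, d)
    return out.transpose(0, 1).reshape(N, D)
```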

Together, these operations yield an anchor-based model, the core component of Fairy.

To ensure the temporal consistency of the generated video, the study samples K anchor frames and extracts their diffusion features, which serve as a set of global features to be propagated to the remaining frames. When generating each new frame, the self-attention layer is replaced by cross-frame attention over the cached features of the anchor frames. Through cross-frame attention, tokens in each frame adopt the features of semantically similar content in the anchor frames, enhancing consistency.
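The overall editing loop might look like the following high-level sketch; the helper methods `edit_and_cache_features` and `edit_with_cross_frame_attention` are hypothetical names standing in for the instruction-based editing model with cached anchor features and the cross-frame attention replacement described above.

```python
# High-level sketch (hypothetical helper names) of the anchor-based editing
# loop: K anchor frames are edited first and their diffusion features cached;
# every remaining frame then attends to those cached features instead of to
# itself in the replaced self-attention layers.
def edit_video(frames, instruction, model, num_anchors=3):
    # Evenly sample K anchor frames from the clip.
    step = max(len(frames) // num_anchors, 1)
    anchor_ids = list(range(0, len(frames), step))[:num_anchors]

    # Edit the anchors and cache their intermediate diffusion features.
    anchor_cache = [model.edit_and_cache_features(frames[i], instruction)
                    for i in anchor_ids]

    # Edit every frame with cross-frame attention over the cached anchor
    # features (see the attention sketch above).
    edited = [model.edit_with_cross_frame_attention(f, instruction, anchor_cache)
              for f in frames]
    return edited
```

Because every frame attends to the same small set of cached anchor features, the per-frame cost stays close to that of single-image editing, which is consistent with the reported speedup.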


Experimental evaluation

In the experiments, the researchers implemented Fairy on top of an instruction-based image editing model, replacing the model's self-attention with cross-frame attention and setting the number of anchor frames to 3. The model accepts inputs of different aspect ratios and rescales the longer side of the input to a resolution of 512 while keeping the aspect ratio unchanged. All frames of the input video are edited without downsampling, and all computation is distributed across 8 A100 GPUs.
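The aspect-ratio-preserving rescaling step could be implemented as in this small sketch (assumed behavior, using PIL):

```python
from PIL import Image

def resize_longer_side(frame: Image.Image, target: int = 512) -> Image.Image:
    """Rescale so the longer side equals `target`, keeping the aspect ratio."""
    w, h = frame.size
    scale = target / max(w, h)
    return frame.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
```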

Qualitative evaluation

The researchers first present Fairy's qualitative results. As shown in Figure 5 below, Fairy can edit a wide range of subjects.

[Figure 5]

In Figure 6 below, the researchers show that Fairy can perform different types of edits according to text instructions, including stylization, character changes, local editing, and attribute editing.

[Figure 6]

Figure 9 below shows that Fairy can convert source characters into different target characters according to instructions.

[Figure 9]

Quantitative evaluation

The overall quality comparison results are shown in Figure 7 below, where the videos generated by Fairy were preferred by users.

[Figure 7]

Figure 10 below shows the visual comparison results with the baseline model.

[Figure 10]

For more technical details and experimental results, please refer to the original paper.

