
Stable Video Diffusion is here, code and weights are online

PHPz | 2023-11-22 14:30:48

Stability AI, well known for its AI image-generation work, has finally entered the AI video generation field.

This Tuesday, Stable Video Diffusion, a video generation model based on Stable Diffusion, was released, and the AI community immediately began discussing it.


Many people said, "Finally, it's here."


Project link: https://github.com/Stability-AI/generative-models

Now, you can generate a few seconds of video from an existing still image.

Built on Stability AI's original Stable Diffusion image model, Stable Video Diffusion is one of the few open-source or commercially available video generation models in the industry.


However, it is not yet available to everyone: Stability AI has opened a waiting list for registration (https://stability.ai/contact).

According to the announcement, Stable Video Diffusion can be easily adapted to a variety of downstream tasks, including multi-view synthesis from a single image by fine-tuning on multi-view datasets. Stability AI says it is planning a variety of models that build on and extend this foundation, similar to the ecosystem that has grown around Stable Diffusion.


With Stable Video Diffusion, videos of 14 or 25 frames can be generated at customizable frame rates between 3 and 30 frames per second.
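As a quick illustration (not part of the original announcement), a single still image can be turned into a short clip using the Hugging Face diffusers integration of the released weights; the checkpoint name, frame rate, and memory settings below are assumptions and may need adjusting for your setup.

```python
# Hedged sketch: image-to-video with the released SVD weights via diffusers.
# Assumes a recent diffusers release with StableVideoDiffusionPipeline and the
# "stabilityai/stable-video-diffusion-img2vid-xt" checkpoint; adjust as needed.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # keeps peak GPU memory manageable

# Condition on a single still image, resized to a supported resolution.
image = load_image("input.png").resize((1024, 576))

# Generate a short clip; decode_chunk_size trades speed for memory.
frames = pipe(image, fps=7, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```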

In external evaluations, Stability AI reports that these models outperform leading closed-source models in user preference studies.


Stability AI emphasizes that Stable Video Diffusion is not intended for real-world or direct commercial use at this stage, and that the model will be refined based on user insights and feedback on safety and quality.


Paper link: https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets

Stable Video Diffusion is a member of Stability AI's family of open-source models. Their products now cover multiple modalities, including images, language, audio, 3D, and code, demonstrating the company's commitment to advancing artificial intelligence.

The technical details of Stable Video Diffusion

Stable Video Diffusion is a latent diffusion model for high-resolution video that reaches state-of-the-art performance in text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and fine-tuning them on small, high-quality video datasets. However, training methods vary widely in the literature, and the field has yet to agree on a unified strategy for curating video data.
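To make the idea of inserting temporal layers concrete, here is a minimal conceptual sketch, not the actual SVD architecture: a spatial layer (conceptually inherited from the 2D image model) is followed by a new attention layer that mixes information across the frame axis, with all names and dimensions chosen purely for illustration.

```python
# Conceptual sketch only: how a temporal mixing layer can be inserted after a
# pretrained spatial layer in a video latent diffusion model. The real SVD
# blocks differ in detail; names and dimensions here are illustrative.
import torch
import torch.nn as nn


class SpatioTemporalBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        # Spatial layer, conceptually initialized from the 2D image model.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # New temporal layer: self-attention over the frame axis.
        self.temporal = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor, num_frames: int) -> torch.Tensor:
        # x: (batch * frames, channels, height, width)
        x = self.spatial(x)
        bt, c, h, w = x.shape
        b = bt // num_frames
        # Fold space into the batch and expose the frame axis for attention:
        # (b*t, c, h, w) -> (b*h*w, t, c)
        seq = (
            x.view(b, num_frames, c, h, w)
            .permute(0, 3, 4, 1, 2)
            .reshape(b * h * w, num_frames, c)
        )
        attn, _ = self.temporal(self.norm(seq), self.norm(seq), self.norm(seq))
        seq = seq + attn  # residual connection preserves the spatial output
        # Restore the original layout: (b*h*w, t, c) -> (b*t, c, h, w)
        x = (
            seq.reshape(b, h, w, num_frames, c)
            .permute(0, 3, 4, 1, 2)
            .reshape(bt, c, h, w)
        )
        return x


# Example: 2 videos of 14 frames, 64 latent channels, 32x32 latents.
block = SpatioTemporalBlock(channels=64)
latents = torch.randn(2 * 14, 64, 32, 32)
out = block(latents, num_frames=14)
print(out.shape)  # torch.Size([28, 64, 32, 32])
```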

In the Stable Video Diffusion paper, Stability AI identifies and evaluates three distinct stages for successfully training video latent diffusion models: text-to-image pre-training, video pre-training, and high-quality video fine-tuning. They also demonstrate the importance of a carefully curated pre-training dataset for generating high-quality videos, and describe a systematic curation process for training a strong base model, including captioning and filtering strategies.

In the paper, Stability AI also explores the effect of fine-tuning the base model on high-quality data, and trains a text-to-video model that is competitive with closed-source video generation models. The base model provides a strong motion representation for downstream tasks such as image-to-video generation, and can be adapted with LoRA modules for camera-motion control. In addition, the model provides a strong multi-view 3D prior and can serve as the basis of a multi-view diffusion model that generates multiple views of an object in a feed-forward manner, requiring only a small amount of compute while outperforming image-based methods.


Specifically, training the model successfully requires the following three stages:

Phase 1: Image pre-training. The paper treats image pre-training as the first stage of the training pipeline and builds the initial model on Stable Diffusion 2.1, equipping the video model with a strong visual representation. To analyze the effect of image pre-training, the paper also trains and compares two otherwise identical video models, with and without it. The results in Figure 3a show that the image pre-trained model is preferred in terms of both quality and prompt following.


Phase 2: Video pre-training dataset. The paper relies on human preferences as a signal to create a suitable pre-training dataset. The dataset built for this work, LVD (Large Video Dataset), consists of 580M annotated video clip-caption pairs.

Further investigation revealed that the raw dataset contained examples that could degrade the performance of the final video model. The paper therefore annotates the dataset with dense optical flow, which makes it possible to filter out clips with little or no motion.
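As a rough illustration of this kind of motion-based filtering (the paper's actual curation pipeline and thresholds are not reproduced here), a clip can be scored by its mean dense optical-flow magnitude and dropped if it is nearly static; the sampling strategy and threshold below are arbitrary.

```python
# Hedged sketch: score a clip's motion with dense (Farneback) optical flow and
# filter out near-static clips. This only approximates the idea described in
# the paper; thresholds and sampling are chosen for illustration.
import cv2
import numpy as np


def mean_flow_magnitude(video_path: str, stride: int = 2, max_pairs: int = 16) -> float:
    """Average optical-flow magnitude over a few sampled frame pairs."""
    cap = cv2.VideoCapture(video_path)
    magnitudes, prev_gray, sampled, frame_idx = [], None, 0, 0
    while sampled < max_pairs:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % stride == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None:
                flow = cv2.calcOpticalFlowFarneback(
                    prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0
                )
                magnitudes.append(np.linalg.norm(flow, axis=2).mean())
                sampled += 1
            prev_gray = gray
        frame_idx += 1
    cap.release()
    return float(np.mean(magnitudes)) if magnitudes else 0.0


# Keep only clips with enough motion (threshold is arbitrary here).
clips = ["clip_0001.mp4", "clip_0002.mp4"]
keep = [c for c in clips if mean_flow_magnitude(c) > 1.0]
```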


In addition, the paper applies optical character recognition to filter out clips containing large amounts of text. Finally, CLIP embeddings are used to annotate the first, middle, and last frames of each clip. The paper reports detailed statistics for the resulting LVD dataset.
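For illustration only, annotating the first, middle, and last frames of a clip with CLIP embeddings might look roughly like the sketch below; the checkpoint and helper function are assumptions, not the paper's actual tooling.

```python
# Hedged sketch: compute CLIP image embeddings for the first, middle, and last
# frames of a clip. Model choice and helper name are illustrative only.
import cv2
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def clip_embed_keyframes(video_path: str) -> torch.Tensor:
    """Return CLIP embeddings for the first, middle, and last frames: (3, dim)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in (0, total // 2, max(total - 1, 0)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        return model.get_image_features(**inputs)
```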


Phase 3: High-quality fine-tuning. To analyze the impact of video pre-training on this final stage, the paper fine-tunes three models that differ only in their initialization. The results are shown in Figure 4e of the paper.


This looks like a good start. When will we be able to use AI to generate an entire movie?

