Stability AI releases Stable Video Diffusion model to generate videos from images
IT House News, November 22 - Stability AI, a startup focused on developing artificial intelligence (AI) products, has released its latest AI model, Stable Video Diffusion. The model can generate videos from existing images and is an extension of the previously released Stable Diffusion text-to-image model. It is also one of the few AI models on the market so far that can generate video.
However, the model is not currently open to everyone. Stable Video Diffusion is in what Stability AI calls a "research preview" stage. Those who want to use it must agree to terms of use that specify the intended use cases of Stable Video Diffusion (such as "educational or creative tools" and "design and other artistic processes") as well as unintended ones (such as "a factual or true representation of a person or event").
Stable Video Diffusion actually consists of two models: SVD and SVD-XT. SVD converts still images into 14-frame videos at 576×1024 pixels. SVD-XT uses the same architecture but increases the frame count to 24. Both can produce video at 3 to 30 frames per second.
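The frame counts and frame-rate range above imply a range of playback durations for each model. A minimal arithmetic sketch (the `clip_duration_seconds` helper is our own illustration, not part of any Stability AI tooling):

```python
# Rough arithmetic for the clip durations implied by the article's figures:
# SVD outputs 14 frames, SVD-XT outputs 24 frames, and both can be
# rendered at anywhere from 3 to 30 frames per second.

def clip_duration_seconds(num_frames: int, fps: float) -> float:
    """Playback length of a clip with the given frame count and frame rate."""
    return num_frames / fps

# Shortest (fastest playback) and longest (slowest playback) durations.
svd_range = (clip_duration_seconds(14, 30), clip_duration_seconds(14, 3))
svd_xt_range = (clip_duration_seconds(24, 30), clip_duration_seconds(24, 3))

print(f"SVD:    {svd_range[0]:.2f}s to {svd_range[1]:.2f}s")
print(f"SVD-XT: {svd_xt_range[0]:.2f}s to {svd_xt_range[1]:.2f}s")
```

Note that SVD-XT's 24 frames played back at 6 frames per second yield exactly the four-second clips mentioned later in the article.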
According to a white paper released by Stability AI alongside Stable Video Diffusion, SVD and SVD-XT were initially trained on a dataset containing millions of videos and then "fine-tuned" on a smaller set of a few hundred thousand to about a million video clips. The source of these videos is not entirely clear; the white paper suggests many come from publicly available research datasets, so it is hard to tell whether any copyright issues are involved.
Both SVD and SVD-XT are capable of generating high-quality four-second videos, and judging from the carefully selected samples on the Stability AI blog, the quality is comparable to the latest video generation models from Meta, Google, and the AI startups Runway and Pika Labs.
IT House noted that Stable Video Diffusion also has limitations, and Stability AI is candid about them: the models cannot generate videos without motion or slow camera pans, cannot be controlled with text, cannot render text (at least not legibly), and cannot consistently generate faces and people "correctly".
Although the models are at an early stage, Stability AI notes that they are highly extensible and can be adapted to a variety of use cases, such as generating 360-degree views of objects.
Stability AI's ultimate goal appears to be commercialization: the company says Stable Video Diffusion has potential applications in "advertising, education, entertainment and other fields."