Stable Video Diffusion is here! 3D synthesis feature attracts attention; netizens: progress is too fast
Stability AI's generative video model has officially arrived: the company has released Stable Video Diffusion (SVD).
According to Stability AI's official blog, SVD supports both text-to-video and image-to-video generation:
It also supports turning a single view of an object into multiple views, that is, 3D synthesis:
Stability AI claims that in external evaluations, SVD beat popular video-generation AIs such as Runway and Pika in user preference.
Although only the base model has been released so far, the company says it plans to keep expanding it and to build an ecosystem similar to Stable Diffusion's.
The paper, code, and weights are now online.
New ways to play keep emerging in the field of video generation, and now it is Stable Diffusion's turn to take the stage, prompting netizens to marvel at how fast the field is moving.
Judging from the demos alone, however, many netizens said they were not particularly surprised.
Although I like SD, and these demos are great... there are also some flaws: the lighting and shadows are wrong, and the video is incoherent overall (it flickers between frames).
All in all, this is just the beginning. Netizens are very optimistic about SVD's 3D synthesis feature:
I can guarantee there will be more soon. Once the good stuff arrives, a brief description will be enough to produce a complete 3D scene.
Beyond what is shown above, Stability AI has released more demos. Let's take a look:
There is even a spacewalk:
You can also keep the background still and only let the two birds move:
The SVD research paper has also been released. According to the paper, SVD is built on Stable Diffusion 2.1, with the base model pre-trained on a video dataset of roughly 600 million samples.
The base model adapts readily to a variety of downstream tasks, including multi-view synthesis from a single image after fine-tuning on multi-view datasets.
After fine-tuning, Stability AI announced two image-to-video models: SVD, which generates 14-frame videos, and SVD-XT, which generates 25-frame videos, both at user-selectable frame rates from 3 to 30 frames per second.
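For readers who want to try image-to-video generation themselves, below is a minimal sketch. It assumes the Hugging Face diffusers library's StableVideoDiffusionPipeline and the "stabilityai/stable-video-diffusion-img2vid-xt" checkpoint; these are assumptions about the released tooling, not something the article itself describes.

```python
# Minimal sketch of SVD image-to-video generation, assuming the
# Hugging Face `diffusers` StableVideoDiffusionPipeline and the
# "stabilityai/stable-video-diffusion-img2vid-xt" (SVD-XT) checkpoint.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",  # SVD-XT: 25-frame model
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# SVD conditions on a single input image; 1024x576 matches the training resolution.
image = load_image("input.png").resize((1024, 576))

generator = torch.manual_seed(42)  # fix the seed for reproducible output
frames = pipe(
    image,
    decode_chunk_size=8,  # decode a few frames at a time to limit VRAM use
    generator=generator,
).frames[0]

# Export at 7 fps; the article notes frame rates from 3 to 30 fps are supported.
export_to_video(frames, "generated.mp4", fps=7)
```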
The model fine-tuned for multi-view video generation is named SVD-MV.
According to the reported results, SVD-MV outperforms the multi-view generation models Zero123, Zero123XL, and SyncDreamer on the GSO dataset:
It is worth noting that Stability AI says SVD is currently limited to research use and is not yet suitable for practical or commercial applications. SVD is not available to everyone yet, but registration for the user waitlist is open.
Recently, the field of video generation has been in a state of "melee".
Earlier, there was the text-to-video AI developed by Pika Labs:
Later came Moonvalley, billed as "the most powerful video generation AI in history":
More recently, Gen-2's "Motion Brush" feature was officially launched, letting you paint motion wherever you want it:
Now SVD has joined the fray, bringing with it the possibility of 3D generation.
Text-to-3D generation, however, seems to have made far less progress, which puzzles many netizens.
Some think data is the bottleneck holding back progress:
Others think the problem is that reinforcement learning is not yet capable enough.
Do you know of any recent progress in this area? Feel free to share in the comments~
Paper link: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf