Everyone can have their own 'little movie'! The 'video version of Midjourney' is now free to try, and a cool blockbuster takes just one sentence, stunning netizens
After three months of anticipation, Runway’s Gen-2 is finally available for free trial!
It can be said that this is a day worth recording in the history of the development of AI video tools.
Trial address: https://app.runwayml.com
This AI tool can quickly generate a 4-second video from nothing more than text and images; in other words, it creates video entirely "out of thin air."
After watching the demos below, you may find yourself exclaiming: are directors and actors really about to be replaced with one click?
Excited netizens rushed to test it one after another, and the reaction was unanimous: the results are explosive. Has this AI gone mad?
The only regret is that each generated video is currently just 4 seconds long. (If adjacent scenes transition smoothly, though, splicing several 4-second clips together is actually enough.) An editor who loves blockbusters really can't hold back. (So can the scenes in my dreams finally come true?)
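For readers who want to try that splicing themselves, here is a minimal sketch using Python and ffmpeg's concat demuxer. The file names are hypothetical stand-ins for downloaded Gen-2 clips, and it assumes all clips share the same codec and resolution (which is normally the case for clips exported from the same tool).

```python
import subprocess
from pathlib import Path

# Hypothetical file names standing in for clips downloaded from Gen-2.
clips = ["mars_01.mp4", "mars_02.mp4", "mars_03.mp4"]

# ffmpeg's concat demuxer reads a small list file in this format.
list_file = Path("clips.txt")
list_file.write_text("".join(f"file '{name}'\n" for name in clips))

# Stream-copy (no re-encoding), so the 4-second clips are joined losslessly.
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", str(list_file), "-c", "copy", "spliced.mp4"],
    check=True,
)
```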
Editor’s actual test: Everyone can make a short movie!
After entering the page, try a simple prompt: "The scene of humans landing on Mars for the first time and building a base."
The footage Gen-2 generated matches the prompt, but the elements in the frame are too simple, and the artistic feel is also unsatisfying.
So the editor added some details about the environment and style: "Humans landed on Mars for the first time and built a Mars base. Movie-style images, futuristic style."
Sure enough, the content and effects Gen-2 generated this time were richer than before.
However, there is still a problem with this video: the images and characters are almost static, with very few changes.
After repeated testing, we found the essential ingredients of a good prompt: style, lens, content, action, environment, and light.
As long as you include these elements in the prompt, the generated animation improves significantly.
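As a concrete illustration, here is a tiny helper that assembles a prompt from those six elements. The function name and the example values are the editor's own invention, not anything Gen-2 requires; the tool simply takes the final string.

```python
# Compose a prompt from the six elements found to matter most:
# style, lens, content, action, environment, and light.
def build_prompt(style: str, lens: str, content: str,
                 action: str, environment: str, light: str) -> str:
    """Join the six elements into one comma-separated prompt string."""
    return ", ".join([style, lens, content, action, environment, light])

prompt = build_prompt(
    style="movie style, science-fiction style",
    lens="wide shot, shallow depth of field",
    content="two astronauts at a Mars base",
    action="building a work tower",
    environment="red desert, dust hanging in the air",
    light="very strong sunlight, long shadows",
)
print(prompt)
```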
Sticking with the Mars-landing theme, the editor tried rewriting the prompt:
"Movie style, science-fiction style, two astronauts building a work tower at a Mars base, very strong sunlight."
Compared with the previous two attempts, the resulting animation is clearly much better in visual style, richness of content, motion, and lighting.
Finally, the editor uploaded an image alongside the same prompt and asked Gen-2 to generate an animation from both the prompt and the image.
Compared with the third video (same prompt, no image), the fourth video with the image prompt clearly tracks the content of the uploaded picture.
With a precise prompt and a reference image, basically anyone can generate an ideal piece of video content in an instant.
Compared with last time, Runway's slogan goes a step further: "If you can imagine it, it can be generated for you." (Last time the slogan was "say it, see it.")
Think about it: what separates ordinary people from great directors, apart from professional knowledge and experience, is a professional camera crew and actors.
That second gap is now completely filled by AI tools like Gen-2. No cameras, no camcorders, no 3D modeling, no Cinema 4D...
As long as you have a vivid imagination and master the right "incantations", "everyone can make their own movie" is no longer a dream!
Whether it is a realistic picture——
or something more abstract——
You can even generate animations, manipulate them in various ways, and then insert them into your own videos.
The threshold for video generation has been greatly lowered, which is really good news for the majority of content creators.
It is fair to say that text-to-video and image-to-video AI has brought genuinely disruptive change to the industry.
In the past, if a video editor wanted a particular piece of footage (say, paint splashing around in space), they had to search stock-footage sites. The selection is limited, everyone has access to the same clips, and the material is not exclusive.
Or they could grab some paint, throw it around a studio, and film it themselves: time-consuming, labor-intensive, and messy.
You can also do it yourself in Blender, which usually takes hours or even days.
Or they could hire a professional team to film the whole thing, at a cost of probably a few hundred to a few thousand dollars.
Now, Gen-2 has reduced all these costs!
Gen-1 is popular again
In fact, the earlier Gen-1 was already impressive enough.
With the release of Gen-2, the video below has become popular again recently.
In it, a middle-aged man turns into a nobleman with a snap of his fingers and is transported to a 17th- or 18th-century European court.
Soon, with another snap of the fingers, he transforms into the protagonist of "Planet Rise", trekking across a ruined battlefield.
And that is nothing; the gender swap that follows is the real masterpiece!
Who would have guessed that this fit woman was actually the same man from a moment ago?
It must be said that, guided by netizens' prompts, the backgrounds, faces, and clothing Gen-2 generates not only look quite natural but also stay reasonably consistent.
Hands, though, remain the hardest part to get right and still frequently glitch.
One MV, 30 dollars
Some quick-fingered Twitter users have already tested Gen-2's ability to generate a music video (MV).
Practice shows that a fairly long prompt is needed to produce high-quality, controllable results, which can then be fine-tuned by adding or changing a single word.
In addition, there is another important factor: locking the seed.
The rest is just a lot of trying.
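To make that workflow concrete, here is a purely hypothetical sketch of the loop. Gen-2 is operated through its web UI, so `generate_clip` below is a made-up placeholder rather than Runway's real API; the point is only the pattern of locking one seed and iterating on the prompt a word at a time.

```python
# Hypothetical placeholder for a text-to-video call; NOT Runway's real API.
def generate_clip(prompt: str, seed: int) -> str:
    return f"take_seed{seed}_{abs(hash(prompt)) % 10_000}.mp4"

SEED = 1234  # locked seed: keeps composition and style stable across takes

# Small, word-level edits to an otherwise long, detailed prompt.
prompt_variants = [
    "blue city streets at night, cinematic, rain-soaked asphalt",
    "blue city streets at night, cinematic, rain-soaked asphalt, neon signs",
    "blue city streets at dawn, cinematic, rain-soaked asphalt, neon signs",
]

# "The rest is just a lot of trying": generate, keep the good takes, splice.
takes = [generate_clip(p, SEED) for p in prompt_variants]
print(takes)
```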
For example, Steve Mills generated a total of about 500 seconds of footage and edited it down to a 140-second MV.
Please enjoy next: "Blue City Streets".
It is worth noting that this video was actually produced in the beta version.
By the author's estimate, at the public version's pricing the whole production would cost at least 30 US dollars, and even more once the earlier learning and experimentation stage is counted.
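As a rough sanity check, the article's own numbers imply an effective price of about 0.06 dollars per generated second (30 dollars for roughly 500 seconds). The rate below is inferred from those figures, not an official Runway price.

```python
# Back-of-the-envelope cost estimate using the per-second rate inferred
# from the article ($30 for ~500 s of footage); not an official price.
generated_seconds = 500
assumed_usd_per_second = 0.06

estimated_cost = generated_seconds * assumed_usd_per_second
print(f"~${estimated_cost:.0f} for {generated_seconds} s of footage")  # ~$30
```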
How to use it, step by step
After seeing so many examples, how do you become a Hollywood director yourself? Let's go step by step.
First, register an account.
After entering the homepage, select Gen-2: Text to Video.
Then, the Prompt box appears. This is where you need to show off your skills!
Once the prompt is written, you can also open the settings card to improve your results: frame interpolation for smoother video transitions, higher resolution, watermark removal, and more (a sketch of these options follows after the last step).
Then, click Generate, and the next step is to witness the miracle!
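As a reading aid, here is a hypothetical summary of the settings mentioned in the steps above, written as a Python dict. The key names are invented for clarity; in the real product these are simply toggles in the web interface, not a documented request payload.

```python
# Hypothetical illustration of the generation settings described above.
generation_settings = {
    "prompt": ("a tree on a grassy mountain in the American Midwest, "
               "professional film style, shallow depth of field, "
               "subject focus, beautiful lighting, smooth dynamic motion"),
    "interpolate_frames": True,   # smoother transitions between frames
    "upscale": True,              # higher output resolution
    "remove_watermark": True,     # strip the watermark from the output
}
print(generation_settings)
```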
But many friends will protest that they simply cannot write prompts that complex and that elaborate, no matter how hard they try.
Don't worry: this release of Gen-2 has a pleasant surprise in store. Even with simple prompts, the videos it produces are not bad.
For example, if you simply enter "a tree", the generated video will look like this——
If you enter "a tree on a grassy mountain in the American Midwest, professional film style, shallow depth of field, subject focus, beautiful lighting, smooth dynamic motion", the generated video will look like this ——
In other words, ordinary users can also quickly generate cool videos without first becoming prompt masters.
In response, netizens on Bilibili (Station B) have let their imaginations run wild.
Some people have also started testing it themselves.
Overseas netizens also exclaimed when they saw Gen-2:
"Finally I've waited for you; luckily I didn't give up."
As generative AI continues to produce video content, the entertainment industry is about to change.
The quality of movies decades ago was vastly different from what it is today.
Now, with the blessing of AI tools such as Gen-2, perhaps the boundaries of movies in the future are only the boundaries of human imagination, not the boundaries of technology.