This is the first time to play DeepFakes

coldplay.xixi
2020-11-13 16:55:24

The Python video tutorial column introduces DeepFakes: a record of my first time playing with it.

Target

I had never touched DeepFakes before, but I suddenly wanted to make a Bilibili video playing with it. Trying it out involved quite a few pitfalls, so I'm recording the ones I ran into.

The goal of this article is to swap the face in The Singing Trump's videos with that of our Comrade Chuan Jianguo.

Final effect:

Video link: https://www.bilibili.com/video/BV12p4y1k7E8/

Environment Description

This article uses a Linux server environment, because training runs faster there.

Python environment: Anaconda, Python 3.7

GPU: K80 with 12 GB of video memory

DeepFake version: 2.0

Other tools: ffmpeg

Material preparation

First, prepare one or more videos of The Singing Trump, as well as videos of Comrade Chuan Jianguo, to use as face-swapping material.

Video Segmentation

First, split the video material into individual images with ffmpeg.

mkdir output
ffmpeg -i your_video.mp4 -r 2 output/video-frame-t-%d.png

The input video does not have to be mp4; other formats work too. -r 2 sets the sampling rate to 2 frames per second, i.e. two images are captured for every second of video; adjust it to suit your own footage. The images are written to the output folder; the filename prefix can be anything and is not critical.
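As a rough sanity check before splitting, you can estimate how many images -r will produce (the numbers below are hypothetical, not from the article):

```shell
# Estimate the image count ffmpeg will emit: sampling rate x clip length.
# Hypothetical values; substitute your own clip's duration.
rate=2          # -r 2: two frames sampled per second
duration=120    # clip length in seconds
echo $((rate * duration))   # 240 images for a 2-minute clip
```

If the estimate is more than your disk can spare, lower -r before splitting.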

It is best to gather several videos, because deepfake will warn you that the number of faces should be greater than 200. I prepared three videos for each person, six in total.

ffmpeg -i sing_trump1.mp4 -r 2 sing_trump_output/st1-%d.png
ffmpeg -i sing_trump2.flv -r 2 sing_trump_output/st2-%d.png
ffmpeg -i sing_trump3.mp4 -r 2 sing_trump_output/st3-%d.png

ffmpeg -i trump1.webm -r 2 trump_output/t1-%d.png
ffmpeg -i trump2.mp4 -r 2 trump_output/t2-%d.png
ffmpeg -i trump3.mp4 -r 2 trump_output/t3-%d.png

The output is quite large; altogether it adds up to 3.7 GB.
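Given the 200-face minimum mentioned above, a quick back-of-the-envelope check (my own arithmetic, not from the article) shows how much on-screen face time you need per person at -r 2:

```shell
# faceswap warns if a side has fewer than ~200 faces; at -r 2 that
# translates into a minimum amount of footage with the face visible:
min_faces=200
frames_per_second=2
echo $((min_faces / frames_per_second))   # 100 seconds of face time per person
```

In practice you want comfortably more than that, since frames without a detectable face don't count.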

Clone the code and install dependencies

Nothing special here; just download the code from GitHub.

git clone https://github.com/deepfakes/faceswap.git

Then install the dependencies according to your own situation: I installed the CPU version on my PC and the Nvidia (GPU) version on the server.

Extract faces

Next, extract all the faces.

python3 faceswap.py extract -i trump_output -o trump_output_face
python3 faceswap.py extract -i sing_trump_output -o sing_trump_output_face

This is what it looks like after the faces have been extracted.

Filter faces

Next we need to manually delete all the faces we don't need.

Modify alignment

When we run extract to pull out the faces, an alignments file is generated automatically to record where each face sits in the original frames. After deleting unwanted faces, you need to sync this alignments file with the faces that remain.

Here you can open the GUI tool:

python3 faceswap.py gui

and then select Alignments under Tools.

Next select Remove-Faces, then enter the path of the alignments file, the faces folder, and the original frames.

Then click the green button to start and run.

Then repeat the same operation for sing_trump_output.

Start training

Now training can begin. The -m parameter specifies where the model is saved.

python3 ./faceswap.py train -A sing_trump_output_face -ala sing_trump_output/alignments.fsa -B trump_output_face -alb trump_output/alignments.fsa -m model

A small problem

If you use a GPU here: I found that TensorFlow 2.2 and later require CUDA 10.1 or newer, which I could not install on my machine, so I had to fall back to TensorFlow 1.14 or 1.15, and that in turn requires version 1.0 of deepfake.

github.com/deepfakes/f…

Training screenshot

I found that faceswap 1.0 works the same way as the master branch; there is no big difference.

My speed here is roughly 100 steps every 2 minutes.
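At that rate, a quick estimate (my own arithmetic, not from the article) of what the longer training runs amount to:

```shell
# Observed throughput: 100 steps every 2 minutes.
steps_per_minute=$((100 / 2))
echo $((steps_per_minute * 60))        # 3000 steps in one hour
echo $((steps_per_minute * 60 * 24))   # 72000 steps in one day
```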

Converting the video

Preparing the video frames

First prepare the video to be converted, then split it into frames. This time, do not downsample the frame rate as before; keep every frame.

ffmpeg -i sing_trump2.flv input_frames/video-frame-%d.png

My video here is 1 minute 41 seconds long.

After splitting there are about 3050 images, i.e. roughly 30 fps, totaling 7.1 GB (a 256 GB Mac really can't take much of this).
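The ~30 fps figure can be sanity-checked from the numbers above (my own arithmetic):

```shell
frames=3050    # images produced by the full-rate split
seconds=101    # clip length: 1 min 41 s
echo $((frames / seconds))   # 30 fps (integer division)
```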

Align once more

Next, run face alignment again on the frames of the video to be converted. First, extract the faces.

python3 faceswap.py extract -i input_frames -o input_frames_face

Then delete the unwanted faces again and, just as in the earlier steps, use the GUI tool's Remove-Faces to re-sync the alignments file.

Run the AI face swap on every frame

Convert the frames with the convert command:

python3 faceswap.py convert -i input_frames/ -o output_frames -m model/

My speed here was about 1 image per second, though only 600-odd of the frames actually contain faces; if the faces were denser, I suspect it would not be this fast. Converting all the images took a little over 5 minutes (other jobs were running on this GPU at the time, so in practice it could be faster).

Results

After 20 minutes of training

After about 1200 steps it looks like this. The result is not great yet, but it is already starting to get interesting.

After one hour of training

After one day of training

Merging the images into a video

Finally, merge the images back into a video with ffmpeg.

ffmpeg -i output_frames/video-frame-%d.png -vcodec libx264 -r 30 out.mp4

After merging, I found the video was 2 minutes long, but that doesn't matter much, since it needs to be cut afterwards anyway; just edit it in Premiere or similar software.
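A likely explanation for the 1:41 → 2:00 stretch (my assumption, not stated in the article): ffmpeg's image-sequence demuxer assumes 25 fps input unless told otherwise, and the output -r 30 duplicates frames rather than re-timing them. Declaring the true input rate with -framerate would avoid having to fix the length in an editor:

```shell
# At the demuxer's default 25 fps assumption, 3050 frames play for:
echo $((3050 / 25))   # 122 seconds, just over 2 minutes -- matching what we saw
# Declaring the real input rate keeps the original ~1:41 timing
# (-framerate must come before -i):
#   ffmpeg -framerate 30 -i output_frames/video-frame-%d.png -vcodec libx264 out.mp4
```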

Summary

Watching the video, you can see that when a face is relatively small in the frame, faceswap fails to detect it and so makes no replacement, which is a bit of a pity.

Personally, I feel the most time-consuming part of the whole deepfake workflow is deleting the unwanted faces.



Statement:
This article is reproduced from juejin.im. If there is any infringement, please contact admin@php.cn to have it deleted.