
An explanation of the implementation method of Python+OpenCV image style migration

不言
2018-10-16 14:21:40

This article explains how to implement image style transfer with Python and OpenCV. It should be a useful reference for anyone interested in the topic.

Many people enjoy taking photos (especially selfies), but the limited set of built-in filters and stickers gets old quickly. That is why apps such as Prisma and Versa offer filters that imitate famous painting styles, turning your photos into works in the style of masters like Van Gogh, Picasso, and Munch.


This feature is called "image style transfer". Almost all of it is built on the algorithms proposed in the 2015 paper "A Neural Algorithm of Artistic Style", the ECCV 2016 paper "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", and subsequent related research.

In plain terms, a neural network is pre-trained to capture the style of a famous painting in a model, and that model is then applied to arbitrary photos to generate new stylized images.

[Image: stylized examples from "A Neural Algorithm of Artistic Style"]

As neural networks see ever wider use in computer vision, the well-known vision library OpenCV officially introduced a DNN (Deep Neural Network) module in version 3.3. It supports models from mainstream frameworks such as Caffe, TensorFlow, and Torch/PyTorch, and can be used for image recognition, detection, classification, segmentation, colorization, and more.
I only recently discovered (forgive my late arrival) that OpenCV's sample code includes a Python example of image style transfer, based on the network model from the ECCV 2016 paper. So even a complete novice in artificial intelligence can play with models trained by others and experience the wonders of neural networks.

(See the end of the article for the relevant code and models.)

OpenCV official code: https://github.com/opencv/opencv/blob/3.4.0/samples/dnn/fast_neural_style.py

Run it by executing the following command in the script's directory:

python fast_neural_style.py --model starry_night.t7

The model parameter is the path to a pre-trained model file. OpenCV does not host the models for download, but they can be found in the reference project: https://github.com/jcjohnson/fast-neural-style

Other settable parameters are:

  • input: the source image/video. If not provided, frames are captured from the camera in real time by default.

  • width, height: resize the image before processing; a smaller size speeds up computation. On my own computer, video converted at 300x200 reaches 15 fps.

  • median_filter: the window size of the median filter used to smooth the result image; it has little impact on the output.
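Putting these flags together, a typical session might look like the following. Note the model filename and the download script path are taken from the jcjohnson/fast-neural-style repository and may have changed, so verify them before running:

```shell
# Fetch the pre-trained models from the reference project
# (script path as found in jcjohnson/fast-neural-style).
bash models/download_style_transfer_models.sh

# Stylize a single photo, resized to 300x200 for speed.
python fast_neural_style.py --model starry_night.t7 \
    --input photo.jpg --width 300 --height 200

# With no --input, frames are captured from the webcam in real time.
python fast_neural_style.py --model the_scream.t7 --median_filter 5
```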

The effect after execution (images taken from jcjohnson/fast-neural-style):

[Image: original photo]

[Image: results of the ECCV16 models]

[Image: results of the instance_norm models]

The core code is actually very short: load the model -> read the image -> compute -> show the result. I simplified it further based on the official example:

import cv2

# Load the pre-trained style model
net = cv2.dnn.readNetFromTorch('the_scream.t7')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)

# Read the input image
image = cv2.imread('test.jpg')
(h, w) = image.shape[:2]
# Subtract the per-channel BGR means the model was trained with
blob = cv2.dnn.blobFromImage(image, 1.0, (w, h), (103.939, 116.779, 123.680), swapRB=False, crop=False)

# Run the forward pass
net.setInput(blob)
out = net.forward()
out = out.reshape(3, out.shape[2], out.shape[3])
# Add the means back and rescale to [0, 1] for display
out[0] += 103.939
out[1] += 116.779
out[2] += 123.68
out /= 255
out = out.transpose(1, 2, 0)  # CHW -> HWC

# Show the result
cv2.imshow('Styled image', out)
cv2.waitKey(0)
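To see what the mean subtraction before the network and the post-processing after `net.forward()` actually accomplish, here is a minimal NumPy sketch of the round trip. The tiny random array and the identity "network" are stand-ins for illustration only:

```python
import numpy as np

# Per-channel BGR means used by the fast-neural-style models.
MEAN = np.array([103.939, 116.779, 123.680], dtype=np.float32).reshape(3, 1, 1)

# A tiny random CHW "image" standing in for a real photo.
img = np.random.randint(0, 256, (3, 2, 2)).astype(np.float32)

# What blobFromImage does with scalefactor=1.0: subtract the means.
blob = img - MEAN

# Pretend the network is the identity; a real model would stylize here.
out = blob

# The post-processing from the script: add the means back, scale to [0, 1].
restored = (out + MEAN) / 255.0

# With an identity network, we recover the original pixel values.
assert np.allclose(restored * 255.0, img)
```

This is why the three `out[i] += ...` lines in the script use exactly the same constants as `blobFromImage`: the network operates on mean-centered data, so the means must be added back before the result can be displayed.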
In addition, I modified a version that compares several styles side by side in real time (the computation is heavy, so it is quite laggy); it has also been uploaded with the code.

[Image: real-time comparison of multiple styles]

PS: At Zhao Lei's concert a couple of days ago, I noticed that many of the background videos used image binarization, edge detection, and similar operations, which reminded me of the assignments from my digital image processing classes... Now that image style transfer runs in real time, I believe it will be used more and more often.


Statement:
This article is reproduced from segmentfault.com. In case of infringement, please contact admin@php.cn for removal.