
python3+dlib implements face recognition and emotion analysis

不言 (Original) · 2018-05-30

This article explains in detail, with concrete code and steps, how to implement face recognition and emotion analysis with Python 3 and dlib. Readers who need it can refer to it.

1. Introduction

What I want to build is expression (emotion) analysis on top of face recognition. There are many open-source libraries available, which makes development much more convenient. I chose dlib, which is currently widely used, for face detection and feature-point calibration; using Python also shortens the development cycle.

The official website's introduction to dlib is: Dlib contains a wide range of machine learning algorithms. All are designed to be highly modular, fast to execute, and extremely simple to use via a clean and modern C++ API. It is used in a variety of applications including robotics, embedded devices, mobile phones and large high-performance computing environments.

Although those applications sound relatively high-end, it is still quite fun to build a small emotion-analysis tool on your own PC.

You can design the recognition method according to your own ideas. Keras, which is also quite popular at the moment, seems to be used in demos that take changes in mouth shape as the indicator for emotion analysis.

My idea is to use the opening ratio of the mouth, the degree of eye opening, and the tilt angle of the eyebrows as the three indicators for emotion analysis. However, because appearances differ greatly between people and facial features vary widely, my calculation method is also relatively simple, so the recognition accuracy is not very high.

Identification rules:

1. Mouth opening: the larger the ratio of the mouth-opening distance to the width of the face-detection box, the more excited the emotion; it may be great happiness, or it may be... extreme anger.

2. Eyebrows raised: the smaller the ratio between the distance of feature points 17-21 (or 22-26) from the top of the face-detection box and the height of the box, the more strongly the eyebrows are raised, which can indicate surprise or happiness. Eyebrow tilt: when happy, the eyebrows are usually raised; when angry, people frown and the eyebrows press down more strongly.

3. Squinting: people unconsciously squint their eyes when laughing heartily, and widen them when angry or surprised.
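
The three rules above can be sketched as small helper functions over the 68 dlib landmarks. This is an illustrative sketch, not the author's code: `pts` is assumed to be a list of 68 `(x, y)` tuples in dlib's standard 68-point ordering (inner-lip points 62/66, brow points 17-26, left-eye lid points 37/41), and the face box is given by its edges.

```python
def mouth_open_ratio(pts, left, right):
    # Vertical gap between inner-lip points 62 (top) and 66 (bottom),
    # normalized by the width of the face box.
    return (pts[66][1] - pts[62][1]) / (right - left)

def brow_height_ratio(pts, top, bottom):
    # Mean distance of brow points 17-26 from the top of the face box,
    # normalized by the box height; a smaller value means raised brows.
    ys = [pts[i][1] for i in range(17, 27)]
    return (sum(ys) / len(ys) - top) / (bottom - top)

def eye_open_ratio(pts, top, bottom):
    # Vertical opening of the left eye (upper lid 37, lower lid 41),
    # normalized by the box height.
    return (pts[41][1] - pts[37][1]) / (bottom - top)
```

Normalizing by the box size is what makes the indicators comparable across faces at different distances from the camera.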

System shortcomings: it cannot capture subtle changes in expression, and can only roughly judge a person's emotion as happy, angry, surprised, or neutral.

System advantages: simple structure and easy to use.

Application areas: smile capture (catching the beauty of the moment), assisting therapy for children with autism, and developing interactive games.

Due to the complexity of human emotions, these expressions cannot completely represent a person's inner emotional fluctuations. To improve the accuracy of judgment, comprehensive evaluation such as heart rate detection and speech processing is required.

2. Development environment setup:

1. Install VS2015, because the latest version of dlib (19.10) requires this version of Visual Studio to build.

2. Install opencv (from a whl file):

Download the whl file for your Python version from the pythonlibs site, e.g. opencv_python-3.3.0+contrib-cp36-cp36m-win_amd64.whl,
then install it locally with pip. Pay attention to the file path (e.g. pip install C:\download\xxx.whl).

3. Install dlib (from a whl file):

Download the whl file for your version of dlib from the link below, then open cmd in the directory containing it and install it directly with pip.

However, to learn from the various Python example programs bundled with dlib, you should also download the dlib source archive.

Download it directly from the dlib official website: http://dlib.net/ml.html

whl files of various versions of dlib: https://pypi.python.org/simple/dlib/

4. If you want to calibrate feature points on the face model, you also need a face shape predictor. You can train one on your own photos, or use the predictor already trained by the dlib author:

Click to download: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2

3. Implementation ideas

4. Specific steps

First, use dlib for face detection:

import cv2
import dlib
from skimage import io

# Use the frontal face detector get_frontal_face_detector
detector = dlib.get_frontal_face_detector()
# dlib's 68-point model, using the predictor trained by the dlib author
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
# Read the image from its path
img = io.imread("2.jpg")
# Create a dlib image window
win = dlib.image_window()
win.clear_overlay()
win.set_image(img)

# Run the detector; the second argument upsamples the image once
dets = detector(img, 1)
print("Number of faces:", len(dets))

for k, d in enumerate(dets):
    print("Coordinates of face", k + 1, ":",
          "left:", d.left(),
          "right:", d.right(),
          "top:", d.top(),
          "bottom:", d.bottom())

    width = d.right() - d.left()
    height = d.bottom() - d.top()

    print("Face area:", width * height)

Then instantiate a shape_predictor object with the feature predictor trained by the dlib author, and use it to calibrate the facial feature points.

During calibration, use OpenCV's circle method to mark each feature point on the image, and putText to label it with its serial number.

    # Predict the landmarks with the predictor
    shape = predictor(img, d)
    # Mark the positions of the 68 points
    for i in range(68):
        cv2.circle(img, (shape.part(i).x, shape.part(i).y), 4, (0, 255, 0), -1, 8)
        cv2.putText(img, str(i), (shape.part(i).x, shape.part(i).y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
    # Show the processed image, then destroy the window
    cv2.imshow('face', img)
    cv2.waitKey(0)

At this point, the 68 feature points have been obtained. Next, a comprehensive calculation based on the coordinates of these 68 points is needed to produce the judgment indicator for each expression.

Based on the judgment indicators described above, first calculate the mouth-opening ratio. Because the size of the face-detection box changes with the person's distance from the camera, a ratio rather than an absolute distance is used as the indicator.

Before selecting the standard value of the indicator, first analyze multiple photos of happy faces and calculate the average mouth-opening ratio when happy.
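
For instance, the "standard value" can be taken as the mean ratio over the sample photos. A minimal sketch, where the numbers are hypothetical placeholders rather than measured data:

```python
from statistics import mean

# Hypothetical per-photo mouth-opening ratios measured on happy faces
happy_mouth_ratios = [0.041, 0.038, 0.052, 0.047, 0.035]

# Use the mean as the reference ("standard") value for the happy indicator
baseline = round(mean(happy_mouth_ratios), 3)
```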

The following is the data-processing code for the eyebrows. A linear (first-degree) fit is performed on the five feature points of the left eyebrow, and the slope of the fitted line is used to approximate the tilt of the eyebrows.

# Eyebrows
brow_sum = 0   # sum of heights above the top of the face box
frown_sum = 0  # sum of horizontal distances between the two brows
for j in range(17, 22):  # left-brow points 17-21; j+5 gives right-brow points 22-26
    brow_sum += (shape.part(j).y - d.top()) + (shape.part(j + 5).y - d.top())
    frown_sum += shape.part(j + 5).x - shape.part(j).x
    line_brow_x.append(shape.part(j).x)
    line_brow_y.append(shape.part(j).y)

self.excel_brow_hight.append(round((brow_sum / 10) / self.face_width, 3))
self.excel_brow_width.append(round((frown_sum / 5) / self.face_width, 3))
brow_hight[0] += (brow_sum / 10) / self.face_width  # brow height as a fraction of face width
brow_width[0] += (frown_sum / 5) / self.face_width  # brow separation as a fraction of face width

tempx = np.array(line_brow_x)
tempy = np.array(line_brow_y)
z1 = np.polyfit(tempx, tempy, 1)  # fit a first-degree (straight) line
self.brow_k = -round(z1[0], 3)    # image y grows downward, so the fitted slope is opposite to the actual brow tilt

I computed the mouth-opening ratio, mouth width, eye-opening degree, and eyebrow tilt for the happy expressions of 25 faces, then imported them into an Excel sheet to generate line charts:

From the line charts it is easy to see which parameters are usable, which are not reliable, and within which range a parameter can serve as an indicator.

Using the same method, compute the data line charts for angry, surprised, and neutral expressions.

By analyzing the data of several different expressions, a reference value for each indicator is obtained, and a simple expression-classification rule can be written:

# Case analysis
# Mouth open: may be happy or surprised
if mouth_higth >= 0.03:
    if eye_hight >= 0.056:
        cv2.putText(im_rd, "amazing", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
    else:
        cv2.putText(im_rd, "happy", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)

# Mouth closed: may be neutral or angry
else:
    if self.brow_k <= -0.3:
        cv2.putText(im_rd, "angry", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
    else:
        cv2.putText(im_rd, "nature", (d.left(), d.bottom() + 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
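
The threshold logic above can also be factored into a small standalone function, which makes the rules easy to test without OpenCV. A sketch using the empirical thresholds from the snippet (function name illustrative):

```python
def classify_expression(mouth_height, eye_height, brow_k):
    """Classify an expression from the three normalized indicators."""
    if mouth_height >= 0.03:  # mouth open: happy or amazed
        return "amazing" if eye_height >= 0.056 else "happy"
    # mouth closed: angry or neutral ("nature" in the original labels)
    return "angry" if brow_k <= -0.3 else "nature"
```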

5. Actual running results:

After recognition:


