
How to use Python to implement face recognition function?

WBOY · 2023-04-20

1. Face detection

Face detection refers to detecting the location of a face from an image or video. We use the OpenCV library to implement the face detection function. OpenCV is a popular computer vision library that supports a variety of image and video processing functions and runs on multiple platforms.

The following is a code example of face detection in Python:

import cv2

# Load the Haar cascade classifier for frontal faces (the XML file ships with OpenCV)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Read the input image and convert it to grayscale (Haar cascades operate on grayscale images)
img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces; returns a list of (x, y, w, h) rectangles
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

# Draw a blue rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

In this code example, we use OpenCV's CascadeClassifier class to load a classifier named "haarcascade_frontalface_default.xml". This classifier ships with OpenCV and is trained for frontal face detection. We then read an image called "test.jpg" and convert it to a grayscale image. Next, we use the detectMultiScale function to detect faces in the image; it returns a list of rectangles, each giving the position (x, y) and size (w, h) of a detected face. Finally, we draw rectangles on the original image to mark the detected faces.
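
If the XML file is not in your working directory, the opencv-python package exposes its bundled cascade files through cv2.data.haarcascades. Below is a minimal sketch of loading the classifier that way, assuming OpenCV was installed via the opencv-python pip package:

import cv2

# Build the full path to the cascade file bundled with the opencv-python package
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)

# A failed load silently gives an empty classifier, so it is worth checking
assert not face_cascade.empty(), "Failed to load the Haar cascade file"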

2. Facial feature extraction

Facial feature extraction refers to extracting features from a face image, such as the eyes, nose, and mouth. We use the Dlib library to implement the facial feature extraction function. Dlib is a popular C++ library for machine learning, computer vision, and image processing. Although Dlib is written in C++, it also provides a Python interface, so we can call its functions from Python.

The following is a code example for facial feature extraction in Python:

import dlib
import cv2

# HOG-based frontal face detector and the 68-point landmark predictor
# (shape_predictor_68_face_landmarks.dat must be downloaded separately from the dlib website)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces in the grayscale image
faces = detector(gray)

# For each face, locate the 68 landmark points and mark them with small circles
for face in faces:
    landmarks = predictor(gray, face)
    for n in range(68):
        x = landmarks.part(n).x
        y = landmarks.part(n).y
        cv2.circle(img, (x, y), 2, (255, 0, 0), -1)

cv2.imshow("Output", img)
cv2.waitKey(0)
cv2.destroyAllWindows()

In this code example, we use Dlib's get_frontal_face_detector function to create a face detector and the shape_predictor class to load the landmark model file "shape_predictor_68_face_landmarks.dat". We then read an image called "test.jpg" and convert it to a grayscale image. Next, we use the detector to find faces in the image and the predictor to locate the facial landmarks. The predictor returns an object containing the coordinates of 68 facial landmark points, which we read with its part method. Finally, we draw circles on the original image to mark the landmark points.
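
The landmark indices of the 68-point model correspond to fixed facial regions, so individual features can be pulled out by index. The grouping below (jaw, eyebrows, nose, eyes, mouth) is the commonly used one for this model; region_points is a hypothetical helper, not part of Dlib, and the sketch reuses the detector, predictor, and test image from the example above:

import dlib
import cv2

# Common index grouping for dlib's 68-point landmark model
FACIAL_REGIONS = {
    "jaw": range(0, 17),
    "eyebrows": range(17, 27),
    "nose": range(27, 36),
    "eyes": range(36, 48),
    "mouth": range(48, 68),
}

def region_points(landmarks, region):
    """Return the (x, y) coordinates of the landmarks belonging to one facial region."""
    return [(landmarks.part(n).x, landmarks.part(n).y) for n in FACIAL_REGIONS[region]]

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
gray = cv2.cvtColor(cv2.imread('test.jpg'), cv2.COLOR_BGR2GRAY)

faces = detector(gray)
if faces:
    landmarks = predictor(gray, faces[0])
    print(region_points(landmarks, "eyes"))  # 12 (x, y) points covering both eyes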

3. Face recognition

Face recognition refers to comparing the extracted features with the face information in a database to identify whose face it is. We use the face_recognition library, which is built on top of Dlib, to implement the face recognition function. The specific implementation process is as follows:

  1. Collecting face data: We need to collect some face data as our database. We can use a camera to capture this data and save it on a hard drive.

  2. Facial feature extraction: For each face image, we need to extract its features. We can use the method in the second code example to extract facial features.

  3. Build a face recognition model: We need to use the extracted facial features to build a face recognition model. We can do this with the face_recognition library, which is built on top of Dlib. It provides a function called "face_encodings" that converts a face image into a 128-dimensional feature vector. We can save these vectors to disk as our face database; a minimal sketch of this step is shown after this list.

  4. Face recognition: For the face image to be recognized, we compute its 128-dimensional encoding in the same way. We can then use the compare_faces function of the face_recognition library to compare it with the encodings in our face database. If a match is found, the face has been identified.
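
As a sketch of step 3, the snippet below encodes a few sample images with face_recognition and saves the resulting 128-dimensional vectors to disk with pickle. The image file names and the faces_db.pkl path are assumptions for illustration, not fixed by the library:

import pickle
import face_recognition

# Hypothetical sample images, one per person -- adjust to your own data
people = ["person_1", "person_2", "person_3"]

known_face_encodings = []
known_face_names = []

for name in people:
    image = face_recognition.load_image_file(f"{name}.jpg")
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip images in which no face was found
        known_face_encodings.append(encodings[0])  # 128-dimensional vector
        known_face_names.append(name)

# Save the database so it can be reloaded later without re-encoding the images
with open("faces_db.pkl", "wb") as f:
    pickle.dump({"encodings": known_face_encodings, "names": known_face_names}, f)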

The following is a code example for implementing face recognition in Python:

import cv2
import dlib
import face_recognition

known_face_encodings = []
known_face_names = []

# Load the known faces and embeddings
for name in ["person_1", "person_2", "person_3"]:
    image = face_recognition.load_image_file(f"{name}.jpg")
    face_encoding = face_recognition.face_encodings(image)[0]
    known_face_encodings.append(face_encoding)
    known_face_names.append(name)

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

video_capture = cv2.VideoCapture(0)

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)

    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # If a match was found in known_face_encodings, just use the first one.
            if True in matches:
                first_match_index = matches.index(True)
                name = known_face_names[first_match_index]

            face_names.append(name)

    process_this_frame = not process_this_frame

    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

In this code example, we first load some face images and use the face_recognition library to convert them into face encoding vectors. Then, we use cv2.VideoCapture to read the camera's video stream and use the face_recognition library to recognize faces in each frame. Finally, we use OpenCV's drawing functions to display the recognition results on the video stream.
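
Note that the compare_faces block above simply takes the first match. A common refinement, used in the face_recognition project's own examples, is to pick the known face with the smallest distance. Below is a minimal sketch of that matching step, assuming the known_face_encodings, known_face_names, and face_encoding variables from the loop above:

import numpy as np
import face_recognition

def best_match(known_face_encodings, known_face_names, face_encoding, tolerance=0.6):
    # face_distance returns one Euclidean distance per known encoding (smaller = more similar)
    distances = face_recognition.face_distance(known_face_encodings, face_encoding)
    best_index = int(np.argmin(distances))
    if distances[best_index] <= tolerance:  # 0.6 is the library's default matching threshold
        return known_face_names[best_index]
    return "Unknown"

# Inside the recognition loop, the compare_faces block could be replaced with:
# name = best_match(known_face_encodings, known_face_names, face_encoding)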

