Principles of facial recognition technology
Face recognition technology refers to the use of computer analysis and comparison of facial information to identify people. Face recognition is an active field of computer research that draws on related technologies such as face tracking and detection, automatic adjustment of image magnification, night-time infrared detection, and automatic adjustment of exposure intensity.
Technical Principle
Face recognition technology consists of three parts:
(1) Face detection
Face detection refers to determining whether a face is present in a dynamic scene or against a complex background, and isolating the face image if so.
Generally there are the following methods:
①Reference template method
One or several standard face templates are designed first; the degree of matching between the collected test sample and the standard template is then computed, and a threshold decides whether a face is present;
②Face rule method
Since the face has certain structural distribution characteristics, the face rule method extracts these characteristics and generates corresponding rules to determine whether the test sample contains a face;
③Sample learning method
This method uses the artificial neural network approach from pattern recognition, generating a classifier by learning from sets of face image samples and non-face image samples;
④Skin color model method
This method derives detection rules from the fact that facial skin color is relatively concentrated in color space;
⑤Feature sub-face method
This method treats the set of all face images as a face-image subspace and determines whether a face is present based on the distance between the detection sample and its projection onto that subspace (see the sketch after this list).
It is worth mentioning that the above five methods can also be combined in practical detection systems.
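As a concrete illustration of the feature sub-face idea (⑤), here is a minimal sketch that builds a face subspace from training faces via PCA and scores a test patch by its reconstruction error. The array layout, file name and threshold are illustrative assumptions, not part of any particular system.

```python
# Feature sub-face sketch: build a face subspace via PCA and measure the
# distance of a test patch to that subspace (its reconstruction error).
import numpy as np

def build_face_subspace(face_images, k=20):
    """face_images: (n_samples, h*w) array of flattened, aligned face crops."""
    mean_face = face_images.mean(axis=0)
    centered = face_images - mean_face
    # The top principal components of the face set span the face subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                          # (k, h*w) "eigenface" basis
    return mean_face, basis

def distance_to_face_space(patch, mean_face, basis):
    """Reconstruction error of a flattened test patch w.r.t. the face subspace."""
    centered = patch - mean_face
    projection = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - projection)

# Usage (illustrative): a small distance suggests the patch contains a face.
# faces = np.load("aligned_faces.npy")               # hypothetical training set
# mean_face, basis = build_face_subspace(faces)
# is_face = distance_to_face_space(test_patch, mean_face, basis) < THRESHOLD
```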
(2) Face tracking
Face tracking refers to dynamically tracking a detected face. Typically a model-based method, or a method combining motion information with a model, is used. Tracking with a skin-color model is also a simple and effective approach (sketched below).
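The following is a minimal sketch of skin-color-model tracking using OpenCV's CamShift on a back-projected hue histogram. The video source, initial face window and HSV skin range are illustrative assumptions and would need tuning in practice.

```python
# Skin-color tracking sketch: model skin with a hue histogram taken from an
# initial face region, then let CamShift follow the skin-colored blob.
import cv2

cap = cv2.VideoCapture(0)                      # assumed webcam source
track_window = (200, 150, 100, 100)            # assumed initial face region (x, y, w, h)

ok, frame = cap.read()
roi = frame[150:250, 200:300]                  # crop matching track_window
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# Hue histogram inside the initial face region models skin color.
mask = cv2.inRange(hsv_roi, (0, 30, 60), (20, 150, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift shifts and resizes the window toward the skin-colored region.
    _rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:           # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```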
(3) Face comparison
Face comparison confirms the identity of the detected face image or searches for a target in a face image database. In practice, this means comparing the sampled face image with the stored face images one by one and finding the best match. The way a face image is described therefore determines the specific method and performance of face recognition.
Two description methods are mainly used: the feature vector and the face pattern template:
①Feature vector method
This method first determines the size, position, distance and other attributes of facial features such as the irises of the eyes, the wings of the nose and the corners of the mouth, then computes geometric feature quantities from them; these quantities together form a feature vector that describes the face image.
②Face pattern template method
This method stores a number of standard face image templates or facial organ templates in a library. During comparison, the pixels of the sampled face image are matched against all templates in the library using a normalized correlation measure (sketched below). There are also methods that use autocorrelation networks for pattern recognition, or that combine features with templates.
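As a small illustration of the face pattern template idea, the sketch below matches a sampled face against every stored template with OpenCV's normalized cross-correlation measure and keeps the best score. The template names, file paths and acceptance threshold are assumptions.

```python
# Face-pattern-template sketch: normalized correlation of a probe image
# against a library of stored templates; the highest score wins.
import cv2

def best_template_match(sample_gray, templates):
    """templates: dict name -> grayscale template no larger than the sample."""
    scores = {}
    for name, tpl in templates.items():
        # TM_CCORR_NORMED is OpenCV's normalized cross-correlation measure.
        result = cv2.matchTemplate(sample_gray, tpl, cv2.TM_CCORR_NORMED)
        scores[name] = float(result.max())
    best = max(scores, key=scores.get)
    return best, scores[best]

# Usage (illustrative):
# sample = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE)
# templates = {"alice": cv2.imread("alice.png", cv2.IMREAD_GRAYSCALE)}
# name, score = best_template_match(sample, templates)
# accepted = score > 0.9        # assumed acceptance threshold
```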
The core of face recognition technology is "local feature analysis" and "graphic/neural recognition algorithms": identification parameters are formed from the geometric relationships among the organs and characteristic parts of the face, and these are compared, judged and confirmed against the original parameters stored in the database. The judgment is generally required to take less than one second.
Recognition Process
The recognition process is generally divided into three steps:
(1) First, build a face image archive. A camera collects face images of, for example, an organization's personnel, or their photos are taken; these face images are encoded into faceprint codes and stored.
(2) Capture the current face image. A camera captures the face of the person currently entering or exiting, or a photo is input, and a faceprint code is generated from the current face image.
(3) Compare the current faceprint code with those in the archive. The faceprint code of the current face image is matched against the faceprint codes held in the archive. The "faceprint encoding" above works from the essential features of the human face and is robust against changes in lighting, skin tone, facial hair, hairstyle, eyewear, expression and posture, allowing it to accurately identify an individual from among millions of people. The face recognition process can be completed automatically, continuously, and in real time using ordinary image processing equipment. A minimal sketch of these three steps follows.
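The sketch below walks through the three steps with a deliberately simple stand-in encoder (a downsampled, normalized grayscale vector) instead of a real faceprint algorithm; the function names, crop size and similarity threshold are all illustrative assumptions.

```python
# Enrollment / capture / comparison sketch with a toy "faceprint" encoder.
import numpy as np
import cv2

def faceprint(face_bgr, size=(32, 32)):
    """Encode a face crop into a unit-length vector (illustrative stand-in)."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    code = cv2.resize(gray, size).astype(np.float32).ravel()
    return code / (np.linalg.norm(code) + 1e-9)

archive = {}                                   # step (1): the faceprint archive

def enroll(person_id, face_bgr):
    archive[person_id] = faceprint(face_bgr)

def identify(face_bgr, threshold=0.95):
    """Steps (2) and (3): encode the current face and compare with the archive."""
    probe = faceprint(face_bgr)
    best_id, best_sim = None, -1.0
    for person_id, code in archive.items():
        sim = float(probe @ code)              # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```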
Technical Process
The face recognition system mainly comprises four components: face image collection and detection, face image preprocessing, face image feature extraction, and matching and recognition.
Face image collection and detection
Face image collection: Different kinds of face images can be collected through a camera lens, including static images, dynamic images, and images at different positions and with different expressions. When the user is within the shooting range of the collection device, the device automatically searches for and captures the user's face image.
Face detection: In practice, face detection mainly serves as preprocessing for face recognition, that is, accurately locating the position and size of the face in the image. Face images contain rich pattern features, such as histogram features, color features, template features, structural features and Haar features. Face detection picks out the useful information among these and uses it to detect faces.
The mainstream face detection approach uses the AdaBoost learning algorithm on these features. AdaBoost is a classification method that combines several weaker classifiers into a new, stronger classifier.
In face detection, the AdaBoost algorithm selects the rectangular features (weak classifiers) that best represent the face, combines the weak classifiers into strong classifiers by weighted voting, and then connects the trained strong classifiers in series to form a cascade classifier, which greatly improves detection speed.
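For reference, here is a minimal sketch of running such an AdaBoost-trained cascade of Haar features with the frontal-face cascade file bundled with opencv-python; the input image path and detection parameters are assumptions.

```python
# Cascade detection sketch: the classifier evaluates its strong classifiers in
# series over a sliding window, rejecting non-face windows early for speed.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")            # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("detected.jpg", img)
```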
Face image preprocessing
Face image preprocessing: Preprocessing operates on the face detection results, processing the image so that it can serve the feature extraction stage. The raw image acquired by the system often cannot be used directly because of various constraints and random interference; it must first undergo preprocessing such as grayscale correction and noise filtering. For face images, preprocessing mainly includes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening.
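The short sketch below covers a few of the steps just listed (grayscale transformation, histogram equalization, noise filtering and size normalization); the target size and filter parameters are illustrative assumptions.

```python
# Preprocessing sketch for a detected face crop.
import cv2

def preprocess_face(face_bgr, size=(112, 112)):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)   # grayscale transformation
    equalized = cv2.equalizeHist(gray)                  # histogram equalization
    denoised = cv2.GaussianBlur(equalized, (3, 3), 0)   # noise filtering
    normalized = cv2.resize(denoised, size)             # geometric normalization
    return normalized
```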
Face image feature extraction
Face image feature extraction: The features a face recognition system can use are usually divided into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and so on. Facial feature extraction is performed on certain features of the face. Also called face representation, it is the process of modeling the features of a face. Feature extraction methods fall into two broad categories: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning.
Knowledge-based representation methods derive feature data useful for face classification mainly from shape descriptions of the facial organs and the distances between them; the feature components typically include Euclidean distances, curvatures and angles between feature points. The human face is composed of the eyes, nose, mouth, chin and other parts, and geometric descriptions of these parts and of the structural relationships among them can serve as important features for recognizing faces; these are called geometric features. Knowledge-based face representation mainly includes geometric-feature methods and template matching methods.
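As a small example of a knowledge-based (geometric) feature vector, the sketch below turns a few 2-D landmark coordinates into normalized distances and an angle. The landmark names and the normalization by inter-eye distance are illustrative assumptions.

```python
# Geometric feature sketch: distances and an angle computed from landmarks.
import numpy as np

def geometric_features(landmarks):
    """landmarks: dict of (x, y) points: left_eye, right_eye, nose, mouth."""
    le, re = np.array(landmarks["left_eye"]), np.array(landmarks["right_eye"])
    nose, mouth = np.array(landmarks["nose"]), np.array(landmarks["mouth"])
    eye_dist = np.linalg.norm(le - re)                    # scale reference
    return np.array([
        np.linalg.norm(nose - mouth) / eye_dist,          # nose-to-mouth distance
        np.linalg.norm((le + re) / 2 - nose) / eye_dist,  # eye-midpoint-to-nose
        np.arctan2(*(re - le)[::-1]),                     # eye-line angle (roll)
    ])

# Usage (illustrative):
# feats = geometric_features({"left_eye": (30, 40), "right_eye": (70, 40),
#                             "nose": (50, 60), "mouth": (50, 80)})
```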
Face image matching and recognition
Face image matching and recognition: The extracted feature data of the face image is searched and matched against the feature templates stored in the database; a threshold is set, and when the similarity exceeds it the matching result is output. Face recognition compares the facial features to be recognized with the facial feature templates already obtained and judges the identity of the face according to the degree of similarity. This process falls into two categories: confirmation, a one-to-one image comparison, and identification, a one-to-many image matching and comparison (both sketched below).
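The sketch below separates the two comparison modes just described, assuming features have already been extracted as unit-normalized vectors; the similarity threshold is an assumption.

```python
# Verification (1:1) versus identification (1:N) sketch over feature vectors.
def verify(probe, claimed_template, threshold=0.8):
    """Confirmation: one-to-one check against the claimed identity's template."""
    return float(probe @ claimed_template) >= threshold

def identify(probe, templates, threshold=0.8):
    """Identification: one-to-many search; return the best match above threshold."""
    best_id, best_sim = None, -1.0
    for person_id, template in templates.items():
        sim = float(probe @ template)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None
```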