Using AI to find loved ones separated by the Holocaust! Google engineer develops facial recognition program that can search more than 700,000 old World War II photos
Has AI facial recognition found a new line of work?
This time, the task is identifying faces in old photographs from World War II.
Recently, Daniel Patt, a software engineer at Google, developed an AI facial recognition tool called N2N (Numbers to Names) that can identify faces in photographs from pre-war Europe and the Holocaust and connect them to people living today.
Patt first had the idea in 2016, while visiting the Museum of the History of Polish Jews in Warsaw.
Could these unfamiliar faces be related to him by blood?
Three of his grandparents were Holocaust survivors from Poland, and he wanted to help his grandmother find photographs of family members killed by the Nazis.
During World War II, enormous numbers of Polish Jews were imprisoned in different concentration camps, and many of them went missing.
From a single yellowed photograph it is hard to identify a face, let alone trace a lost relative.
So he returned home and immediately set about turning the idea into reality.
The original idea for the software was to collect facial images in a database and use AI algorithms to return the ten candidates most similar to a query face.
Most of the image data comes from the US Holocaust Memorial Museum, drawing on more than a million images from databases across the country.
Users simply select an image file on their computer and click upload, and the system automatically returns the ten photos that match most closely.
Users can also click through to the source to view each photo's year, location, collection, and other details.
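The retrieval step at the heart of such a tool can be sketched in a few lines of Python. This is a minimal illustration, not N2N's actual code: `embed()` is a hypothetical stand-in for a trained face-embedding network, and the archive is toy data.

```python
import zlib
import numpy as np

def embed(image_path: str) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding CNN.

    A real system would run the photo through the network; here we
    derive a deterministic vector from the file name so the sketch
    runs end to end.
    """
    rng = np.random.default_rng(zlib.crc32(image_path.encode()))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)  # L2-normalize the embedding

def top_matches(query_path, archive_vecs, archive_meta, k=10):
    """Return the k archive photos most similar to the query face."""
    q = embed(query_path)
    scores = archive_vecs @ q             # cosine similarity (unit vectors)
    best = np.argsort(scores)[::-1][:k]   # indices of the k highest scores
    return [(archive_meta[i], float(scores[i])) for i in best]

# Toy archive: embeddings plus minimal metadata for 1,000 photos.
archive_vecs = np.stack([embed(f"photo_{i}.jpg") for i in range(1000)])
archive_meta = [{"id": i, "year": 1939 + i % 7} for i in range(1000)]

for meta, score in top_matches("query.jpg", archive_vecs, archive_meta):
    print(meta, f"similarity={score:.3f}")
```

In a real deployment the archive embeddings would be precomputed once and served from an approximate nearest-neighbor index rather than a dense matrix multiply.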
One drawback: if you upload a photo of a modern person, the search results can be wildly off.
In short, the system still has room for improvement.
In addition, Patt is working with other software engineers and data scientists at Google to improve the scope and accuracy of the search.
Because facial recognition systems carry privacy risks, Patt says: "We do not make any determination of identity. We only present the results with similarity scores and let users draw their own conclusions."
So how does this technology recognize faces?
Face recognition technology has to start with a more basic question: how to determine whether an image contains a face at all.
In 2001, computer vision researchers Paul Viola and Michael Jones proposed a framework for detecting faces in real time with high accuracy.
The framework trains a model to learn what is and is not a face.
After training, the model extracts specific features and stores them, so that features from new images can be compared with the stored features at successive stages of a cascade.
To help ensure accuracy, the algorithm needs to be trained on "a large data set containing hundreds of thousands of positive and negative images," which improves the algorithm's ability to determine whether a face is in an image and where it is.
If the region under examination passes every stage of feature comparison, a face has been detected and processing can proceed.
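OpenCV ships a pretrained implementation of this cascade approach, so the detection step described above can be tried in a few lines. A minimal sketch, assuming the `opencv-python` package is installed and a local file `group_photo.jpg` exists:

```python
import cv2

# Load the pretrained frontal-face Haar cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")           # assumed local file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the cascade works on grayscale

# A window is reported as a face only if it passes every cascade stage.
faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                  minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
print(f"Detected {len(faces)} face(s)")
```

Raising `minNeighbors` trims false positives at the cost of missing some faces, while `scaleFactor` controls how finely the image pyramid is sampled.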
Although the Viola-Jones framework is highly accurate for real-time face detection, it has certain limitations.
For example, it may fail if a face is partially covered by a mask or is not oriented toward the camera.
To address the shortcomings of the Viola-Jones framework and improve face detection, researchers developed additional algorithms, such as the region-based convolutional neural network (R-CNN) and the single-shot detector (SSD).
A convolutional neural network (CNN) is an artificial neural network used for image recognition and processing, designed specifically to work with pixel data.
R-CNN adds region proposals on top of a CNN to localize and classify objects in images.
Methods based on region proposal networks, such as R-CNN, require two passes: one to generate the region proposals and another to detect objects in each proposal. SSD detects multiple objects in an image in a single pass, which makes it significantly faster than R-CNN.
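To illustrate the single-pass idea, OpenCV's DNN module can run the widely shared pretrained ResNet-10 SSD face detector. A minimal sketch, assuming the two model files (`deploy.prototxt` and `res10_300x300_ssd_iter_140000.caffemodel`, distributed with OpenCV's samples) have been downloaded into the working directory:

```python
import cv2
import numpy as np

# Pretrained ResNet-10 SSD face detector from OpenCV's sample models;
# both files are assumed to have been downloaded beforehand.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

image = cv2.imread("group_photo.jpg")  # assumed local file
h, w = image.shape[:2]

# One forward pass ("single shot") produces all candidate boxes at once.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()  # shape: (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:  # keep only confident detections
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("detected_ssd.jpg", image)
```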
In recent years, face recognition driven by deep learning models has significantly outperformed traditional computer vision methods.
Early face recognition relied mostly on traditional machine learning algorithms, and research focused on extracting more discriminative features and aligning faces more effectively.
As research deepened, the performance of traditional machine learning algorithms on two-dimensional images gradually hit a bottleneck.
Researchers turned to face recognition in video, or combined 2D methods with three-dimensional models to push performance further, while a few scholars began to study recognition on three-dimensional face data directly.
On the well-known LFW (Labeled Faces in the Wild) public benchmark, deep learning algorithms broke through the bottleneck that traditional machine learning had reached on two-dimensional images, pushing the recognition rate above 97% for the first time.
The approach uses the high-dimensional model learned by a CNN to extract discriminative identity features directly from the input face image, then performs recognition by computing the cosine distance between those feature vectors.
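In code, that final comparison reduces to a cosine distance between two embedding vectors. The sketch below uses made-up vectors in place of real CNN outputs; the distance computation itself is standard:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - (a . b) / (||a|| * ||b||); smaller means more similar."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embeddings standing in for real CNN outputs on face crops.
rng = np.random.default_rng(0)
emb_a = rng.standard_normal(512)
emb_b = emb_a + 0.1 * rng.standard_normal(512)  # a similar face
emb_c = rng.standard_normal(512)                # an unrelated face

THRESHOLD = 0.4  # illustrative; real systems tune this on validation data
print(cosine_distance(emb_a, emb_b) < THRESHOLD)  # True: treated as same person
print(cosine_distance(emb_a, emb_c) < THRESHOLD)  # False: treated as different
```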
Face detection has evolved from basic computer vision techniques, through advances in machine learning (ML), to increasingly sophisticated artificial neural networks (ANNs), with performance improving at every step.
It now serves as the first step in many critical applications, including face tracking, face analysis, and face recognition.
China, too, suffered the trauma of World War II, and many of the people in photographs from that era can no longer be identified.
Those scarred by the war often have relatives and friends whose whereabouts remain unknown.
The development of this technology may help people reach back through those dusty years and find some measure of comfort.
Reference: https://www.timesofisrael.com/google-engineer-identifies-anonymous-faces-in-wwii-photos-with-ai-facial-recognition/