opencv/samples/dnn/face_match.py
Yuantao Feng 34d359fe03
Merge pull request #20422 from fengyuentau:dnn_face
Add DNN-based face detection and face recognition into modules/objdetect

* Add DNN-based face detector impl and interface

* Add a sample for DNN-based face detector

* add recog

* add notes

* move samples from samples/cpp to samples/dnn

* add documentation for dnn_face

* add set/get methods for input size, nms & score threshold and topk

* remove the DNN prefix from the face detector and face recognizer

* remove default values in the constructor of impl

* regenerate priors after setting input size

* two filenames for readnet

* Update face.hpp

* Update face_recognize.cpp

* Update face_match.cpp

* Update face.hpp

* Update face_recognize.cpp

* Update face_match.cpp

* Update face_recognize.cpp

* Update dnn_face.markdown

* Update dnn_face.markdown

* Update face.hpp

* Update dnn_face.markdown

* add regression test for face detection

* remove underscore prefix; fix warnings

* add reference & acknowledgement for face detection

* Update dnn_face.markdown

* Update dnn_face.markdown

* Update ts.hpp

* Update test_face.cpp

* Update face_match.cpp

* fix a compile error for python interface; add python examples for face detection and recognition

* Major changes for Vadim's comments:

* Replace class name FaceDetector with FaceDetectorYN in related files

* Declare local mat before loop in modules/objdetect/src/face_detect.cpp

* Make input image and save flag optional in samples/dnn/face_detect(.cpp, .py)

* Add camera support in samples/dnn/face_detect(.cpp, .py)

* correct file paths for regression test

* fix conversion warnings; remove extra spaces

* update face_recog

* Update dnn_face.markdown

* Fix warnings and errors for the default CI reports:

* Remove trailing white spaces and extra new lines.

* Fix conversion warnings for Windows and iOS.

* Add braces around initialization of subobjects.

* Fix warnings and errors for the default CI systems:

* Add prefix 'FR_' for each value name in enum DisType to solve the
redefinition error for iOS compilation; Modify other code accordingly

* Add bookmark '#tutorial_dnn_face' to solve warnings from doxygen

* Correct documentations to solve warnings from doxygen

* update FaceRecognizerSF

* Fix the error for CI to find ONNX models correctly

* add suffix f to float assignments

* add backend & target options for initializing face recognizer

* add checkeq for checking input size and preset size

* update test and threshold

* changes in response to alalek's comments:

* fix typos in samples/dnn/face_match.py

* import numpy before importing cv2

* add documentation to .setInputSize()

* remove extra include in face_recognize.cpp

* fix some bugs

* Update dnn_face.markdown

* update thresholds; remove useless code

* add time suffix to YuNet filename in test

* objdetect: update test code
2021-10-08 19:13:49 +00:00

57 lines
2.3 KiB
Python

import argparse
import numpy as np
import cv2 as cv
parser = argparse.ArgumentParser()
parser.add_argument('--input1', '-i1', type=str, help='Path to the input image1.')
parser.add_argument('--input2', '-i2', type=str, help='Path to the input image2.')
parser.add_argument('--face_detection_model', '-fd', type=str, help='Path to the face detection model. Download the model at https://github.com/ShiqiYu/libfacedetection.train/tree/master/tasks/task1/onnx.')
parser.add_argument('--face_recognition_model', '-fr', type=str, help='Path to the face recognition model. Download the model at https://drive.google.com/file/d/1ClK9WiB492c5OZFKveF3XiHCejoOxINW/view.')
args = parser.parse_args()
# Read the input images
img1 = cv.imread(args.input1)
img2 = cv.imread(args.input2)
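
# Not part of the original sample: a minimal sanity check, assuming the two
# paths passed on the command line point at readable images (cv.imread
# returns None when a file cannot be read or decoded).
assert img1 is not None, 'Cannot read image at {}'.format(args.input1)
assert img2 is not None, 'Cannot read image at {}'.format(args.input2)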
# Instantiate face detector and recognizer
detector = cv.FaceDetectorYN.create(
    args.face_detection_model,
    "",
    (img1.shape[1], img1.shape[0])
)
recognizer = cv.FaceRecognizerSF.create(
    args.face_recognition_model,
    ""
)
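
# Illustrative only (not in the original sample): this PR also adds setters on
# FaceDetectorYN for the score threshold, NMS threshold and top-K candidates.
# The values below are placeholders, not tuned recommendations.
# detector.setScoreThreshold(0.9)
# detector.setNMSThreshold(0.3)
# detector.setTopK(5000)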
# Detect face
detector.setInputSize((img1.shape[1], img1.shape[0]))
face1 = detector.detect(img1)
detector.setInputSize((img2.shape[1], img2.shape[0]))
face2 = detector.detect(img2)
assert face1[1].shape[0] > 0, 'Cannot find a face in {}'.format(args.input1)
assert face2[1].shape[0] > 0, 'Cannot find a face in {}'.format(args.input2)
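
# For reference (not in the original sample): each row of the detection output
# (faceN[1]) holds one face as a 15-element vector, roughly
# [x, y, w, h, ten landmark coordinates, score]; alignCrop below consumes the
# whole row.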
# Align faces
face1_align = recognizer.alignCrop(img1, face1[1][0])
face2_align = recognizer.alignCrop(img2, face2[1][0])
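
# For reference (not in the original sample): alignCrop uses the landmarks in
# the detection row to warp the face into the canonical crop expected by the
# SFace recognizer (112x112 pixels, to the best of our knowledge).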
# Extract features
face1_feature = recognizer.feature(face1_align)
face2_feature = recognizer.feature(face2_align)
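
# For reference (not in the original sample): feature() returns a single-row
# float feature vector (128-dimensional for the SFace model) that the two
# match() calls below compare.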
# Calculate distance (0: cosine, 1: L2)
cosine_similarity_threshold = 0.363
cosine_score = recognizer.match(face1_feature, face2_feature, 0)
msg = 'different identities'
if cosine_score >= cosine_similarity_threshold:
    msg = 'the same identity'
print('They have {}. Cosine Similarity: {}, threshold: {} (higher value means higher similarity, max 1.0).'.format(msg, cosine_score, cosine_similarity_threshold))
l2_similarity_threshold = 1.128
l2_score = recognizer.match(face1_feature, face2_feature, 1)
msg = 'different identities'
if l2_score <= l2_similarity_threshold:
    msg = 'the same identity'
print('They have {}. NormL2 Distance: {}, threshold: {} (lower value means higher similarity, min 0.0).'.format(msg, l2_score, l2_similarity_threshold))
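
# Example invocation (illustrative; the model file names are placeholders and
# depend on where the ONNX files from the links above were saved):
#   python face_match.py -i1 image1.jpg -i2 image2.jpg \
#       -fd yunet.onnx -fr face_recognizer_fast.onnx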