add deepface master src

carl 2024-09-04 00:14:08 +00:00
parent afb3f1deff
commit cb8fd60779
149 changed files with 8565 additions and 0 deletions

Dockerfile

@@ -0,0 +1,54 @@
# base image
FROM python:3.8
LABEL org.opencontainers.image.source=https://github.com/serengil/deepface
# -----------------------------------
# create required folder
RUN mkdir /app
RUN mkdir /app/deepface
# -----------------------------------
# switch to application directory
WORKDIR /app
# -----------------------------------
# update image os
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
# -----------------------------------
# Copy required files from repo into image
COPY ./deepface /app/deepface
COPY ./api/app.py /app/
COPY ./api/api.py /app/
COPY ./api/routes.py /app/
COPY ./api/service.py /app/
COPY ./requirements.txt /app/
COPY ./setup.py /app/
COPY ./README.md /app/
# -----------------------------------
# if you plan to use a GPU, you should install the 'tensorflow-gpu' package
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org tensorflow-gpu
# -----------------------------------
# install deepface from pypi release (might be out-of-date)
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org deepface
# -----------------------------------
# install deepface from source code (always up-to-date)
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org -e .
# -----------------------------------
# some packages are optional in deepface. activate if your task depends on one.
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org cmake==3.24.1.1
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org dlib==19.20.0
# RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org lightgbm==2.3.1
# -----------------------------------
# environment variables
ENV PYTHONUNBUFFERED=1
# -----------------------------------
# run the app (re-configure port if necessary)
EXPOSE 5000
CMD ["gunicorn", "--workers=1", "--timeout=3600", "--bind=0.0.0.0:5000", "app:create_app()"]

LICENSE

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2019 Sefik Ilkin Serengil
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Makefile

@@ -0,0 +1,5 @@
test:
cd tests && python -m pytest . -s --disable-warnings
lint:
python -m pylint deepface/ --fail-under=10

README.md

@@ -0,0 +1,375 @@
# deepface
<div align="center">
[![PyPI Downloads](https://static.pepy.tech/personalized-badge/deepface?period=total&units=international_system&left_color=grey&right_color=blue&left_text=pypi%20downloads)](https://pepy.tech/project/deepface)
[![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/deepface?color=green&label=conda%20downloads)](https://anaconda.org/conda-forge/deepface)
[![Stars](https://img.shields.io/github/stars/serengil/deepface?color=yellow&style=flat)](https://github.com/serengil/deepface/stargazers)
[![License](http://img.shields.io/:license-MIT-green.svg?style=flat)](https://github.com/serengil/deepface/blob/master/LICENSE)
[![Tests](https://github.com/serengil/deepface/actions/workflows/tests.yml/badge.svg)](https://github.com/serengil/deepface/actions/workflows/tests.yml)
[![Blog](https://img.shields.io/:blog-sefiks.com-blue.svg?style=flat&logo=wordpress)](https://sefiks.com)
[![YouTube](https://img.shields.io/:youtube-@sefiks-red.svg?style=flat&logo=youtube)](https://www.youtube.com/@sefiks?sub_confirmation=1)
[![Twitter](https://img.shields.io/:follow-@serengil-blue.svg?style=flat&logo=twitter)](https://twitter.com/intent/user?screen_name=serengil)
[![Support me on Patreon](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Fshieldsio-patreon.vercel.app%2Fapi%3Fusername%3Dserengil%26type%3Dpatrons&style=flat)](https://www.patreon.com/serengil?repo=deepface)
[![GitHub Sponsors](https://img.shields.io/github/sponsors/serengil?logo=GitHub&color=lightgray)](https://github.com/sponsors/serengil)
[![DOI](http://img.shields.io/:DOI-10.1109/ASYU50717.2020.9259802-blue.svg?style=flat)](https://doi.org/10.1109/ASYU50717.2020.9259802)
[![DOI](http://img.shields.io/:DOI-10.1109/ICEET53442.2021.9659697-blue.svg?style=flat)](https://doi.org/10.1109/ICEET53442.2021.9659697)
</div>
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-icon-labeled.png" width="200" height="240"></p>
Deepface is a lightweight [face recognition](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) and facial attribute analysis ([age](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [gender](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [emotion](https://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/) and [race](https://sefiks.com/2019/11/11/race-and-ethnicity-prediction-in-keras/)) framework for Python. It is a hybrid face recognition framework wrapping **state-of-the-art** models: [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/), [`Google FaceNet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/), [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/), [`Facebook DeepFace`](https://sefiks.com/2020/02/17/face-recognition-with-facebook-deepface-in-keras/), [`DeepID`](https://sefiks.com/2020/06/16/face-recognition-with-deepid-in-keras/), [`ArcFace`](https://sefiks.com/2020/12/14/deep-face-recognition-with-arcface-in-keras-and-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/) and `SFace`.
Experiments show that human beings have 97.53% accuracy on facial recognition tasks, whereas those models have already reached and surpassed that accuracy level.
## Installation [![PyPI](https://img.shields.io/pypi/v/deepface.svg)](https://pypi.org/project/deepface/) [![Conda](https://img.shields.io/conda/vn/conda-forge/deepface.svg)](https://anaconda.org/conda-forge/deepface)
The easiest way to install deepface is to download it from [`PyPI`](https://pypi.org/project/deepface/). This installs the library itself along with its prerequisites.
```shell
$ pip install deepface
```
Secondly, DeepFace is also available on [`Conda`](https://anaconda.org/conda-forge/deepface), so you can alternatively install the package via conda.
```shell
$ conda install -c conda-forge deepface
```
Thirdly, you can install deepface from its source code.
```shell
$ git clone https://github.com/serengil/deepface.git
$ cd deepface
$ pip install -e .
```
Then you will be able to import the library and use its functionalities.
```python
from deepface import DeepFace
```
**Facial Recognition** - [`Demo`](https://youtu.be/WnUVYQP4h44)
A modern [**face recognition pipeline**](https://sefiks.com/2020/05/01/a-gentle-introduction-to-face-recognition-in-deep-learning/) consists of 5 common stages: [detect](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [align](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [normalize](https://sefiks.com/2020/11/20/facial-landmarks-for-face-recognition-with-dlib/), [represent](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) and [verify](https://sefiks.com/2020/05/22/fine-tuning-the-threshold-in-face-recognition/). Deepface handles all these common stages in the background, so you don't need in-depth knowledge about the processes behind them. You can just call its verification, find or analysis function with a single line of code.
**Face Verification** - [`Demo`](https://youtu.be/KRCvkNCOphE)
This function verifies whether a face pair belongs to the same person or to different persons. It expects exact image paths as inputs; passing numpy arrays or base64 encoded images is also welcome. It returns a dictionary, and you only need to check its `verified` key.
```python
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
```
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-1.jpg" width="95%" height="95%"></p>
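Since the returned dictionary exposes a `verified` key, a typical call site just branches on it. A minimal sketch:

```python
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")

if result["verified"]:
    print("they are the same person")
else:
    print("they are different persons")
```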
The verification function can also handle many faces in the face pairs. In that case, the most similar faces are compared.
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/verify-many-faces.jpg" width="95%" height="95%"></p>
**Face recognition** - [`Demo`](https://youtu.be/Hrjp-EStM_s)
[Face recognition](https://sefiks.com/2020/05/25/large-scale-face-recognition-for-deep-learning/) requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It looks for the identity of the input image in the database path and returns a list of pandas DataFrames as output. Meanwhile, facial embeddings of the facial database are stored in a pickle file so they can be searched faster next time. The result will contain one DataFrame per face appearing in the source image. Besides, target images in the database can have many faces as well.
```python
dfs = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")
```
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-6-v2.jpg" width="95%" height="95%"></p>
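Since find returns one pandas DataFrame per detected face, a typical follow-up is to iterate over the list and read the `identity` column, which holds the matched file paths. A minimal sketch, assuming the database path above exists:

```python
dfs = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")

for df in dfs:
    # each row is a candidate identity from the database
    print(df["identity"])
```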
**Embeddings**
Face recognition models basically represent facial images as multi-dimensional vectors. Sometimes, you need those embedding vectors directly. DeepFace comes with a dedicated represent function, which returns a list of embeddings, one per face appearing in the image path.
```python
embedding_objs = DeepFace.represent(img_path = "img.jpg")
```
Each returned object carries an embedding vector whose size depends on the model name. For instance, VGG-Face is the default model and it represents facial images as 4096-dimensional vectors.
```python
embedding = embedding_objs[0]["embedding"]
assert isinstance(embedding, list)
# VGG-Face is the default model; it produces 4096-dimensional embeddings
assert len(embedding) == 4096
```
Here, the embedding is also [plotted](https://sefiks.com/2020/05/01/a-gentle-introduction-to-face-recognition-in-deep-learning/) with 4096 slots horizontally. Each slot corresponds to one dimension of the embedding vector, and the dimension's value is shown in the colorbar on the right. Similar to 2D barcodes, the vertical dimension stores no information in the illustration.
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/embedding.jpg" width="95%" height="95%"></p>
**Face recognition models** - [`Demo`](https://youtu.be/i_MOwvhbLdI)
Deepface is a **hybrid** face recognition package. It currently wraps many **state-of-the-art** face recognition models: [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/), [`Google FaceNet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/), [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/), [`Facebook DeepFace`](https://sefiks.com/2020/02/17/face-recognition-with-facebook-deepface-in-keras/), [`DeepID`](https://sefiks.com/2020/06/16/face-recognition-with-deepid-in-keras/), [`ArcFace`](https://sefiks.com/2020/12/14/deep-face-recognition-with-arcface-in-keras-and-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/) and `SFace`. The default configuration uses the VGG-Face model.
```python
models = [
    "VGG-Face",
    "Facenet",
    "Facenet512",
    "OpenFace",
    "DeepFace",
    "DeepID",
    "ArcFace",
    "Dlib",
    "SFace",
]

# face verification
result = DeepFace.verify(
    img1_path = "img1.jpg",
    img2_path = "img2.jpg",
    model_name = models[0]
)

# face recognition
dfs = DeepFace.find(
    img_path = "img1.jpg",
    db_path = "C:/workspace/my_db",
    model_name = models[1]
)

# embeddings
embedding_objs = DeepFace.represent(
    img_path = "img.jpg",
    model_name = models[2]
)
```
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/model-portfolio-v8.jpg" width="95%" height="95%"></p>
FaceNet, VGG-Face, ArcFace and Dlib are the [top performers](https://youtu.be/i_MOwvhbLdI) based on experiments. Below you can find the scores of those models on both the [Labeled Faces in the Wild](https://sefiks.com/2020/08/27/labeled-faces-in-the-wild-for-face-recognition/) (LFW) and YouTube Faces (YTF) data sets, as declared by their creators.
| Model | LFW Score | YTF Score |
| --- | --- | --- |
| Facenet512 | 99.65% | - |
| SFace | 99.60% | - |
| ArcFace | 99.41% | - |
| Dlib | 99.38% | - |
| Facenet | 99.20% | - |
| VGG-Face | 98.78% | 97.40% |
| *Human-beings* | *97.53%* | - |
| OpenFace | 93.80% | - |
| DeepID | - | 97.05% |
**Similarity**
Face recognition models are regular [convolutional neural networks](https://sefiks.com/2018/03/23/convolutional-autoencoder-clustering-images-with-neural-networks/) and they are responsible for representing faces as vectors. We expect that a face pair of the same person should be [more similar](https://sefiks.com/2020/05/22/fine-tuning-the-threshold-in-face-recognition/) than a face pair of different persons.
Similarity can be calculated with different metrics such as [Cosine Similarity](https://sefiks.com/2018/08/13/cosine-similarity-in-machine-learning/), Euclidean distance and L2-normalized Euclidean distance. The default configuration uses cosine similarity.
```python
metrics = ["cosine", "euclidean", "euclidean_l2"]

# face verification
result = DeepFace.verify(
    img1_path = "img1.jpg",
    img2_path = "img2.jpg",
    distance_metric = metrics[1]
)

# face recognition
dfs = DeepFace.find(
    img_path = "img1.jpg",
    db_path = "C:/workspace/my_db",
    distance_metric = metrics[2]
)
```
Euclidean L2 form [seems](https://youtu.be/i_MOwvhbLdI) to be more stable than cosine and regular Euclidean distance based on experiments.
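For intuition, both metrics can be reproduced from two embedding vectors with plain NumPy. This is an illustrative sketch of the math, not deepface's internal implementation:

```python
import numpy as np

def cosine_distance(a, b):
    # 1 minus the cosine similarity of the two vectors
    a, b = np.asarray(a), np.asarray(b)
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_l2_distance(a, b):
    # euclidean distance after scaling both vectors to unit length
    a, b = np.asarray(a), np.asarray(b)
    return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))
```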
**Facial Attribute Analysis** - [`Demo`](https://youtu.be/GT2UeN85BdA)
Deepface also comes with a strong facial attribute analysis module covering [`age`](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [`gender`](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [`facial expression`](https://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/) (including angry, fear, neutral, sad, disgust, happy and surprise) and [`race`](https://sefiks.com/2019/11/11/race-and-ethnicity-prediction-in-keras/) (including asian, white, middle eastern, indian, latino and black) predictions. The result will contain one entry per face appearing in the source image.
```python
objs = DeepFace.analyze(
    img_path = "img4.jpg",
    actions = ['age', 'gender', 'race', 'emotion']
)
```
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-2.jpg" width="95%" height="95%"></p>
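Each item in the returned list describes one detected face; the dominant labels and per-class scores can be read directly from the dictionaries:

```python
for obj in objs:
    print(obj["age"], obj["dominant_gender"], obj["dominant_race"], obj["dominant_emotion"])
```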
The age model got ±4.65 MAE; the gender model got 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its [tutorial](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/).
**Face Detectors** - [`Demo`](https://youtu.be/GZ2p2hj2H5k)
Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. [`OpenCV`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [`SSD`](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), [`MTCNN`](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/), [`Faster MTCNN`](https://github.com/timesler/facenet-pytorch), [`RetinaFace`](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/), [`MediaPipe`](https://sefiks.com/2022/01/14/deep-face-detection-with-mediapipe/), [`YOLOv8 Face`](https://github.com/derronqi/yolov8-face) and [`YuNet`](https://github.com/ShiqiYu/libfacedetection) detectors are wrapped in deepface.
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/detector-portfolio-v5.jpg" width="95%" height="95%"></p>
All deepface functions accept an optional `detector_backend` input argument. You can switch among those detectors with this argument. OpenCV is the default detector.
```python
backends = [
    'opencv',
    'ssd',
    'dlib',
    'mtcnn',
    'retinaface',
    'mediapipe',
    'yolov8',
    'yunet',
    'fastmtcnn',
]

# face verification
obj = DeepFace.verify(
    img1_path = "img1.jpg",
    img2_path = "img2.jpg",
    detector_backend = backends[0]
)

# face recognition
dfs = DeepFace.find(
    img_path = "img.jpg",
    db_path = "my_db",
    detector_backend = backends[1]
)

# embeddings
embedding_objs = DeepFace.represent(
    img_path = "img.jpg",
    detector_backend = backends[2]
)

# facial analysis
demographies = DeepFace.analyze(
    img_path = "img4.jpg",
    detector_backend = backends[3]
)

# face detection and alignment
face_objs = DeepFace.extract_faces(
    img_path = "img.jpg",
    target_size = (224, 224),
    detector_backend = backends[4]
)
```
Face recognition models are actually CNN models and they expect standard-sized inputs, so resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.
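The resize-with-padding idea can be sketched in a few lines. This is a simplified illustration of the approach, not deepface's exact implementation:

```python
import cv2
import numpy as np

def resize_with_padding(img: np.ndarray, target_size = (224, 224)) -> np.ndarray:
    # shrink the face so it fits inside the target while preserving the aspect ratio
    factor = min(target_size[0] / img.shape[0], target_size[1] / img.shape[1])
    resized = cv2.resize(img, (int(img.shape[1] * factor), int(img.shape[0] * factor)))

    # fill the remaining area with black pixels
    pad_y = target_size[0] - resized.shape[0]
    pad_x = target_size[1] - resized.shape[1]
    return np.pad(
        resized,
        ((pad_y // 2, pad_y - pad_y // 2), (pad_x // 2, pad_x - pad_x // 2), (0, 0)),
        "constant",
    )
```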
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-detectors-v3.jpg" width="90%" height="90%"></p>
[RetinaFace](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/) and [MTCNN](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/) appear to outperform the other detectors in the detection and alignment stages, but they are much slower. If the speed of your pipeline is more important, you should use opencv or ssd. If accuracy is more important, you should use retinaface or mtcnn.
The performance of RetinaFace is very satisfactory even in crowds, as seen in the following illustration. Besides, it comes with incredible facial landmark detection performance. The highlighted red points show some facial landmarks such as eyes, nose and mouth. That's why the alignment score of RetinaFace is high as well.
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/retinaface-results.jpeg" width="90%" height="90%">
<br><em>The Yellow Angels - Fenerbahce Women's Volleyball Team</em>
</p>
You can find out more about RetinaFace on this [repo](https://github.com/serengil/retinaface).
**Real Time Analysis** - [`Demo`](https://youtu.be/-c9sSJcx6wI)
You can run deepface on real-time video as well. The stream function accesses your webcam and applies both face recognition and facial attribute analysis. It starts analyzing a frame once it can focus on a face for 5 consecutive frames, and then shows the results for 5 seconds.
```python
DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")
```
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-3.jpg" width="90%" height="90%"></p>
Even though face recognition is based on one-shot learning, you can also use multiple face pictures of a person. You should rearrange your directory structure as illustrated below.
```bash
user
├── database
│ ├── Alice
│ │ ├── Alice1.jpg
│ │ ├── Alice2.jpg
│ ├── Bob
│ │ ├── Bob.jpg
```
**API** - [`Demo`](https://youtu.be/HeKCQ6U9XmI)
DeepFace serves an API as well. You can clone the [`/api`](https://github.com/serengil/deepface/tree/master/api) folder and run the API via a gunicorn server. This will bring up a REST service, so you can call deepface from an external system such as a mobile app or a web site.
```shell
cd scripts
./service.sh
```
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-api.jpg" width="90%" height="90%"></p>
Face recognition, facial attribute analysis and vector representation functions are covered by the API. You are expected to call these functions as HTTP POST methods. The default service endpoints are `http://localhost:5000/verify` for face recognition, `http://localhost:5000/analyze` for facial attribute analysis, and `http://localhost:5000/represent` for vector representation. You can pass input images as exact image paths in your environment, base64 encoded strings or images on the web. [Here](https://github.com/serengil/deepface/tree/master/api), you can find a Postman project showing how these methods should be called.
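As a sketch of a client call, assuming the service is running locally on port 5000 and using the request keys accepted by the `/verify` route (see `api/routes.py` below); the image paths are hypothetical:

```python
import requests

resp = requests.post(
    "http://localhost:5000/verify",
    json = {
        "img1_path": "dataset/img1.jpg",
        "img2_path": "dataset/img2.jpg",
        "model_name": "VGG-Face",
    },
)
print(resp.json())
```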
**Dockerized Service**
You can deploy the deepface API on a Kubernetes cluster with Docker. The following [shell script](https://github.com/serengil/deepface/blob/master/scripts/dockerize.sh) will serve deepface on `localhost:5000`. You need to re-configure the [Dockerfile](https://github.com/serengil/deepface/blob/master/Dockerfile) if you want to change the port. Then, even if you do not have a development environment, you will be able to consume deepface services such as verify and analyze. You can also access the inside of the Docker image to run deepface-related commands. Please follow the instructions in the [shell script](https://github.com/serengil/deepface/blob/master/scripts/dockerize.sh).
```shell
cd scripts
./dockerize.sh
```
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-dockerized-v2.jpg" width="50%" height="50%"></p>
**Command Line Interface** - [`Demo`](https://youtu.be/PKKTAr3ts2s)
DeepFace comes with a command line interface as well. You can access its functions from the command line as shown below. The deepface command expects the function name as its first argument and the function's arguments thereafter.
```shell
#face verification
$ deepface verify -img1_path tests/dataset/img1.jpg -img2_path tests/dataset/img2.jpg
#facial analysis
$ deepface analyze -img_path tests/dataset/img1.jpg
```
You can also run these commands if you are running deepface with docker. Please follow the instructions in the [shell script](https://github.com/serengil/deepface/blob/master/scripts/dockerize.sh#L17).
## Contribution [![Tests](https://github.com/serengil/deepface/actions/workflows/tests.yml/badge.svg)](https://github.com/serengil/deepface/actions/workflows/tests.yml)
Pull requests are more than welcome! You should run the unit tests locally by running [`tests/unit_tests.py`](https://github.com/serengil/deepface/blob/master/tests/unit_tests.py) before creating a PR. Once a PR is sent, the GitHub test workflow runs automatically and unit test results become available in [GitHub Actions](https://github.com/serengil/deepface/actions) before approval. The workflow also evaluates the code with Pylint.
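Locally, running the tests and the linter amounts to the two targets in the Makefile shown earlier:

```shell
# equivalent to `make test`
cd tests && python -m pytest . -s --disable-warnings

# equivalent to `make lint` (run from the repository root)
python -m pylint deepface/ --fail-under=10
```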
## Support
There are many ways to support a project - starring⭐ the GitHub repo is just one 🙏
You can also support this work on [Patreon](https://www.patreon.com/serengil?repo=deepface) or [GitHub Sponsors](https://github.com/sponsors/serengil).
<a href="https://www.patreon.com/serengil?repo=deepface">
<img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/patreon.png" width="30%" height="30%">
</a>
## Citation
Please cite deepface in your publications if it helps your research. Here are its BibTeX entries:
If you use deepface for facial recognition purposes, please cite this publication.
```BibTeX
@inproceedings{serengil2020lightface,
title = {LightFace: A Hybrid Deep Face Recognition Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
pages = {23-27},
year = {2020},
doi = {10.1109/ASYU50717.2020.9259802},
url = {https://doi.org/10.1109/ASYU50717.2020.9259802},
organization = {IEEE}
}
```
If you use deepface for facial attribute analysis purposes such as age, gender, emotion or ethnicity prediction, or for face detection purposes, please cite this publication.
```BibTeX
@inproceedings{serengil2021lightface,
title = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
booktitle = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
pages = {1-4},
year = {2021},
doi = {10.1109/ICEET53442.2021.9659697},
url = {https://doi.org/10.1109/ICEET53442.2021.9659697},
organization = {IEEE}
}
```
Also, if you use deepface in your GitHub projects, please add `deepface` to your `requirements.txt`.
## Licence
DeepFace is licensed under the MIT License - see [`LICENSE`](https://github.com/serengil/deepface/blob/master/LICENSE) for more details.
DeepFace wraps some external face recognition models: [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/), [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md), [OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/LICENSE), [DeepFace](https://github.com/swghosh/DeepFace), [DeepID](https://github.com/Ruoyiran/DeepID/blob/master/LICENSE.md), [ArcFace](https://github.com/leondgarse/Keras_insightface/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/dlib/LICENSE.txt), and [SFace](https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/LICENSE). Besides, the age, gender and race / ethnicity models were trained on the backbone of VGG-Face with transfer learning. Licence types will be inherited if you use those models, so please check their licence types before using them for production purposes.
The DeepFace [logo](https://thenounproject.com/term/face-recognition/2965879/) was created by [Adrien Coquet](https://thenounproject.com/coquet_adrien/) and is licensed under the [Creative Commons: By Attribution 3.0 License](https://creativecommons.org/licenses/by/3.0/).

api/api.py

@@ -0,0 +1,9 @@
import argparse
import app
if __name__ == "__main__":
deepface_app = app.create_app()
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--port", type=int, default=5000, help="Port of serving api")
args = parser.parse_args()
deepface_app.run(host="0.0.0.0", port=args.port)

api/app.py

@@ -0,0 +1,9 @@
# 3rd party dependencies
from flask import Flask
from routes import blueprint
def create_app():
app = Flask(__name__)
app.register_blueprint(blueprint)
return app

api/postman/deepface-api.postman_collection.json

@@ -0,0 +1,102 @@
{
"info": {
"_postman_id": "4c0b144e-4294-4bdd-8072-bcb326b1fed2",
"name": "deepface-api",
"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
},
"item": [
{
"name": "Represent",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "raw",
"raw": "{\n \"model_name\": \"Facenet\",\n \"img\": \"/Users/sefik/Desktop/deepface/tests/dataset/img1.jpg\"\n}",
"options": {
"raw": {
"language": "json"
}
}
},
"url": {
"raw": "http://127.0.0.1:5000/represent",
"protocol": "http",
"host": [
"127",
"0",
"0",
"1"
],
"port": "5000",
"path": [
"represent"
]
}
},
"response": []
},
{
"name": "Face verification",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "raw",
"raw": " {\n \t\"img1_path\": \"/Users/sefik/Desktop/deepface/tests/dataset/img1.jpg\",\n \"img2_path\": \"/Users/sefik/Desktop/deepface/tests/dataset/img2.jpg\",\n \"model_name\": \"Facenet\",\n \"detector_backend\": \"mtcnn\",\n \"distance_metric\": \"euclidean\"\n }",
"options": {
"raw": {
"language": "json"
}
}
},
"url": {
"raw": "http://127.0.0.1:5000/verify",
"protocol": "http",
"host": [
"127",
"0",
"0",
"1"
],
"port": "5000",
"path": [
"verify"
]
}
},
"response": []
},
{
"name": "Face analysis",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "raw",
"raw": "{\n \"img_path\": \"/Users/sefik/Desktop/deepface/tests/dataset/couple.jpg\",\n \"actions\": [\"age\", \"gender\", \"emotion\", \"race\"]\n}",
"options": {
"raw": {
"language": "json"
}
}
},
"url": {
"raw": "http://127.0.0.1:5000/analyze",
"protocol": "http",
"host": [
"127",
"0",
"0",
"1"
],
"port": "5000",
"path": [
"analyze"
]
}
},
"response": []
}
]
}

api/routes.py

@@ -0,0 +1,100 @@
from flask import Blueprint, request
import service
blueprint = Blueprint("routes", __name__)
@blueprint.route("/")
def home():
return "<h1>Welcome to DeepFace API!</h1>"
@blueprint.route("/represent", methods=["POST"])
def represent():
input_args = request.get_json()
if input_args is None:
return {"message": "empty input set passed"}
img_path = input_args.get("img")
if img_path is None:
return {"message": "you must pass img_path input"}
model_name = input_args.get("model_name", "VGG-Face")
detector_backend = input_args.get("detector_backend", "opencv")
enforce_detection = input_args.get("enforce_detection", True)
align = input_args.get("align", True)
obj = service.represent(
img_path=img_path,
model_name=model_name,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
)
return obj
@blueprint.route("/verify", methods=["POST"])
def verify():
input_args = request.get_json()
if input_args is None:
return {"message": "empty input set passed"}
img1_path = input_args.get("img1_path")
img2_path = input_args.get("img2_path")
if img1_path is None:
return {"message": "you must pass img1_path input"}
if img2_path is None:
return {"message": "you must pass img2_path input"}
model_name = input_args.get("model_name", "VGG-Face")
detector_backend = input_args.get("detector_backend", "opencv")
enforce_detection = input_args.get("enforce_detection", True)
distance_metric = input_args.get("distance_metric", "cosine")
align = input_args.get("align", True)
verification = service.verify(
img1_path=img1_path,
img2_path=img2_path,
model_name=model_name,
detector_backend=detector_backend,
distance_metric=distance_metric,
align=align,
enforce_detection=enforce_detection,
)
verification["verified"] = str(verification["verified"])
return verification
@blueprint.route("/analyze", methods=["POST"])
def analyze():
input_args = request.get_json()
if input_args is None:
return {"message": "empty input set passed"}
img_path = input_args.get("img_path")
if img_path is None:
return {"message": "you must pass img_path input"}
detector_backend = input_args.get("detector_backend", "opencv")
enforce_detection = input_args.get("enforce_detection", True)
align = input_args.get("align", True)
actions = input_args.get("actions", ["age", "gender", "emotion", "race"])
demographies = service.analyze(
img_path=img_path,
actions=actions,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
)
return demographies

api/service.py

@@ -0,0 +1,42 @@
from deepface import DeepFace
def represent(img_path, model_name, detector_backend, enforce_detection, align):
result = {}
embedding_objs = DeepFace.represent(
img_path=img_path,
model_name=model_name,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
)
result["results"] = embedding_objs
return result
def verify(
img1_path, img2_path, model_name, detector_backend, distance_metric, enforce_detection, align
):
obj = DeepFace.verify(
img1_path=img1_path,
img2_path=img2_path,
model_name=model_name,
detector_backend=detector_backend,
distance_metric=distance_metric,
align=align,
enforce_detection=enforce_detection,
)
return obj
def analyze(img_path, actions, detector_backend, enforce_detection, align):
result = {}
demographies = DeepFace.analyze(
img_path=img_path,
actions=actions,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
)
result["results"] = demographies
return result

deepface/DeepFace.py

@@ -0,0 +1,485 @@
# common dependencies
import os
import warnings
import logging
from typing import Any, Dict, List, Tuple, Union
# 3rd party dependencies
import numpy as np
import pandas as pd
import tensorflow as tf
from deprecated import deprecated
# package dependencies
from deepface.commons import functions
from deepface.commons.logger import Logger
from deepface.modules import (
modeling,
representation,
verification,
recognition,
demography,
detection,
realtime,
)
# pylint: disable=no-else-raise, simplifiable-if-expression
logger = Logger(module="DeepFace")
# -----------------------------------
# configurations for dependencies
warnings.filterwarnings("ignore")
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 2:
tf.get_logger().setLevel(logging.ERROR)
from tensorflow.keras.models import Model
else:
from keras.models import Model
# -----------------------------------
def build_model(model_name: str) -> Union[Model, Any]:
"""
This function builds a deepface model
Parameters:
model_name (string): face recognition or facial attribute model
VGG-Face, Facenet, OpenFace, DeepFace, DeepID for face recognition
Age, Gender, Emotion, Race for facial attributes
Returns:
built deepface model ( (tf.)keras.models.Model )
"""
return modeling.build_model(model_name=model_name)
def verify(
img1_path: Union[str, np.ndarray],
img2_path: Union[str, np.ndarray],
model_name: str = "VGG-Face",
detector_backend: str = "opencv",
distance_metric: str = "cosine",
enforce_detection: bool = True,
align: bool = True,
normalization: str = "base",
) -> Dict[str, Any]:
"""
This function verifies whether an image pair belongs to the same person or to different
persons. In the background, it represents facial images as vectors and then calculates
the similarity between those vectors. Vectors of same-person images should have more
similarity (or less distance) than vectors of different persons.
Parameters:
img1_path, img2_path: exact image path as string. Numpy arrays (BGR) or base64 encoded
images are also welcome. If one of the pair has more than one face, then the face pair
with max similarity will be compared.
model_name (str): VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID, Dlib
, ArcFace and SFace
distance_metric (string): cosine, euclidean, euclidean_l2
enforce_detection (boolean): If no face can be detected in an image, then this
function raises an exception by default. Set this to False to avoid the exception.
This might be convenient for low resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8.
align (boolean): alignment according to the eye positions.
normalization (string): normalize the input image before feeding to model
Returns:
Verify function returns a dictionary.
{
"verified": True
, "distance": 0.2563
, "max_threshold_to_verify": 0.40
, "model": "VGG-Face"
, "similarity_metric": "cosine"
, 'facial_areas': {
'img1': {'x': 345, 'y': 211, 'w': 769, 'h': 769},
'img2': {'x': 318, 'y': 534, 'w': 779, 'h': 779}
}
, "time": 2
}
"""
return verification.verify(
img1_path=img1_path,
img2_path=img2_path,
model_name=model_name,
detector_backend=detector_backend,
distance_metric=distance_metric,
enforce_detection=enforce_detection,
align=align,
normalization=normalization,
)
def analyze(
img_path: Union[str, np.ndarray],
actions: Union[tuple, list] = ("emotion", "age", "gender", "race"),
enforce_detection: bool = True,
detector_backend: str = "opencv",
align: bool = True,
silent: bool = False,
) -> List[Dict[str, Any]]:
"""
This function analyzes facial attributes including age, gender, emotion and race.
In the background, analysis function builds convolutional neural network models to
classify age, gender, emotion and race of the input image.
Parameters:
img_path: exact image path, numpy array (BGR) or base64 encoded image could be passed.
If the source image has more than one face, then the result will contain one entry
per face appearing in the image.
actions (tuple): The default is ('age', 'gender', 'emotion', 'race'). You can drop
some of those attributes.
enforce_detection (bool): The function throws an exception by default if no face is
detected. Set this to False if you don't want the exception. This might be convenient
for low resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8.
align (boolean): alignment according to the eye positions.
silent (boolean): disable (some) log messages
Returns:
The function returns a list of dictionaries for each face appearing in the image.
[
{
"region": {'x': 230, 'y': 120, 'w': 36, 'h': 45},
"age": 28.66,
'face_confidence': 0.9993908405303955,
"dominant_gender": "Woman",
"gender": {
'Woman': 99.99407529830933,
'Man': 0.005928758764639497,
}
"dominant_emotion": "neutral",
"emotion": {
'sad': 37.65260875225067,
'angry': 0.15512987738475204,
'surprise': 0.0022171278033056296,
'fear': 1.2489334680140018,
'happy': 4.609785228967667,
'disgust': 9.698561953541684e-07,
'neutral': 56.33133053779602
}
"dominant_race": "white",
"race": {
'indian': 0.5480832420289516,
'asian': 0.7830780930817127,
'latino hispanic': 2.0677512511610985,
'black': 0.06337375962175429,
'middle eastern': 3.088453598320484,
'white': 93.44925880432129
}
}
]
"""
return demography.analyze(
img_path=img_path,
actions=actions,
enforce_detection=enforce_detection,
detector_backend=detector_backend,
align=align,
silent=silent,
)
def find(
img_path: Union[str, np.ndarray],
db_path: str,
model_name: str = "VGG-Face",
distance_metric: str = "cosine",
enforce_detection: bool = True,
detector_backend: str = "opencv",
align: bool = True,
normalization: str = "base",
silent: bool = False,
) -> List[pd.DataFrame]:
"""
This function applies verification several times and find the identities in a database
Parameters:
img_path: exact image path, numpy array (BGR) or base64 encoded image.
Source image can have many faces. Then, result will be the size of number of
faces in the source image.
db_path (string): You should store some image files in a folder and pass the
exact folder path to this. A database image can also have many faces.
Then, all detected faces in db side will be considered in the decision.
model_name (string): VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID,
Dlib, ArcFace, SFace or Ensemble
distance_metric (string): cosine, euclidean, euclidean_l2
enforce_detection (bool): The function throws exception if a face could not be detected.
Set this to False if you don't want to get exception. This might be convenient for low
resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8.
align (boolean): alignment according to the eye positions.
normalization (string): normalize the input image before feeding to model
silent (boolean): disable some logging and progress bars
Returns:
This function returns a list of pandas DataFrames. Each item of the list corresponds to
an identity in the img_path.
"""
return recognition.find(
img_path=img_path,
db_path=db_path,
model_name=model_name,
distance_metric=distance_metric,
enforce_detection=enforce_detection,
detector_backend=detector_backend,
align=align,
normalization=normalization,
silent=silent,
)
def represent(
img_path: Union[str, np.ndarray],
model_name: str = "VGG-Face",
enforce_detection: bool = True,
detector_backend: str = "opencv",
align: bool = True,
normalization: str = "base",
) -> List[Dict[str, Any]]:
"""
This function represents facial images as vectors. The function uses convolutional neural
network models to generate vector embeddings.
Parameters:
img_path (string): exact image path. Alternatively, numpy arrays (BGR) or base64
encoded images could be passed. The source image can have many faces; the result will
then contain one entry per face appearing in the source image.
model_name (string): VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID, Dlib,
ArcFace, SFace
enforce_detection (boolean): If no face can be detected in an image, then this
function raises an exception by default. Set this to False to avoid the exception.
This might be convenient for low resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8. A special value `skip` could be used to skip face-detection
and only encode the given image.
align (boolean): alignment according to the eye positions.
normalization (string): normalize the input image before feeding to model
Returns:
The represent function returns a list of objects; each object has the following fields:
{
// Multidimensional vector
// The number of dimensions is changing based on the reference model.
// E.g. FaceNet returns 128 dimensional vector;
// VGG-Face returns 2622 dimensional vector.
"embedding": np.array,
// Detected Facial-Area by Face detection in dict format.
// (x, y) is left-corner point, and (w, h) is the width and height
// If `detector_backend` == `skip`, it is the full image area and nonsense.
"facial_area": dict{"x": int, "y": int, "w": int, "h": int},
// Face detection confidence.
// If `detector_backend` == `skip`, will be 0 and nonsense.
"face_confidence": float
}
"""
return representation.represent(
img_path=img_path,
model_name=model_name,
enforce_detection=enforce_detection,
detector_backend=detector_backend,
align=align,
normalization=normalization,
)
def stream(
db_path: str = "",
model_name: str = "VGG-Face",
detector_backend: str = "opencv",
distance_metric: str = "cosine",
enable_face_analysis: bool = True,
source: Any = 0,
time_threshold: int = 5,
frame_threshold: int = 5,
) -> None:
"""
This function applies real time face recognition and facial attribute analysis
Parameters:
db_path (string): facial database path. You should store some .jpg files in this folder.
model_name (string): VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID, Dlib,
ArcFace, SFace
detector_backend (string): opencv, retinaface, mtcnn, ssd, dlib, mediapipe or yolov8.
distance_metric (string): cosine, euclidean, euclidean_l2
enable_face_analysis (boolean): Set this to False to just run face recognition
source: Set this to 0 to access the webcam. Otherwise, pass the exact video path.
time_threshold (int): how many seconds the analyzed image will be displayed
frame_threshold (int): how many frames are required to focus on a face
"""
time_threshold = max(time_threshold, 1)
frame_threshold = max(frame_threshold, 1)
realtime.analysis(
db_path=db_path,
model_name=model_name,
detector_backend=detector_backend,
distance_metric=distance_metric,
enable_face_analysis=enable_face_analysis,
source=source,
time_threshold=time_threshold,
frame_threshold=frame_threshold,
)
def extract_faces(
img_path: Union[str, np.ndarray],
target_size: Tuple[int, int] = (224, 224),
detector_backend: str = "opencv",
enforce_detection: bool = True,
align: bool = True,
grayscale: bool = False,
) -> List[Dict[str, Any]]:
"""
This function applies pre-processing stages of a face recognition pipeline
including detection and alignment
Parameters:
img_path: exact image path, numpy array (BGR) or base64 encoded image.
The source image can have many faces; the result will then contain one entry
per face appearing in that source image.
target_size (tuple): final shape of facial image. black pixels will be
added to resize the image.
detector_backend (string): face detection backends are retinaface, mtcnn,
opencv, ssd or dlib
enforce_detection (boolean): function throws an exception if a face cannot be
detected in the fed image. Set this to False if you do not want an exception
and want to run the function anyway.
align (boolean): alignment according to the eye positions.
grayscale (boolean): whether to extract faces in grayscale instead of RGB
Returns:
list of dictionaries. Each dictionary will have facial image itself,
extracted area from the original image and confidence score.
"""
return detection.extract_faces(
img_path=img_path,
target_size=target_size,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
grayscale=grayscale,
)
# ---------------------------
# deprecated functions
@deprecated(version="0.0.78", reason="Use DeepFace.extract_faces instead of DeepFace.detectFace")
def detectFace(
img_path: Union[str, np.ndarray],
target_size: tuple = (224, 224),
detector_backend: str = "opencv",
enforce_detection: bool = True,
align: bool = True,
) -> Union[np.ndarray, None]:
"""
Deprecated function. Use extract_faces for same functionality.
This function applies pre-processing stages of a face recognition pipeline
including detection and alignment
Parameters:
img_path: exact image path, numpy array (BGR) or base64 encoded image.
The source image can have many faces, but only the first detected face
is returned.
target_size (tuple): final shape of facial image. black pixels will be
added to resize the image.
detector_backend (string): face detection backends are retinaface, mtcnn,
opencv, ssd or dlib
enforce_detection (boolean): function throws an exception if a face cannot be
detected in the fed image. Set this to False if you do not want an exception
and want to run the function anyway.
align (boolean): alignment according to the eye positions.
Returns:
detected and aligned face as numpy array
"""
logger.warn("Function detectFace is deprecated. Use extract_faces instead.")
face_objs = extract_faces(
img_path=img_path,
target_size=target_size,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
grayscale=False,
)
extracted_face = None
if len(face_objs) > 0:
extracted_face = face_objs[0]["face"]
return extracted_face
# ---------------------------
# main
functions.initialize_folder()
def cli() -> None:
"""
command line interface function will be offered in this block
"""
import fire
fire.Fire()

deepface/basemodels/ArcFace.py

@@ -0,0 +1,157 @@
import os
import gdown
import tensorflow as tf
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.ArcFace")
# pylint: disable=unsubscriptable-object
# --------------------------------
# dependency configuration
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model
from keras.engine import training
from keras.layers import (
ZeroPadding2D,
Input,
Conv2D,
BatchNormalization,
PReLU,
Add,
Dropout,
Flatten,
Dense,
)
else:
from tensorflow.keras.models import Model
from tensorflow.python.keras.engine import training
from tensorflow.keras.layers import (
ZeroPadding2D,
Input,
Conv2D,
BatchNormalization,
PReLU,
Add,
Dropout,
Flatten,
Dense,
)
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/arcface_weights.h5",
) -> Model:
base_model = ResNet34()
inputs = base_model.inputs[0]
arcface_model = base_model.outputs[0]
arcface_model = BatchNormalization(momentum=0.9, epsilon=2e-5)(arcface_model)
arcface_model = Dropout(0.4)(arcface_model)
arcface_model = Flatten()(arcface_model)
arcface_model = Dense(512, activation=None, use_bias=True, kernel_initializer="glorot_normal")(
arcface_model
)
embedding = BatchNormalization(momentum=0.9, epsilon=2e-5, name="embedding", scale=True)(
arcface_model
)
model = Model(inputs, embedding, name=base_model.name)
# ---------------------------------------
# check the availability of pre-trained weights
home = functions.get_deepface_home()
file_name = "arcface_weights.h5"
output = home + "/.deepface/weights/" + file_name
if not os.path.isfile(output):
logger.info(f"{file_name} will be downloaded to {output}")
gdown.download(url, output, quiet=False)
# ---------------------------------------
model.load_weights(output)
return model
def ResNet34() -> Model:
img_input = Input(shape=(112, 112, 3))
x = ZeroPadding2D(padding=1, name="conv1_pad")(img_input)
x = Conv2D(
64, 3, strides=1, use_bias=False, kernel_initializer="glorot_normal", name="conv1_conv"
)(x)
x = BatchNormalization(axis=3, epsilon=2e-5, momentum=0.9, name="conv1_bn")(x)
x = PReLU(shared_axes=[1, 2], name="conv1_prelu")(x)
x = stack_fn(x)
model = training.Model(img_input, x, name="ResNet34")
return model
def block1(x, filters, kernel_size=3, stride=1, conv_shortcut=True, name=None):
bn_axis = 3
if conv_shortcut:
shortcut = Conv2D(
filters,
1,
strides=stride,
use_bias=False,
kernel_initializer="glorot_normal",
name=name + "_0_conv",
)(x)
shortcut = BatchNormalization(
axis=bn_axis, epsilon=2e-5, momentum=0.9, name=name + "_0_bn"
)(shortcut)
else:
shortcut = x
x = BatchNormalization(axis=bn_axis, epsilon=2e-5, momentum=0.9, name=name + "_1_bn")(x)
x = ZeroPadding2D(padding=1, name=name + "_1_pad")(x)
x = Conv2D(
filters,
3,
strides=1,
kernel_initializer="glorot_normal",
use_bias=False,
name=name + "_1_conv",
)(x)
x = BatchNormalization(axis=bn_axis, epsilon=2e-5, momentum=0.9, name=name + "_2_bn")(x)
x = PReLU(shared_axes=[1, 2], name=name + "_1_prelu")(x)
x = ZeroPadding2D(padding=1, name=name + "_2_pad")(x)
x = Conv2D(
filters,
kernel_size,
strides=stride,
kernel_initializer="glorot_normal",
use_bias=False,
name=name + "_2_conv",
)(x)
x = BatchNormalization(axis=bn_axis, epsilon=2e-5, momentum=0.9, name=name + "_3_bn")(x)
x = Add(name=name + "_add")([shortcut, x])
return x
def stack1(x, filters, blocks, stride1=2, name=None):
x = block1(x, filters, stride=stride1, name=name + "_block1")
for i in range(2, blocks + 1):
x = block1(x, filters, conv_shortcut=False, name=name + "_block" + str(i))
return x
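# ResNet-34 layout: 3, 4, 6 and 3 residual blocks with 64, 128, 256 and 512 filters respectively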
def stack_fn(x):
x = stack1(x, 64, 3, name="conv2")
x = stack1(x, 128, 4, name="conv3")
x = stack1(x, 256, 6, name="conv4")
return stack1(x, 512, 3, name="conv5")

deepface/basemodels/DeepID.py

@@ -0,0 +1,84 @@
import os
import gdown
import tensorflow as tf
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.DeepID")
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model
from keras.layers import (
Conv2D,
Activation,
Input,
Add,
MaxPooling2D,
Flatten,
Dense,
Dropout,
)
else:
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (
Conv2D,
Activation,
Input,
Add,
MaxPooling2D,
Flatten,
Dense,
Dropout,
)
# pylint: disable=line-too-long
# -------------------------------------
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/deepid_keras_weights.h5",
) -> Model:
myInput = Input(shape=(55, 47, 3))
x = Conv2D(20, (4, 4), name="Conv1", activation="relu", input_shape=(55, 47, 3))(myInput)
x = MaxPooling2D(pool_size=2, strides=2, name="Pool1")(x)
x = Dropout(rate=0.99, name="D1")(x)
x = Conv2D(40, (3, 3), name="Conv2", activation="relu")(x)
x = MaxPooling2D(pool_size=2, strides=2, name="Pool2")(x)
x = Dropout(rate=0.99, name="D2")(x)
x = Conv2D(60, (3, 3), name="Conv3", activation="relu")(x)
x = MaxPooling2D(pool_size=2, strides=2, name="Pool3")(x)
x = Dropout(rate=0.99, name="D3")(x)
x1 = Flatten()(x)
fc11 = Dense(160, name="fc11")(x1)
x2 = Conv2D(80, (2, 2), name="Conv4", activation="relu")(x)
x2 = Flatten()(x2)
fc12 = Dense(160, name="fc12")(x2)
y = Add()([fc11, fc12])
y = Activation("relu", name="deepid")(y)
model = Model(inputs=[myInput], outputs=y)
# ---------------------------------
home = functions.get_deepface_home()
if not os.path.isfile(home + "/.deepface/weights/deepid_keras_weights.h5"):
logger.info("deepid_keras_weights.h5 will be downloaded...")
output = home + "/.deepface/weights/deepid_keras_weights.h5"
gdown.download(url, output, quiet=False)
model.load_weights(home + "/.deepface/weights/deepid_keras_weights.h5")
return model

deepface/basemodels/DlibResNet.py

@@ -0,0 +1,86 @@
import os
import bz2
import gdown
import numpy as np
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.DlibResNet")
# pylint: disable=too-few-public-methods
class DlibResNet:
def __init__(self):
# dlib is an optional dependency; do not import it at the global level.
try:
import dlib
except ModuleNotFoundError as e:
raise ImportError(
"Dlib is an optional dependency, ensure the library is installed."
"Please install using 'pip install dlib' "
) from e
self.layers = [DlibMetaData()]
# ---------------------
home = functions.get_deepface_home()
weight_file = home + "/.deepface/weights/dlib_face_recognition_resnet_model_v1.dat"
# ---------------------
# download pre-trained model if it does not exist
if not os.path.isfile(weight_file):
logger.info("dlib_face_recognition_resnet_model_v1.dat is going to be downloaded")
file_name = "dlib_face_recognition_resnet_model_v1.dat.bz2"
url = f"http://dlib.net/files/{file_name}"
output = f"{home}/.deepface/weights/{file_name}"
gdown.download(url, output, quiet=False)
zipfile = bz2.BZ2File(output)
data = zipfile.read()
newfilepath = output[:-4] # discard .bz2 extension
with open(newfilepath, "wb") as f:
f.write(data)
# ---------------------
model = dlib.face_recognition_model_v1(weight_file)
self.__model = model
# ---------------------
# return None # classes must return None
def predict(self, img_aligned: np.ndarray) -> np.ndarray:
# functions.detectFace returns 4 dimensional images
if len(img_aligned.shape) == 4:
img_aligned = img_aligned[0]
# functions.detectFace returns bgr images
img_aligned = img_aligned[:, :, ::-1] # bgr to rgb
# deepface.detectFace returns an array in scale of [0, 1]
# but dlib expects in scale of [0, 255]
if img_aligned.max() <= 1:
img_aligned = img_aligned * 255
img_aligned = img_aligned.astype(np.uint8)
model = self.__model
img_representation = model.compute_face_descriptor(img_aligned)
img_representation = np.array(img_representation)
img_representation = np.expand_dims(img_representation, axis=0)
return img_representation
class DlibMetaData:
def __init__(self):
self.input_shape = [[1, 150, 150, 3]]

deepface/basemodels/DlibWrapper.py

@@ -0,0 +1,6 @@
from typing import Any
from deepface.basemodels.DlibResNet import DlibResNet
def loadModel() -> Any:
return DlibResNet()

deepface/basemodels/Facenet.py (file diff suppressed because it is too large)

deepface/basemodels/Facenet512.py

@@ -0,0 +1,40 @@
import os
import gdown
import tensorflow as tf
from deepface.basemodels import Facenet
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.Facenet512")
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model
else:
from tensorflow.keras.models import Model
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/facenet512_weights.h5",
) -> Model:
model = Facenet.InceptionResNetV2(dimension=512)
# -------------------------
home = functions.get_deepface_home()
if not os.path.isfile(home + "/.deepface/weights/facenet512_weights.h5"):
logger.info("facenet512_weights.h5 will be downloaded...")
output = home + "/.deepface/weights/facenet512_weights.h5"
gdown.download(url, output, quiet=False)
# -------------------------
model.load_weights(home + "/.deepface/weights/facenet512_weights.h5")
# -------------------------
return model

deepface/basemodels/FbDeepFace.py

@@ -0,0 +1,78 @@
import os
import zipfile
import gdown
import tensorflow as tf
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.FbDeepFace")
# --------------------------------
# dependency configuration
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model, Sequential
from keras.layers import (
Convolution2D,
LocallyConnected2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
)
else:
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import (
Convolution2D,
LocallyConnected2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
)
# -------------------------------------
# pylint: disable=line-too-long
def loadModel(
url="https://github.com/swghosh/DeepFace/releases/download/weights-vggface2-2d-aligned/VGGFace2_DeepFace_weights_val-0.9034.h5.zip",
) -> Model:
base_model = Sequential()
base_model.add(
Convolution2D(32, (11, 11), activation="relu", name="C1", input_shape=(152, 152, 3))
)
base_model.add(MaxPooling2D(pool_size=3, strides=2, padding="same", name="M2"))
base_model.add(Convolution2D(16, (9, 9), activation="relu", name="C3"))
base_model.add(LocallyConnected2D(16, (9, 9), activation="relu", name="L4"))
base_model.add(LocallyConnected2D(16, (7, 7), strides=2, activation="relu", name="L5"))
base_model.add(LocallyConnected2D(16, (5, 5), activation="relu", name="L6"))
base_model.add(Flatten(name="F0"))
base_model.add(Dense(4096, activation="relu", name="F7"))
base_model.add(Dropout(rate=0.5, name="D0"))
base_model.add(Dense(8631, activation="softmax", name="F8"))
# ---------------------------------
home = functions.get_deepface_home()
if not os.path.isfile(home + "/.deepface/weights/VGGFace2_DeepFace_weights_val-0.9034.h5"):
logger.info("VGGFace2_DeepFace_weights_val-0.9034.h5 will be downloaded...")
output = home + "/.deepface/weights/VGGFace2_DeepFace_weights_val-0.9034.h5.zip"
gdown.download(url, output, quiet=False)
# unzip VGGFace2_DeepFace_weights_val-0.9034.h5.zip
with zipfile.ZipFile(output, "r") as zip_ref:
zip_ref.extractall(home + "/.deepface/weights/")
base_model.load_weights(home + "/.deepface/weights/VGGFace2_DeepFace_weights_val-0.9034.h5")
# drop F8 and D0. F7 is the representation layer.
deepface_model = Model(inputs=base_model.layers[0].input, outputs=base_model.layers[-3].output)
return deepface_model
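# ---------------------------------
# Usage sketch, assuming the zipped weights can be fetched: slicing at
# layers[-3] ends the network at the 4096-unit F7 representation layer.
if __name__ == "__main__":
    deepface_net = loadModel()
    print(deepface_net.output_shape)  # expected: (None, 4096)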

View File

@ -0,0 +1,379 @@
import os
import gdown
import tensorflow as tf
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.OpenFace")
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model
from keras.layers import Conv2D, ZeroPadding2D, Input, concatenate
from keras.layers import Dense, Activation, Lambda, Flatten, BatchNormalization
from keras.layers import MaxPooling2D, AveragePooling2D
from keras import backend as K
else:
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, ZeroPadding2D, Input, concatenate
from tensorflow.keras.layers import Dense, Activation, Lambda, Flatten, BatchNormalization
from tensorflow.keras.layers import MaxPooling2D, AveragePooling2D
from tensorflow.keras import backend as K
# pylint: disable=unnecessary-lambda
# ---------------------------------------
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/openface_weights.h5",
) -> Model:
myInput = Input(shape=(96, 96, 3))
x = ZeroPadding2D(padding=(3, 3), input_shape=(96, 96, 3))(myInput)
x = Conv2D(64, (7, 7), strides=(2, 2), name="conv1")(x)
x = BatchNormalization(axis=3, epsilon=0.00001, name="bn1")(x)
x = Activation("relu")(x)
x = ZeroPadding2D(padding=(1, 1))(x)
x = MaxPooling2D(pool_size=3, strides=2)(x)
x = Lambda(lambda x: tf.nn.lrn(x, alpha=1e-4, beta=0.75), name="lrn_1")(x)
x = Conv2D(64, (1, 1), name="conv2")(x)
x = BatchNormalization(axis=3, epsilon=0.00001, name="bn2")(x)
x = Activation("relu")(x)
x = ZeroPadding2D(padding=(1, 1))(x)
x = Conv2D(192, (3, 3), name="conv3")(x)
x = BatchNormalization(axis=3, epsilon=0.00001, name="bn3")(x)
x = Activation("relu")(x)
x = Lambda(lambda x: tf.nn.lrn(x, alpha=1e-4, beta=0.75), name="lrn_2")(x)
x = ZeroPadding2D(padding=(1, 1))(x)
x = MaxPooling2D(pool_size=3, strides=2)(x)
# Inception3a
inception_3a_3x3 = Conv2D(96, (1, 1), name="inception_3a_3x3_conv1")(x)
inception_3a_3x3 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3a_3x3_bn1")(
inception_3a_3x3
)
inception_3a_3x3 = Activation("relu")(inception_3a_3x3)
inception_3a_3x3 = ZeroPadding2D(padding=(1, 1))(inception_3a_3x3)
inception_3a_3x3 = Conv2D(128, (3, 3), name="inception_3a_3x3_conv2")(inception_3a_3x3)
inception_3a_3x3 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3a_3x3_bn2")(
inception_3a_3x3
)
inception_3a_3x3 = Activation("relu")(inception_3a_3x3)
inception_3a_5x5 = Conv2D(16, (1, 1), name="inception_3a_5x5_conv1")(x)
inception_3a_5x5 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3a_5x5_bn1")(
inception_3a_5x5
)
inception_3a_5x5 = Activation("relu")(inception_3a_5x5)
inception_3a_5x5 = ZeroPadding2D(padding=(2, 2))(inception_3a_5x5)
inception_3a_5x5 = Conv2D(32, (5, 5), name="inception_3a_5x5_conv2")(inception_3a_5x5)
inception_3a_5x5 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3a_5x5_bn2")(
inception_3a_5x5
)
inception_3a_5x5 = Activation("relu")(inception_3a_5x5)
inception_3a_pool = MaxPooling2D(pool_size=3, strides=2)(x)
inception_3a_pool = Conv2D(32, (1, 1), name="inception_3a_pool_conv")(inception_3a_pool)
inception_3a_pool = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3a_pool_bn")(
inception_3a_pool
)
inception_3a_pool = Activation("relu")(inception_3a_pool)
inception_3a_pool = ZeroPadding2D(padding=((3, 4), (3, 4)))(inception_3a_pool)
inception_3a_1x1 = Conv2D(64, (1, 1), name="inception_3a_1x1_conv")(x)
inception_3a_1x1 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3a_1x1_bn")(
inception_3a_1x1
)
inception_3a_1x1 = Activation("relu")(inception_3a_1x1)
inception_3a = concatenate(
[inception_3a_3x3, inception_3a_5x5, inception_3a_pool, inception_3a_1x1], axis=3
)
# Inception3b
inception_3b_3x3 = Conv2D(96, (1, 1), name="inception_3b_3x3_conv1")(inception_3a)
inception_3b_3x3 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3b_3x3_bn1")(
inception_3b_3x3
)
inception_3b_3x3 = Activation("relu")(inception_3b_3x3)
inception_3b_3x3 = ZeroPadding2D(padding=(1, 1))(inception_3b_3x3)
inception_3b_3x3 = Conv2D(128, (3, 3), name="inception_3b_3x3_conv2")(inception_3b_3x3)
inception_3b_3x3 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3b_3x3_bn2")(
inception_3b_3x3
)
inception_3b_3x3 = Activation("relu")(inception_3b_3x3)
inception_3b_5x5 = Conv2D(32, (1, 1), name="inception_3b_5x5_conv1")(inception_3a)
inception_3b_5x5 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3b_5x5_bn1")(
inception_3b_5x5
)
inception_3b_5x5 = Activation("relu")(inception_3b_5x5)
inception_3b_5x5 = ZeroPadding2D(padding=(2, 2))(inception_3b_5x5)
inception_3b_5x5 = Conv2D(64, (5, 5), name="inception_3b_5x5_conv2")(inception_3b_5x5)
inception_3b_5x5 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3b_5x5_bn2")(
inception_3b_5x5
)
inception_3b_5x5 = Activation("relu")(inception_3b_5x5)
inception_3b_pool = Lambda(lambda x: x**2, name="power2_3b")(inception_3a)
inception_3b_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3))(inception_3b_pool)
inception_3b_pool = Lambda(lambda x: x * 9, name="mult9_3b")(inception_3b_pool)
inception_3b_pool = Lambda(lambda x: K.sqrt(x), name="sqrt_3b")(inception_3b_pool)
inception_3b_pool = Conv2D(64, (1, 1), name="inception_3b_pool_conv")(inception_3b_pool)
inception_3b_pool = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3b_pool_bn")(
inception_3b_pool
)
inception_3b_pool = Activation("relu")(inception_3b_pool)
inception_3b_pool = ZeroPadding2D(padding=(4, 4))(inception_3b_pool)
inception_3b_1x1 = Conv2D(64, (1, 1), name="inception_3b_1x1_conv")(inception_3a)
inception_3b_1x1 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3b_1x1_bn")(
inception_3b_1x1
)
inception_3b_1x1 = Activation("relu")(inception_3b_1x1)
inception_3b = concatenate(
[inception_3b_3x3, inception_3b_5x5, inception_3b_pool, inception_3b_1x1], axis=3
)
# Inception3c
inception_3c_3x3 = Conv2D(128, (1, 1), strides=(1, 1), name="inception_3c_3x3_conv1")(
inception_3b
)
inception_3c_3x3 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3c_3x3_bn1")(
inception_3c_3x3
)
inception_3c_3x3 = Activation("relu")(inception_3c_3x3)
inception_3c_3x3 = ZeroPadding2D(padding=(1, 1))(inception_3c_3x3)
inception_3c_3x3 = Conv2D(256, (3, 3), strides=(2, 2), name="inception_3c_3x3_conv2")(
inception_3c_3x3
)
inception_3c_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_3c_3x3_bn2"
)(inception_3c_3x3)
inception_3c_3x3 = Activation("relu")(inception_3c_3x3)
inception_3c_5x5 = Conv2D(32, (1, 1), strides=(1, 1), name="inception_3c_5x5_conv1")(
inception_3b
)
inception_3c_5x5 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_3c_5x5_bn1")(
inception_3c_5x5
)
inception_3c_5x5 = Activation("relu")(inception_3c_5x5)
inception_3c_5x5 = ZeroPadding2D(padding=(2, 2))(inception_3c_5x5)
inception_3c_5x5 = Conv2D(64, (5, 5), strides=(2, 2), name="inception_3c_5x5_conv2")(
inception_3c_5x5
)
inception_3c_5x5 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_3c_5x5_bn2"
)(inception_3c_5x5)
inception_3c_5x5 = Activation("relu")(inception_3c_5x5)
inception_3c_pool = MaxPooling2D(pool_size=3, strides=2)(inception_3b)
inception_3c_pool = ZeroPadding2D(padding=((0, 1), (0, 1)))(inception_3c_pool)
inception_3c = concatenate([inception_3c_3x3, inception_3c_5x5, inception_3c_pool], axis=3)
# inception 4a
inception_4a_3x3 = Conv2D(96, (1, 1), strides=(1, 1), name="inception_4a_3x3_conv1")(
inception_3c
)
inception_4a_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4a_3x3_bn1"
)(inception_4a_3x3)
inception_4a_3x3 = Activation("relu")(inception_4a_3x3)
inception_4a_3x3 = ZeroPadding2D(padding=(1, 1))(inception_4a_3x3)
inception_4a_3x3 = Conv2D(192, (3, 3), strides=(1, 1), name="inception_4a_3x3_conv2")(
inception_4a_3x3
)
inception_4a_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4a_3x3_bn2"
)(inception_4a_3x3)
inception_4a_3x3 = Activation("relu")(inception_4a_3x3)
inception_4a_5x5 = Conv2D(32, (1, 1), strides=(1, 1), name="inception_4a_5x5_conv1")(
inception_3c
)
inception_4a_5x5 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_4a_5x5_bn1")(
inception_4a_5x5
)
inception_4a_5x5 = Activation("relu")(inception_4a_5x5)
inception_4a_5x5 = ZeroPadding2D(padding=(2, 2))(inception_4a_5x5)
inception_4a_5x5 = Conv2D(64, (5, 5), strides=(1, 1), name="inception_4a_5x5_conv2")(
inception_4a_5x5
)
inception_4a_5x5 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4a_5x5_bn2"
)(inception_4a_5x5)
inception_4a_5x5 = Activation("relu")(inception_4a_5x5)
inception_4a_pool = Lambda(lambda x: x**2, name="power2_4a")(inception_3c)
inception_4a_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3))(inception_4a_pool)
inception_4a_pool = Lambda(lambda x: x * 9, name="mult9_4a")(inception_4a_pool)
inception_4a_pool = Lambda(lambda x: K.sqrt(x), name="sqrt_4a")(inception_4a_pool)
inception_4a_pool = Conv2D(128, (1, 1), strides=(1, 1), name="inception_4a_pool_conv")(
inception_4a_pool
)
inception_4a_pool = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4a_pool_bn"
)(inception_4a_pool)
inception_4a_pool = Activation("relu")(inception_4a_pool)
inception_4a_pool = ZeroPadding2D(padding=(2, 2))(inception_4a_pool)
inception_4a_1x1 = Conv2D(256, (1, 1), strides=(1, 1), name="inception_4a_1x1_conv")(
inception_3c
)
inception_4a_1x1 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_4a_1x1_bn")(
inception_4a_1x1
)
inception_4a_1x1 = Activation("relu")(inception_4a_1x1)
inception_4a = concatenate(
[inception_4a_3x3, inception_4a_5x5, inception_4a_pool, inception_4a_1x1], axis=3
)
# inception4e
inception_4e_3x3 = Conv2D(160, (1, 1), strides=(1, 1), name="inception_4e_3x3_conv1")(
inception_4a
)
inception_4e_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4e_3x3_bn1"
)(inception_4e_3x3)
inception_4e_3x3 = Activation("relu")(inception_4e_3x3)
inception_4e_3x3 = ZeroPadding2D(padding=(1, 1))(inception_4e_3x3)
inception_4e_3x3 = Conv2D(256, (3, 3), strides=(2, 2), name="inception_4e_3x3_conv2")(
inception_4e_3x3
)
inception_4e_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4e_3x3_bn2"
)(inception_4e_3x3)
inception_4e_3x3 = Activation("relu")(inception_4e_3x3)
inception_4e_5x5 = Conv2D(64, (1, 1), strides=(1, 1), name="inception_4e_5x5_conv1")(
inception_4a
)
inception_4e_5x5 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4e_5x5_bn1"
)(inception_4e_5x5)
inception_4e_5x5 = Activation("relu")(inception_4e_5x5)
inception_4e_5x5 = ZeroPadding2D(padding=(2, 2))(inception_4e_5x5)
inception_4e_5x5 = Conv2D(128, (5, 5), strides=(2, 2), name="inception_4e_5x5_conv2")(
inception_4e_5x5
)
inception_4e_5x5 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_4e_5x5_bn2"
)(inception_4e_5x5)
inception_4e_5x5 = Activation("relu")(inception_4e_5x5)
inception_4e_pool = MaxPooling2D(pool_size=3, strides=2)(inception_4a)
inception_4e_pool = ZeroPadding2D(padding=((0, 1), (0, 1)))(inception_4e_pool)
inception_4e = concatenate([inception_4e_3x3, inception_4e_5x5, inception_4e_pool], axis=3)
# inception5a
inception_5a_3x3 = Conv2D(96, (1, 1), strides=(1, 1), name="inception_5a_3x3_conv1")(
inception_4e
)
inception_5a_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_5a_3x3_bn1"
)(inception_5a_3x3)
inception_5a_3x3 = Activation("relu")(inception_5a_3x3)
inception_5a_3x3 = ZeroPadding2D(padding=(1, 1))(inception_5a_3x3)
inception_5a_3x3 = Conv2D(384, (3, 3), strides=(1, 1), name="inception_5a_3x3_conv2")(
inception_5a_3x3
)
inception_5a_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_5a_3x3_bn2"
)(inception_5a_3x3)
inception_5a_3x3 = Activation("relu")(inception_5a_3x3)
inception_5a_pool = Lambda(lambda x: x**2, name="power2_5a")(inception_4e)
inception_5a_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3))(inception_5a_pool)
inception_5a_pool = Lambda(lambda x: x * 9, name="mult9_5a")(inception_5a_pool)
inception_5a_pool = Lambda(lambda x: K.sqrt(x), name="sqrt_5a")(inception_5a_pool)
inception_5a_pool = Conv2D(96, (1, 1), strides=(1, 1), name="inception_5a_pool_conv")(
inception_5a_pool
)
inception_5a_pool = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_5a_pool_bn"
)(inception_5a_pool)
inception_5a_pool = Activation("relu")(inception_5a_pool)
inception_5a_pool = ZeroPadding2D(padding=(1, 1))(inception_5a_pool)
inception_5a_1x1 = Conv2D(256, (1, 1), strides=(1, 1), name="inception_5a_1x1_conv")(
inception_4e
)
inception_5a_1x1 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_5a_1x1_bn")(
inception_5a_1x1
)
inception_5a_1x1 = Activation("relu")(inception_5a_1x1)
inception_5a = concatenate([inception_5a_3x3, inception_5a_pool, inception_5a_1x1], axis=3)
# inception_5b
inception_5b_3x3 = Conv2D(96, (1, 1), strides=(1, 1), name="inception_5b_3x3_conv1")(
inception_5a
)
inception_5b_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_5b_3x3_bn1"
)(inception_5b_3x3)
inception_5b_3x3 = Activation("relu")(inception_5b_3x3)
inception_5b_3x3 = ZeroPadding2D(padding=(1, 1))(inception_5b_3x3)
inception_5b_3x3 = Conv2D(384, (3, 3), strides=(1, 1), name="inception_5b_3x3_conv2")(
inception_5b_3x3
)
inception_5b_3x3 = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_5b_3x3_bn2"
)(inception_5b_3x3)
inception_5b_3x3 = Activation("relu")(inception_5b_3x3)
inception_5b_pool = MaxPooling2D(pool_size=3, strides=2)(inception_5a)
inception_5b_pool = Conv2D(96, (1, 1), strides=(1, 1), name="inception_5b_pool_conv")(
inception_5b_pool
)
inception_5b_pool = BatchNormalization(
axis=3, epsilon=0.00001, name="inception_5b_pool_bn"
)(inception_5b_pool)
inception_5b_pool = Activation("relu")(inception_5b_pool)
inception_5b_pool = ZeroPadding2D(padding=(1, 1))(inception_5b_pool)
inception_5b_1x1 = Conv2D(256, (1, 1), strides=(1, 1), name="inception_5b_1x1_conv")(
inception_5a
)
inception_5b_1x1 = BatchNormalization(axis=3, epsilon=0.00001, name="inception_5b_1x1_bn")(
inception_5b_1x1
)
inception_5b_1x1 = Activation("relu")(inception_5b_1x1)
inception_5b = concatenate([inception_5b_3x3, inception_5b_pool, inception_5b_1x1], axis=3)
av_pool = AveragePooling2D(pool_size=(3, 3), strides=(1, 1))(inception_5b)
reshape_layer = Flatten()(av_pool)
dense_layer = Dense(128, name="dense_layer")(reshape_layer)
norm_layer = Lambda(lambda x: K.l2_normalize(x, axis=1), name="norm_layer")(dense_layer)
# Final Model
model = Model(inputs=[myInput], outputs=norm_layer)
# -----------------------------------
home = functions.get_deepface_home()
if not os.path.isfile(home + "/.deepface/weights/openface_weights.h5"):
logger.info("openface_weights.h5 will be downloaded...")
output = home + "/.deepface/weights/openface_weights.h5"
gdown.download(url, output, quiet=False)
# -----------------------------------
model.load_weights(home + "/.deepface/weights/openface_weights.h5")
# -----------------------------------
return model
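# -----------------------------------
# Usage sketch, assuming the weights download succeeds: the final Lambda layer
# L2-normalizes the 128-d embedding, so its norm should be ~1 for any input.
if __name__ == "__main__":
    import numpy as np

    openface = loadModel()
    probe = np.random.rand(1, 96, 96, 3).astype("float32")  # stand-in for a real aligned face
    print(np.linalg.norm(openface.predict(probe)))  # expected: ~1.0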

View File

@ -0,0 +1,65 @@
import os
from typing import Any
import numpy as np
import cv2 as cv
import gdown
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.SFace")
# pylint: disable=line-too-long, too-few-public-methods
class _Layer:
input_shape = (None, 112, 112, 3)
output_shape = (None, 1, 128)
class SFaceModel:
def __init__(self, model_path):
try:
self.model = cv.FaceRecognizerSF.create(
model=model_path, config="", backend_id=0, target_id=0
)
except Exception as err:
raise ValueError(
"Exception while calling opencv.FaceRecognizerSF module."
+ "This is an optional dependency."
+ "You can install it as pip install opencv-contrib-python."
) from err
self.layers = [_Layer()]
def predict(self, image: np.ndarray) -> np.ndarray:
# Preprocess
input_blob = (image[0] * 255).astype(
np.uint8
)  # restore the [0, 255] uint8 format that the onnx model expects
# Forward
embeddings = self.model.feature(input_blob)
return embeddings
def load_model(
url="https://github.com/opencv/opencv_zoo/raw/main/models/face_recognition_sface/face_recognition_sface_2021dec.onnx",
) -> Any:
home = functions.get_deepface_home()
file_name = home + "/.deepface/weights/face_recognition_sface_2021dec.onnx"
if not os.path.isfile(file_name):
logger.info("sface weights will be downloaded...")
gdown.download(url, file_name, quiet=False)
model = SFaceModel(model_path=file_name)
return model
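# Usage sketch, assuming opencv-contrib-python and the onnx weights are available:
# predict expects a 4-d batch scaled to [0, 1], as functions.extract_faces yields.
if __name__ == "__main__":
    probe = np.random.rand(1, 112, 112, 3).astype(np.float32)  # stand-in for a real aligned face
    print(load_model().predict(probe).shape)  # expected: (1, 128)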

View File

@ -0,0 +1,119 @@
import os
import gdown
import tensorflow as tf
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="basemodels.VGGFace")
# ---------------------------------------
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model, Sequential
from keras.layers import (
Convolution2D,
ZeroPadding2D,
MaxPooling2D,
Flatten,
Dropout,
Activation,
Lambda,
)
from keras import backend as K
else:
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import (
Convolution2D,
ZeroPadding2D,
MaxPooling2D,
Flatten,
Dropout,
Activation,
Lambda,
)
from tensorflow.keras import backend as K
# ---------------------------------------
def baseModel() -> Sequential:
model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=(224, 224, 3)))
model.add(Convolution2D(64, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(64, (3, 3), activation="relu"))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, (3, 3), activation="relu"))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, (3, 3), activation="relu"))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, (3, 3), activation="relu"))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, (3, 3), activation="relu"))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, (3, 3), activation="relu"))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Convolution2D(4096, (7, 7), activation="relu"))
model.add(Dropout(0.5))
model.add(Convolution2D(4096, (1, 1), activation="relu"))
model.add(Dropout(0.5))
model.add(Convolution2D(2622, (1, 1)))
model.add(Flatten())
model.add(Activation("softmax"))
return model
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/vgg_face_weights.h5",
) -> Model:
model = baseModel()
home = functions.get_deepface_home()
output = home + "/.deepface/weights/vgg_face_weights.h5"
if not os.path.isfile(output):
logger.info("vgg_face_weights.h5 will be downloaded...")
gdown.download(url, output, quiet=False)
model.load_weights(output)
# 2622d dimensional model
# vgg_face_descriptor = Model(inputs=model.layers[0].input, outputs=model.layers[-2].output)
# 4096 dimensional model offers a 6% to 14% increase in accuracy
# - softmax causes underfitting
# - added normalization layer to avoid underfitting with euclidean
# as described here: https://github.com/serengil/deepface/issues/944
base_model_output = Flatten()(model.layers[-5].output)
base_model_output = Lambda(lambda x: K.l2_normalize(x, axis=1), name="norm_layer")(
base_model_output
)
vgg_face_descriptor = Model(inputs=model.input, outputs=base_model_output)
return vgg_face_descriptor
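# Usage sketch, assuming the weights download succeeds: because of the trailing
# normalization layer, every 4096-d descriptor has unit euclidean length.
if __name__ == "__main__":
    import numpy as np

    descriptor = loadModel()
    probe = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for a real aligned face
    print(np.linalg.norm(descriptor.predict(probe)))  # expected: ~1.0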

View File

@ -0,0 +1,62 @@
from typing import Union
import numpy as np
def findCosineDistance(
source_representation: Union[np.ndarray, list], test_representation: Union[np.ndarray, list]
) -> np.float64:
if isinstance(source_representation, list):
source_representation = np.array(source_representation)
if isinstance(test_representation, list):
test_representation = np.array(test_representation)
a = np.matmul(np.transpose(source_representation), test_representation)
b = np.sum(np.multiply(source_representation, source_representation))
c = np.sum(np.multiply(test_representation, test_representation))
return 1 - (a / (np.sqrt(b) * np.sqrt(c)))
def findEuclideanDistance(
source_representation: Union[np.ndarray, list], test_representation: Union[np.ndarray, list]
) -> np.float64:
if isinstance(source_representation, list):
source_representation = np.array(source_representation)
if isinstance(test_representation, list):
test_representation = np.array(test_representation)
euclidean_distance = source_representation - test_representation
euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
euclidean_distance = np.sqrt(euclidean_distance)
return euclidean_distance
def l2_normalize(x: np.ndarray) -> np.ndarray:
return x / np.sqrt(np.sum(np.multiply(x, x)))
def findThreshold(model_name: str, distance_metric: str) -> float:
base_threshold = {"cosine": 0.40, "euclidean": 0.55, "euclidean_l2": 0.75}
thresholds = {
# "VGG-Face": {"cosine": 0.40, "euclidean": 0.60, "euclidean_l2": 0.86}, # 2622d
"VGG-Face": {
"cosine": 0.68,
"euclidean": 1.17,
"euclidean_l2": 1.17,
}, # 4096d - tuned with LFW
"Facenet": {"cosine": 0.40, "euclidean": 10, "euclidean_l2": 0.80},
"Facenet512": {"cosine": 0.30, "euclidean": 23.56, "euclidean_l2": 1.04},
"ArcFace": {"cosine": 0.68, "euclidean": 4.15, "euclidean_l2": 1.13},
"Dlib": {"cosine": 0.07, "euclidean": 0.6, "euclidean_l2": 0.4},
"SFace": {"cosine": 0.593, "euclidean": 10.734, "euclidean_l2": 1.055},
"OpenFace": {"cosine": 0.10, "euclidean": 0.55, "euclidean_l2": 0.55},
"DeepFace": {"cosine": 0.23, "euclidean": 64, "euclidean_l2": 0.64},
"DeepID": {"cosine": 0.015, "euclidean": 45, "euclidean_l2": 0.17},
}
threshold = thresholds.get(model_name, base_threshold).get(distance_metric, 0.4)
return threshold
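# Usage sketch with made-up embeddings: verification compares a pairwise
# distance against the tuned threshold for the chosen model and metric.
if __name__ == "__main__":
    source = np.array([0.1, 0.3, 0.5])
    test = np.array([0.09, 0.31, 0.52])
    distance = findCosineDistance(source, test)
    threshold = findThreshold("VGG-Face", "cosine")
    print(f"distance={distance:.4f} -> same person: {distance <= threshold}")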

View File

@ -0,0 +1,397 @@
import os
from typing import Union, Tuple
import base64
from pathlib import Path
# 3rd party dependencies
from PIL import Image
import requests
import numpy as np
import cv2
import tensorflow as tf
from deprecated import deprecated
# package dependencies
from deepface.detectors import FaceDetector
from deepface.commons.logger import Logger
logger = Logger(module="commons.functions")
# pylint: disable=no-else-raise
# --------------------------------------------------
# configurations of dependencies
tf_version = tf.__version__
tf_major_version = int(tf_version.split(".", maxsplit=1)[0])
tf_minor_version = int(tf_version.split(".")[1])
if tf_major_version == 1:
from keras.preprocessing import image
elif tf_major_version == 2:
from tensorflow.keras.preprocessing import image
# --------------------------------------------------
def initialize_folder() -> None:
"""Initialize the folder for storing weights and models.
Raises:
OSError: if the folder cannot be created.
"""
home = get_deepface_home()
deepFaceHomePath = home + "/.deepface"
weightsPath = deepFaceHomePath + "/weights"
if not os.path.exists(deepFaceHomePath):
os.makedirs(deepFaceHomePath, exist_ok=True)
logger.info(f"Directory {home}/.deepface created")
if not os.path.exists(weightsPath):
os.makedirs(weightsPath, exist_ok=True)
logger.info(f"Directory {home}/.deepface/weights created")
def get_deepface_home() -> str:
"""Get the home directory for storing weights and models.
Returns:
str: the home directory.
"""
return str(os.getenv("DEEPFACE_HOME", default=str(Path.home())))
# --------------------------------------------------
def loadBase64Img(uri: str) -> np.ndarray:
"""Load image from base64 string.
Args:
uri: a base64 string.
Returns:
numpy array: the loaded image.
"""
encoded_data = uri.split(",")[1]
nparr = np.frombuffer(base64.b64decode(encoded_data), np.uint8)  # np.fromstring is deprecated
img_bgr = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
# img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
return img_bgr
def load_image(img: Union[str, np.ndarray]) -> Tuple[np.ndarray, str]:
"""
Load image from path, url, base64 or numpy array.
Args:
img: a path, url, base64 or numpy array.
Returns:
image (numpy array): the loaded image in BGR format
image name (str): image name itself
"""
# The image is already a numpy array
if isinstance(img, np.ndarray):
return img, "numpy array"
if isinstance(img, Path):
img = str(img)
if not isinstance(img, str):
raise ValueError(f"img must be numpy array or str but it is {type(img)}")
# The image is a base64 string
if img.startswith("data:image/"):
return loadBase64Img(img), "base64 encoded string"
# The image is a url
if img.startswith("http"):
return (
# PIL has no "BGR" mode; load as RGB and reverse the channel order
np.array(Image.open(requests.get(img, stream=True, timeout=60).raw).convert("RGB"))[:, :, ::-1],
# return url as image name
img,
)
# The image is a path
if not os.path.isfile(img):
raise ValueError(f"Confirm that {img} exists")
# image must be a file on the system then
# image name must have english characters
if not img.isascii():
raise ValueError(f"Input image path must not contain non-ASCII characters - {img}")
img_obj_bgr = cv2.imread(img)
# img_obj_rgb = cv2.cvtColor(img_obj_bgr, cv2.COLOR_BGR2RGB)
return img_obj_bgr, img
# --------------------------------------------------
def extract_faces(
img: Union[str, np.ndarray],
target_size: tuple = (224, 224),
detector_backend: str = "opencv",
grayscale: bool = False,
enforce_detection: bool = True,
align: bool = True,
) -> list:
"""Extract faces from an image.
Args:
img: a path, url, base64 or numpy array.
target_size (tuple, optional): the target size of the extracted faces.
Defaults to (224, 224).
detector_backend (str, optional): the face detector backend. Defaults to "opencv".
grayscale (bool, optional): whether to convert the extracted faces to grayscale.
Defaults to False.
enforce_detection (bool, optional): whether to enforce face detection. Defaults to True.
align (bool, optional): whether to align the extracted faces. Defaults to True.
Raises:
ValueError: if face could not be detected and enforce_detection is True.
Returns:
list: a list of extracted faces.
"""
# this will store a list of the face image itself (numpy), its region and confidence
extracted_faces = []
# img might be path, base64 or numpy array. Convert it to numpy whatever it is.
img, img_name = load_image(img)
img_region = [0, 0, img.shape[1], img.shape[0]]
if detector_backend == "skip":
face_objs = [(img, img_region, 0)]
else:
face_detector = FaceDetector.build_model(detector_backend)
face_objs = FaceDetector.detect_faces(face_detector, detector_backend, img, align)
# in case of no face found
if len(face_objs) == 0 and enforce_detection is True:
if img_name is not None:
raise ValueError(
f"Face could not be detected in {img_name}."
"Please confirm that the picture is a face photo "
"or consider to set enforce_detection param to False."
)
else:
raise ValueError(
"Face could not be detected. Please confirm that the picture is a face photo "
"or consider to set enforce_detection param to False."
)
if len(face_objs) == 0 and enforce_detection is False:
face_objs = [(img, img_region, 0)]
for current_img, current_region, confidence in face_objs:
if current_img.shape[0] > 0 and current_img.shape[1] > 0:
if grayscale is True:
current_img = cv2.cvtColor(current_img, cv2.COLOR_BGR2GRAY)
# resize and padding
factor_0 = target_size[0] / current_img.shape[0]
factor_1 = target_size[1] / current_img.shape[1]
factor = min(factor_0, factor_1)
dsize = (
int(current_img.shape[1] * factor),
int(current_img.shape[0] * factor),
)
current_img = cv2.resize(current_img, dsize)
diff_0 = target_size[0] - current_img.shape[0]
diff_1 = target_size[1] - current_img.shape[1]
if grayscale is False:
# Put the base image in the middle of the padded image
current_img = np.pad(
current_img,
(
(diff_0 // 2, diff_0 - diff_0 // 2),
(diff_1 // 2, diff_1 - diff_1 // 2),
(0, 0),
),
"constant",
)
else:
current_img = np.pad(
current_img,
(
(diff_0 // 2, diff_0 - diff_0 // 2),
(diff_1 // 2, diff_1 - diff_1 // 2),
),
"constant",
)
# double check: if the resized image still does not match the target size
if current_img.shape[0:2] != target_size:
current_img = cv2.resize(current_img, target_size)
# normalizing the image pixels
img_pixels = image.img_to_array(current_img)  # converts to a float32 numpy array
img_pixels = np.expand_dims(img_pixels, axis=0)
img_pixels /= 255 # normalize input in [0, 1]
# int cast is for the exception - object of type 'float32' is not JSON serializable
region_obj = {
"x": int(current_region[0]),
"y": int(current_region[1]),
"w": int(current_region[2]),
"h": int(current_region[3]),
}
extracted_face = [img_pixels, region_obj, confidence]
extracted_faces.append(extracted_face)
if len(extracted_faces) == 0 and enforce_detection is True:
raise ValueError(
f"Detected face shape is {img.shape}. Consider setting the enforce_detection arg to False."
)
return extracted_faces
def normalize_input(img: np.ndarray, normalization: str = "base") -> np.ndarray:
"""Normalize input image.
Args:
img (numpy array): the input image.
normalization (str, optional): the normalization technique. Defaults to "base",
for no normalization.
Returns:
numpy array: the normalized image.
"""
# issue 131 declares that some normalization techniques improves the accuracy
if normalization == "base":
return img
# @trevorgribble and @davedgd contributed this feature
# restore input in scale of [0, 255] because it was normalized in scale of
# [0, 1] in preprocess_face
img *= 255
if normalization == "raw":
pass # return just restored pixels
elif normalization == "Facenet":
mean, std = img.mean(), img.std()
img = (img - mean) / std
elif normalization == "Facenet2018":
# simply / 127.5 - 1 (similar to facenet 2018 model preprocessing step as @iamrishab posted)
img /= 127.5
img -= 1
elif normalization == "VGGFace":
# mean subtraction based on VGGFace1 training data
img[..., 0] -= 93.5940
img[..., 1] -= 104.7624
img[..., 2] -= 129.1863
elif normalization == "VGGFace2":
# mean subtraction based on VGGFace2 training data
img[..., 0] -= 91.4953
img[..., 1] -= 103.8827
img[..., 2] -= 131.0912
elif normalization == "ArcFace":
# Reference study: The faces are cropped and resized to 112×112,
# and each pixel (ranged between [0, 255]) in RGB images is normalised
# by subtracting 127.5 then divided by 128.
img -= 127.5
img /= 128
else:
raise ValueError(f"unimplemented normalization type - {normalization}")
return img
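# Worked example for the rescaling above: under "Facenet2018", a pixel that was
# 0.5 in [0, 1] is restored to 127.5, then mapped to 127.5 / 127.5 - 1 = 0.0,
# so inputs land in [-1, 1].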
def find_target_size(model_name: str) -> tuple:
"""Find the target size of the model.
Args:
model_name (str): the model name.
Returns:
tuple: the target size.
"""
target_sizes = {
"VGG-Face": (224, 224),
"Facenet": (160, 160),
"Facenet512": (160, 160),
"OpenFace": (96, 96),
"DeepFace": (152, 152),
"DeepID": (47, 55),
"Dlib": (150, 150),
"ArcFace": (112, 112),
"SFace": (112, 112),
}
target_size = target_sizes.get(model_name)
if target_size is None:
raise ValueError(f"unimplemented model name - {model_name}")
return target_size
# ---------------------------------------------------
# deprecated functions
@deprecated(version="0.0.78", reason="Use extract_faces instead of preprocess_face")
def preprocess_face(
img: Union[str, np.ndarray],
target_size=(224, 224),
detector_backend="opencv",
grayscale=False,
enforce_detection=True,
align=True,
) -> Union[np.ndarray, None]:
"""
Preprocess only one face
Args:
img (str or numpy): the input image.
target_size (tuple, optional): the target size. Defaults to (224, 224).
detector_backend (str, optional): the detector backend. Defaults to "opencv".
grayscale (bool, optional): whether to convert to grayscale. Defaults to False.
enforce_detection (bool, optional): whether to enforce face detection. Defaults to True.
align (bool, optional): whether to align the face. Defaults to True.
Returns:
loaded image (numpy): the preprocessed face.
Raises:
ValueError: if face is not detected and enforce_detection is True.
Deprecated:
0.0.78: Use extract_faces instead of preprocess_face.
"""
logger.warn("Function preprocess_face is deprecated. Use extract_faces instead.")
result = None
img_objs = extract_faces(
img=img,
target_size=target_size,
detector_backend=detector_backend,
grayscale=grayscale,
enforce_detection=enforce_detection,
align=align,
)
if len(img_objs) > 0:
result, _, _ = img_objs[0]
# discard expanded dimension
if len(result.shape) == 4:
result = result[0]
return result
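# ---------------------------------------------------
# Usage sketch, assuming a sample photo exists at the hypothetical path below:
# each returned item is a (1, h, w, 3) tensor scaled to [0, 1], a region dict
# and a confidence score.
if __name__ == "__main__":
    face_objs = extract_faces(img="dataset/img1.jpg", target_size=find_target_size("VGG-Face"))
    for img_pixels, region, confidence in face_objs:
        print(img_pixels.shape, region, confidence)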

View File

@ -0,0 +1,41 @@
import os
import logging
from datetime import datetime
# pylint: disable=broad-except
class Logger:
def __init__(self, module=None):
self.module = module
log_level = os.environ.get("DEEPFACE_LOG_LEVEL", str(logging.INFO))
try:
self.log_level = int(log_level)
except Exception as err:
self.dump_log(
f"Exception while parsing $DEEPFACE_LOG_LEVEL."
f"Expected int but it is {log_level} ({str(err)})."
"Setting app log level to info."
)
self.log_level = logging.INFO
def info(self, message):
if self.log_level <= logging.INFO:
self.dump_log(f"{message}")
def debug(self, message):
if self.log_level <= logging.DEBUG:
self.dump_log(f"🕷️ {message}")
def warn(self, message):
if self.log_level <= logging.WARNING:
self.dump_log(f"⚠️ {message}")
def error(self, message):
if self.log_level <= logging.ERROR:
self.dump_log(f"🔴 {message}")
def critical(self, message):
if self.log_level <= logging.CRITICAL:
self.dump_log(f"💥 {message}")
def dump_log(self, message):
print(f"{str(datetime.now())[2:-7]} - {message}")

View File

@ -0,0 +1,108 @@
import os
import bz2
import gdown
import numpy as np
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="detectors.DlibWrapper")
def build_model() -> dict:
"""
Build a dlib hog face detector model
Returns:
model (Any)
"""
home = functions.get_deepface_home()
# this is not a must dependency. do not import it in the global level.
try:
import dlib
except ModuleNotFoundError as e:
raise ImportError(
"Dlib is an optional detector, ensure the library is installed."
"Please install using 'pip install dlib' "
) from e
# check required file exists in the home/.deepface/weights folder
if not os.path.isfile(home + "/.deepface/weights/shape_predictor_5_face_landmarks.dat"):
file_name = "shape_predictor_5_face_landmarks.dat.bz2"
logger.info(f"{file_name} is going to be downloaded")
url = f"http://dlib.net/files/{file_name}"
output = f"{home}/.deepface/weights/{file_name}"
gdown.download(url, output, quiet=False)
bz2_file = bz2.BZ2File(output)  # avoid shadowing the stdlib zipfile module
data = bz2_file.read()
newfilepath = output[:-4] # discard .bz2 extension
with open(newfilepath, "wb") as f:
f.write(data)
face_detector = dlib.get_frontal_face_detector()
sp = dlib.shape_predictor(home + "/.deepface/weights/shape_predictor_5_face_landmarks.dat")
detector = {}
detector["face_detector"] = face_detector
detector["sp"] = sp
return detector
def detect_face(detector: dict, img: np.ndarray, align: bool = True) -> list:
"""
Detect and align face with dlib
Args:
detector (dict): dlib face detector and shape predictor objects
img (np.ndarray): pre-loaded image
align (bool): default is true
Returns:
list of detected and aligned faces
"""
# this is not a must dependency. do not import it in the global level.
try:
import dlib
except ModuleNotFoundError as e:
raise ImportError(
"Dlib is an optional detector, ensure the library is installed."
"Please install using 'pip install dlib' "
) from e
resp = []
sp = detector["sp"]
detected_face = None
img_region = [0, 0, img.shape[1], img.shape[0]]
face_detector = detector["face_detector"]
# note that, by design, dlib's fhog face detector scores are >0 but not capped at 1
detections, scores, _ = face_detector.run(img, 1)
if len(detections) > 0:
for idx, d in enumerate(detections):
left = d.left()
right = d.right()
top = d.top()
bottom = d.bottom()
# detected_face = img[top:bottom, left:right]
detected_face = img[
max(0, top) : min(bottom, img.shape[0]), max(0, left) : min(right, img.shape[1])
]
img_region = [left, top, right - left, bottom - top]
confidence = scores[idx]
if align:
img_shape = sp(img, detections[idx])
detected_face = dlib.get_face_chip(img, img_shape, size=detected_face.shape[0])
resp.append((detected_face, img_region, confidence))
return resp
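# Usage sketch, assuming dlib is installed and a sample image exists at the
# hypothetical path below.
if __name__ == "__main__":
    import cv2

    detector = build_model()
    img = cv2.imread("dataset/img1.jpg")  # hypothetical sample image
    for face, region, confidence in detect_face(detector, img, align=True):
        print(region, confidence)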

View File

@ -0,0 +1,149 @@
from typing import Any, Union
from PIL import Image
import numpy as np
from deepface.detectors import (
OpenCvWrapper,
SsdWrapper,
DlibWrapper,
MtcnnWrapper,
RetinaFaceWrapper,
MediapipeWrapper,
YoloWrapper,
YunetWrapper,
FastMtcnnWrapper,
)
def build_model(detector_backend: str) -> Any:
"""
Build a face detector model
Args:
detector_backend (str): backend detector name
Returns:
built detector (Any)
"""
global face_detector_obj # singleton design pattern
backends = {
"opencv": OpenCvWrapper.build_model,
"ssd": SsdWrapper.build_model,
"dlib": DlibWrapper.build_model,
"mtcnn": MtcnnWrapper.build_model,
"retinaface": RetinaFaceWrapper.build_model,
"mediapipe": MediapipeWrapper.build_model,
"yolov8": YoloWrapper.build_model,
"yunet": YunetWrapper.build_model,
"fastmtcnn": FastMtcnnWrapper.build_model,
}
if not "face_detector_obj" in globals():
face_detector_obj = {}
built_models = list(face_detector_obj.keys())
if detector_backend not in built_models:
face_detector = backends.get(detector_backend)
if face_detector:
face_detector = face_detector()
face_detector_obj[detector_backend] = face_detector
else:
raise ValueError("invalid detector_backend passed - " + detector_backend)
return face_detector_obj[detector_backend]
def detect_face(
face_detector: Any, detector_backend: str, img: np.ndarray, align: bool = True
) -> tuple:
"""
Detect a single face from a given image
Args:
face_detector (Any): pre-built face detector object
detector_backend (str): detector name
img (np.ndarray): pre-loaded image
align (bool): enable or disable alignment after detection
Returns:
result (tuple): face (np.ndarray), face region (list)
and confidence score (float)
"""
obj = detect_faces(face_detector, detector_backend, img, align)
if len(obj) > 0:
face, region, confidence = obj[0] # discard multiple faces
# If no face is detected, set face to None,
# image region to full image, and confidence to 0.
else: # len(obj) == 0
face = None
region = [0, 0, img.shape[1], img.shape[0]]
confidence = 0
return face, region, confidence
def detect_faces(
face_detector: Any, detector_backend: str, img: np.ndarray, align: bool = True
) -> list:
"""
Detect face(s) from a given image
Args:
face_detector (Any): pre-built face detector object
detector_backend (str): detector name
img (np.ndarray): pre-loaded image
align (bool): enable or disable alignment after detection
Returns:
result (list): list of tuples of face (np.ndarray), face region (list)
and confidence score (float)
"""
backends = {
"opencv": OpenCvWrapper.detect_face,
"ssd": SsdWrapper.detect_face,
"dlib": DlibWrapper.detect_face,
"mtcnn": MtcnnWrapper.detect_face,
"retinaface": RetinaFaceWrapper.detect_face,
"mediapipe": MediapipeWrapper.detect_face,
"yolov8": YoloWrapper.detect_face,
"yunet": YunetWrapper.detect_face,
"fastmtcnn": FastMtcnnWrapper.detect_face,
}
detect_face_fn = backends.get(detector_backend)
if detect_face_fn: # pylint: disable=no-else-return
obj = detect_face_fn(face_detector, img, align)
# obj stores list of (detected_face, region, confidence)
return obj
else:
raise ValueError("invalid detector_backend passed - " + detector_backend)
def get_alignment_angle_arctan2(
left_eye: Union[list, tuple], right_eye: Union[list, tuple]
) -> float:
"""
Find the angle between eyes
Args:
left_eye: coordinates of the left eye in the image
right_eye: coordinates of the right eye in the image
Returns:
angle (float)
"""
return float(np.degrees(np.arctan2(right_eye[1] - left_eye[1], right_eye[0] - left_eye[0])))
def alignment_procedure(
img: np.ndarray, left_eye: Union[list, tuple], right_eye: Union[list, tuple]
) -> np.ndarray:
"""
Rotate given image until eyes are on a horizontal line
Args:
img (np.ndarray): pre-loaded image
left_eye: coordinates of the left eye in the image
right_eye: coordinates of the right eye in the image
Returns:
result (np.ndarray): aligned face
"""
angle = get_alignment_angle_arctan2(left_eye, right_eye)
img = Image.fromarray(img)
img = np.array(img.rotate(angle))
return img
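# Worked example for the alignment math: if the right eye sits 100 px to the
# right of and 40 px below the left eye, the rotation angle is arctan2(40, 100).
if __name__ == "__main__":
    angle = get_alignment_angle_arctan2(left_eye=(100, 100), right_eye=(200, 140))
    print(round(angle, 1))  # expected: 21.8 (degrees)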

View File

@ -0,0 +1,79 @@
from typing import Any, Union
import cv2
import numpy as np
from deepface.detectors import FaceDetector
# Link -> https://github.com/timesler/facenet-pytorch
# Examples https://www.kaggle.com/timesler/guide-to-mtcnn-in-facenet-pytorch
def build_model() -> Any:
"""
Build a fast mtcnn face detector model
Returns:
model (Any)
"""
# this is not a must dependency. do not import it in the global level.
try:
from facenet_pytorch import MTCNN as fast_mtcnn
except ModuleNotFoundError as e:
raise ImportError(
"FastMtcnn is an optional detector, ensure the library is installed."
"Please install using 'pip install facenet-pytorch' "
) from e
face_detector = fast_mtcnn(
image_size=160,
thresholds=[0.6, 0.7, 0.7], # MTCNN thresholds
post_process=True,
device="cpu",
select_largest=False, # return result in descending order
)
return face_detector
def xyxy_to_xywh(xyxy: Union[list, tuple]) -> list:
"""
Convert xyxy format to xywh format.
"""
x, y = xyxy[0], xyxy[1]
w = xyxy[2] - x + 1
h = xyxy[3] - y + 1
return [x, y, w, h]
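# Worked example: the +1 counts both corner pixels, so
# xyxy_to_xywh([10, 20, 30, 50]) returns [10, 20, 21, 31].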
def detect_face(face_detector: Any, img: np.ndarray, align: bool = True) -> list:
"""
Detect and align face with mtcnn
Args:
face_detector (Any): mtcnn face detector object
img (np.ndarray): pre-loaded image
align (bool): default is true
Returns:
list of detected and aligned faces
"""
resp = []
detected_face = None
img_region = [0, 0, img.shape[1], img.shape[0]]
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # mtcnn expects RGB but OpenCV reads BGR
detections = face_detector.detect(
img_rgb, landmarks=True
) # returns boundingbox, prob, landmark
if detections[0] is not None and len(detections[0]) > 0:  # boxes are None when no face is found
for detection in zip(*detections):
x, y, w, h = xyxy_to_xywh(detection[0])
detected_face = img[int(y) : int(y + h), int(x) : int(x + w)]
img_region = [x, y, w, h]
confidence = detection[1]
if align:
left_eye = detection[2][0]
right_eye = detection[2][1]
detected_face = FaceDetector.alignment_procedure(detected_face, left_eye, right_eye)
resp.append((detected_face, img_region, confidence))
return resp

View File

@ -0,0 +1,78 @@
from typing import Any
import numpy as np
from deepface.detectors import FaceDetector
# Link - https://google.github.io/mediapipe/solutions/face_detection
def build_model() -> Any:
"""
Build a mediapipe face detector model
Returns:
model (Any)
"""
# this is not a must dependency. do not import it in the global level.
try:
import mediapipe as mp
except ModuleNotFoundError as e:
raise ImportError(
"MediaPipe is an optional detector, ensure the library is installed."
"Please install using 'pip install mediapipe' "
) from e
mp_face_detection = mp.solutions.face_detection
face_detection = mp_face_detection.FaceDetection(min_detection_confidence=0.7)
return face_detection
def detect_face(face_detector: Any, img: np.ndarray, align: bool = True) -> list:
"""
Detect and align face with mediapipe
Args:
face_detector (Any): mediapipe face detector object
img (np.ndarray): pre-loaded image
align (bool): default is true
Returns:
list of detected and aligned faces
"""
resp = []
img_width = img.shape[1]
img_height = img.shape[0]
results = face_detector.process(img)
# If no face has been detected, return an empty list
if results.detections is None:
return resp
# Extract the bounding box, the landmarks and the confidence score
for detection in results.detections:
(confidence,) = detection.score
bounding_box = detection.location_data.relative_bounding_box
landmarks = detection.location_data.relative_keypoints
x = int(bounding_box.xmin * img_width)
w = int(bounding_box.width * img_width)
y = int(bounding_box.ymin * img_height)
h = int(bounding_box.height * img_height)
# Extract landmarks
left_eye = (int(landmarks[0].x * img_width), int(landmarks[0].y * img_height))
right_eye = (int(landmarks[1].x * img_width), int(landmarks[1].y * img_height))
# nose = (int(landmarks[2].x * img_width), int(landmarks[2].y * img_height))
# mouth = (int(landmarks[3].x * img_width), int(landmarks[3].y * img_height))
# right_ear = (int(landmarks[4].x * img_width), int(landmarks[4].y * img_height))
# left_ear = (int(landmarks[5].x * img_width), int(landmarks[5].y * img_height))
if x > 0 and y > 0:
detected_face = img[y : y + h, x : x + w]
img_region = [x, y, w, h]
if align:
detected_face = FaceDetector.alignment_procedure(detected_face, left_eye, right_eye)
resp.append((detected_face, img_region, confidence))
return resp

View File

@ -0,0 +1,54 @@
from typing import Any
import cv2
import numpy as np
from deepface.detectors import FaceDetector
def build_model() -> Any:
"""
Build a mtcnn face detector model
Returns:
model (Any)
"""
from mtcnn import MTCNN
face_detector = MTCNN()
return face_detector
def detect_face(face_detector: Any, img: np.ndarray, align: bool = True) -> list:
"""
Detect and align face with mtcnn
Args:
face_detector (mtcnn.MTCNN): mtcnn face detector object
img (np.ndarray): pre-loaded image
align (bool): default is true
Returns:
list of detected and aligned faces
"""
resp = []
detected_face = None
img_region = [0, 0, img.shape[1], img.shape[0]]
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # mtcnn expects RGB but OpenCV reads BGR
detections = face_detector.detect_faces(img_rgb)
if len(detections) > 0:
for detection in detections:
x, y, w, h = detection["box"]
detected_face = img[int(y) : int(y + h), int(x) : int(x + w)]
img_region = [x, y, w, h]
confidence = detection["confidence"]
if align:
keypoints = detection["keypoints"]
left_eye = keypoints["left_eye"]
right_eye = keypoints["right_eye"]
detected_face = FaceDetector.alignment_procedure(detected_face, left_eye, right_eye)
resp.append((detected_face, img_region, confidence))
return resp

View File

@ -0,0 +1,158 @@
import os
from typing import Any
import cv2
import numpy as np
from deepface.detectors import FaceDetector
def build_model() -> dict:
"""
Build a opencv face&eye detector models
Returns:
model (Any)
"""
detector = {}
detector["face_detector"] = build_cascade("haarcascade")
detector["eye_detector"] = build_cascade("haarcascade_eye")
return detector
def build_cascade(model_name="haarcascade") -> Any:
"""
Build a opencv face&eye detector models
Returns:
model (Any)
"""
opencv_path = get_opencv_path()
if model_name == "haarcascade":
face_detector_path = opencv_path + "haarcascade_frontalface_default.xml"
if not os.path.isfile(face_detector_path):
raise ValueError(
"Confirm that opencv is installed on your environment! "
f"Expected path {face_detector_path} does not exist."
)
detector = cv2.CascadeClassifier(face_detector_path)
elif model_name == "haarcascade_eye":
eye_detector_path = opencv_path + "haarcascade_eye.xml"
if not os.path.isfile(eye_detector_path):
raise ValueError(
"Confirm that opencv is installed on your environment! "
f"Expected path {eye_detector_path} does not exist."
)
detector = cv2.CascadeClassifier(eye_detector_path)
else:
raise ValueError(f"unimplemented model_name for build_cascade - {model_name}")
return detector
def detect_face(detector: dict, img: np.ndarray, align: bool = True) -> list:
"""
Detect and align face with opencv
Args:
detector (dict): opencv face and eye detector objects
img (np.ndarray): pre-loaded image
align (bool): default is true
Returns:
list of detected and aligned faces
"""
resp = []
detected_face = None
img_region = [0, 0, img.shape[1], img.shape[0]]
faces = []
try:
# faces = detector["face_detector"].detectMultiScale(img, 1.3, 5)
# note that, by design, opencv's haarcascade scores are >0 but not capped at 1
faces, _, scores = detector["face_detector"].detectMultiScale3(
img, 1.1, 10, outputRejectLevels=True
)
except Exception:
pass  # if detection fails, continue with an empty faces list
if len(faces) > 0:
for (x, y, w, h), confidence in zip(faces, scores):
detected_face = img[int(y) : int(y + h), int(x) : int(x + w)]
if align:
detected_face = align_face(detector["eye_detector"], detected_face)
img_region = [x, y, w, h]
resp.append((detected_face, img_region, confidence))
return resp
def align_face(eye_detector: Any, img: np.ndarray) -> np.ndarray:
"""
Align a given image with the pre-built eye_detector
Args:
eye_detector (Any): cascade classifier object
img (np.ndarray): given image
Returns:
aligned_img (np.ndarray)
"""
# if the image unexpectedly has a 0 dimension, skip alignment
if img.shape[0] == 0 or img.shape[1] == 0:
return img
detected_face_gray = cv2.cvtColor(
img, cv2.COLOR_BGR2GRAY
) # eye detector expects gray scale image
# eyes = eye_detector.detectMultiScale(detected_face_gray, 1.3, 5)
eyes = eye_detector.detectMultiScale(detected_face_gray, 1.1, 10)
# ----------------------------------------------------------------
# opencv's eye detection module is not robust; it might find more than 2 eyes!
# besides, it returns eyes in a different order on each call (issue 435)
# this is an important issue because opencv is the default detector and ssd also uses this
# find the 2 largest eyes. Thanks to @thelostpeace
eyes = sorted(eyes, key=lambda v: abs(v[2] * v[3]), reverse=True)
# ----------------------------------------------------------------
if len(eyes) >= 2:
# decide left and right eye
eye_1 = eyes[0]
eye_2 = eyes[1]
if eye_1[0] < eye_2[0]:
left_eye = eye_1
right_eye = eye_2
else:
left_eye = eye_2
right_eye = eye_1
# -----------------------
# find center of eyes
left_eye = (int(left_eye[0] + (left_eye[2] / 2)), int(left_eye[1] + (left_eye[3] / 2)))
right_eye = (int(right_eye[0] + (right_eye[2] / 2)), int(right_eye[1] + (right_eye[3] / 2)))
img = FaceDetector.alignment_procedure(img, left_eye, right_eye)
return img # return img anyway
def get_opencv_path() -> str:
"""
Returns where opencv installed
Returns:
installation_path (str)
"""
opencv_home = cv2.__file__
folders = opencv_home.split(os.path.sep)[0:-1]
path = "/".join(folders)
return path + "/data/"

View File

@ -0,0 +1,60 @@
from typing import Any
import numpy as np
from retinaface import RetinaFace
from retinaface.commons import postprocess
def build_model() -> Any:
"""
Build a retinaface detector model
Returns:
model (Any)
"""
face_detector = RetinaFace.build_model()
return face_detector
def detect_face(face_detector: Any, img: np.ndarray, align: bool = True) -> list:
"""
Detect and align face with retinaface
Args:
face_detector (Any): retinaface face detector object
img (np.ndarray): pre-loaded image
align (bool): default is true
Returns:
list of detected and aligned faces
"""
resp = []
obj = RetinaFace.detect_faces(img, model=face_detector, threshold=0.9)
if isinstance(obj, dict):
for face_idx in obj.keys():
identity = obj[face_idx]
facial_area = identity["facial_area"]
y = facial_area[1]
h = facial_area[3] - y
x = facial_area[0]
w = facial_area[2] - x
img_region = [x, y, w, h]
confidence = identity["score"]
# detected_face = img[int(y):int(y+h), int(x):int(x+w)] #opencv
detected_face = img[facial_area[1] : facial_area[3], facial_area[0] : facial_area[2]]
if align:
landmarks = identity["landmarks"]
left_eye = landmarks["left_eye"]
right_eye = landmarks["right_eye"]
nose = landmarks["nose"]
# mouth_right = landmarks["mouth_right"]
# mouth_left = landmarks["mouth_left"]
detected_face = postprocess.alignment_procedure(
detected_face, right_eye, left_eye, nose
)
resp.append((detected_face, img_region, confidence))
return resp

View File

@ -0,0 +1,137 @@
import os
import gdown
import cv2
import pandas as pd
import numpy as np
from deepface.detectors import OpenCvWrapper
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="detectors.SsdWrapper")
# pylint: disable=line-too-long
def build_model() -> dict:
"""
Build a ssd detector model
Returns:
model (Any)
"""
home = functions.get_deepface_home()
# model structure
if not os.path.isfile(home + "/.deepface/weights/deploy.prototxt"):
logger.info("deploy.prototxt will be downloaded...")
url = "https://github.com/opencv/opencv/raw/3.4.0/samples/dnn/face_detector/deploy.prototxt"
output = home + "/.deepface/weights/deploy.prototxt"
gdown.download(url, output, quiet=False)
# pre-trained weights
if not os.path.isfile(home + "/.deepface/weights/res10_300x300_ssd_iter_140000.caffemodel"):
logger.info("res10_300x300_ssd_iter_140000.caffemodel will be downloaded...")
url = "https://github.com/opencv/opencv_3rdparty/raw/dnn_samples_face_detector_20170830/res10_300x300_ssd_iter_140000.caffemodel"
output = home + "/.deepface/weights/res10_300x300_ssd_iter_140000.caffemodel"
gdown.download(url, output, quiet=False)
try:
face_detector = cv2.dnn.readNetFromCaffe(
home + "/.deepface/weights/deploy.prototxt",
home + "/.deepface/weights/res10_300x300_ssd_iter_140000.caffemodel",
)
except Exception as err:
raise ValueError(
"Exception while calling opencv.dnn module."
+ "This is an optional dependency."
+ "You can install it as pip install opencv-contrib-python."
) from err
eye_detector = OpenCvWrapper.build_cascade("haarcascade_eye")
detector = {}
detector["face_detector"] = face_detector
detector["eye_detector"] = eye_detector
return detector
def detect_face(detector: dict, img: np.ndarray, align: bool = True) -> list:
"""
Detect and align face with ssd
Args:
detector (dict): ssd face detector and eye detector objects
img (np.ndarray): pre-loaded image
align (bool): default is true
Returns:
list of detected and aligned faces
"""
resp = []
detected_face = None
img_region = [0, 0, img.shape[1], img.shape[0]]
ssd_labels = ["img_id", "is_face", "confidence", "left", "top", "right", "bottom"]
target_size = (300, 300)
base_img = img.copy()  # keep an original-resolution copy to crop detected faces from later
original_size = img.shape
img = cv2.resize(img, target_size)
aspect_ratio_x = original_size[1] / target_size[1]
aspect_ratio_y = original_size[0] / target_size[0]
imageBlob = cv2.dnn.blobFromImage(image=img)
face_detector = detector["face_detector"]
face_detector.setInput(imageBlob)
detections = face_detector.forward()
detections_df = pd.DataFrame(detections[0][0], columns=ssd_labels)
detections_df = detections_df[detections_df["is_face"] == 1] # 0: background, 1: face
detections_df = detections_df[detections_df["confidence"] >= 0.90]
detections_df["left"] = (detections_df["left"] * 300).astype(int)
detections_df["bottom"] = (detections_df["bottom"] * 300).astype(int)
detections_df["right"] = (detections_df["right"] * 300).astype(int)
detections_df["top"] = (detections_df["top"] * 300).astype(int)
if detections_df.shape[0] > 0:
for _, instance in detections_df.iterrows():
left = instance["left"]
right = instance["right"]
bottom = instance["bottom"]
top = instance["top"]
detected_face = base_img[
int(top * aspect_ratio_y) : int(bottom * aspect_ratio_y),
int(left * aspect_ratio_x) : int(right * aspect_ratio_x),
]
img_region = [
int(left * aspect_ratio_x),
int(top * aspect_ratio_y),
int(right * aspect_ratio_x) - int(left * aspect_ratio_x),
int(bottom * aspect_ratio_y) - int(top * aspect_ratio_y),
]
confidence = instance["confidence"]
if align:
detected_face = OpenCvWrapper.align_face(detector["eye_detector"], detected_face)
resp.append((detected_face, img_region, confidence))
return resp
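
A minimal usage sketch for this wrapper (illustrative only: the image path is a placeholder, and the prototxt/caffemodel weights are downloaded on the first build_model call):

    import cv2
    from deepface.detectors import SsdWrapper

    detector = SsdWrapper.build_model()
    img = cv2.imread("img.jpg")  # any BGR image containing a face
    for detected_face, region, confidence in SsdWrapper.detect_face(detector, img, align=True):
        print(region, confidence)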

View File

@ -0,0 +1,90 @@
from typing import Any
import numpy as np
from deepface.detectors import FaceDetector
from deepface.commons.logger import Logger
logger = Logger()
# Model's weights paths
PATH = "/.deepface/weights/yolov8n-face.pt"
# Google Drive URL
WEIGHT_URL = "https://drive.google.com/uc?id=1qcr9DbgsX3ryrz2uU8w4Xm3cOrRywXqb"
# Confidence thresholds for landmarks detection
# used in alignment_procedure function
LANDMARKS_CONFIDENCE_THRESHOLD = 0.5
def build_model() -> Any:
"""
Build a yolo detector model
Returns:
model (Any)
"""
import gdown
import os
# Import the Ultralytics YOLO model
try:
from ultralytics import YOLO
except ModuleNotFoundError as e:
raise ImportError(
"Yolo is an optional detector, ensure the library is installed. \
Please install using 'pip install ultralytics' "
) from e
from deepface.commons.functions import get_deepface_home
weight_path = f"{get_deepface_home()}{PATH}"
# Download the model's weights if they don't exist
if not os.path.isfile(weight_path):
gdown.download(WEIGHT_URL, weight_path, quiet=False)
logger.info(f"Downloaded YOLO model {os.path.basename(weight_path)}")
# Return face_detector
return YOLO(weight_path)
def detect_face(face_detector: Any, img: np.ndarray, align: bool = False) -> list:
"""
Detect and align face with yolo
Args:
face_detector (Any): yolo face detector object
img (np.ndarray): pre-loaded image
align (bool): enable alignment by eye landmarks, default is False
Returns:
list of (detected_face, region, confidence) tuples
"""
resp = []
# Detect faces
results = face_detector.predict(img, verbose=False, show=False, conf=0.25)[0]
# For each face, extract the bounding box, the landmarks and confidence
for result in results:
# Extract the bounding box and the confidence
x, y, w, h = result.boxes.xywh.tolist()[0]
confidence = result.boxes.conf.tolist()[0]
x, y, w, h = int(x - w / 2), int(y - h / 2), int(w), int(h)
detected_face = img[y : y + h, x : x + w].copy()
if align:
# Tuple of x,y and confidence for left eye
left_eye = result.keypoints.xy[0][0], result.keypoints.conf[0][0]
# Tuple of x,y and confidence for right eye
right_eye = result.keypoints.xy[0][1], result.keypoints.conf[0][1]
# Check the landmarks confidence before alignment
if (
left_eye[1] > LANDMARKS_CONFIDENCE_THRESHOLD
and right_eye[1] > LANDMARKS_CONFIDENCE_THRESHOLD
):
detected_face = FaceDetector.alignment_procedure(
detected_face, left_eye[0].cpu(), right_eye[0].cpu()
)
resp.append((detected_face, [x, y, w, h], confidence))
return resp
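
A minimal usage sketch, assuming this module is importable as deepface.detectors.YoloWrapper and that ultralytics is installed (the image path is a placeholder; yolov8n-face.pt is downloaded on the first build_model call):

    import cv2
    from deepface.detectors import YoloWrapper

    face_detector = YoloWrapper.build_model()
    img = cv2.imread("img.jpg")
    for detected_face, region, confidence in YoloWrapper.detect_face(face_detector, img, align=True):
        print(region, confidence)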

View File

@ -0,0 +1,114 @@
import os
from typing import Any
import cv2
import numpy as np
import gdown
from deepface.detectors import FaceDetector
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="detectors.YunetWrapper")
def build_model() -> Any:
"""
Build a yunet detector model
Returns:
model (Any)
"""
# pylint: disable=C0301
url = "https://github.com/opencv/opencv_zoo/raw/main/models/face_detection_yunet/face_detection_yunet_2023mar.onnx"
file_name = "face_detection_yunet_2023mar.onnx"
home = functions.get_deepface_home()
if os.path.isfile(home + f"/.deepface/weights/{file_name}") is False:
logger.info(f"{file_name} will be downloaded...")
output = home + f"/.deepface/weights/{file_name}"
gdown.download(url, output, quiet=False)
try:
face_detector = cv2.FaceDetectorYN_create(
home + f"/.deepface/weights/{file_name}", "", (0, 0)
)
except Exception as err:
raise ValueError(
"Exception while calling opencv.FaceDetectorYN_create module."
+ "This is an optional dependency."
+ "You can install it as pip install opencv-contrib-python."
) from err
return face_detector
def detect_face(
detector: Any, image: np.ndarray, align: bool = True, score_threshold: float = 0.9
) -> list:
"""
Detect and align face with yunet
Args:
detector (Any): yunet face detector object returned by build_model
image (np.ndarray): pre-loaded image
align (bool): enable alignment by the eye landmarks, default is True
score_threshold (float): minimum confidence for a detection to be kept
Returns:
list of (detected_face, region, confidence) tuples
"""
# FaceDetector.detect_faces does not support score_threshold parameter.
# We can set it via environment variable.
score_threshold = float(os.environ.get("yunet_score_threshold", score_threshold))
resp = []
detected_face = None
img_region = [0, 0, image.shape[1], image.shape[0]]
faces = []
height, width = image.shape[0], image.shape[1]
# resize image if it is too large (Yunet fails to detect faces on large input sometimes)
# I picked 640 as a threshold because it is the default value of max_size in Yunet.
resized = False
if height > 640 or width > 640:
r = 640.0 / max(height, width)
original_image = image.copy()
image = cv2.resize(image, (int(width * r), int(height * r)))
height, width = image.shape[0], image.shape[1]
resized = True
detector.setInputSize((width, height))
detector.setScoreThreshold(score_threshold)
_, faces = detector.detect(image)
if faces is None:
return resp
for face in faces:
# pylint: disable=W0105
"""
The detection output faces is a two-dimension array of type CV_32F,
whose rows are the detected face instances, columns are the location
of a face and 5 facial landmarks.
The format of each row is as follows:
x1, y1, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt,
x_rcm, y_rcm, x_lcm, y_lcm,
where x1, y1, w, h are the top-left coordinates, width and height of
the face bounding box,
{x, y}_{re, le, nt, rcm, lcm} stands for the coordinates of right eye,
left eye, nose tip, the right corner and left corner of the mouth respectively.
"""
(x, y, w, h, x_re, y_re, x_le, y_le) = list(map(int, face[:8]))
# Yunet returns negative coordinates if it thinks part of
# the detected face is outside the frame.
# We set the coordinate to 0 if they are negative.
x = max(x, 0)
y = max(y, 0)
if resized:
image = original_image
x, y, w, h = int(x / r), int(y / r), int(w / r), int(h / r)
x_re, y_re, x_le, y_le = (
int(x_re / r),
int(y_re / r),
int(x_le / r),
int(y_le / r),
)
confidence = face[-1]
confidence = f"{confidence:.2f}"
detected_face = image[int(y) : int(y + h), int(x) : int(x + w)]
img_region = [x, y, w, h]
if align:
detected_face = FaceDetector.alignment_procedure(
detected_face, (x_re, y_re), (x_le, y_le)
)
resp.append((detected_face, img_region, confidence))
return resp
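
A minimal usage sketch (illustrative; the image path is a placeholder). As noted in the comments above, the score threshold can also be overridden through the yunet_score_threshold environment variable:

    import os
    import cv2
    from deepface.detectors import YunetWrapper

    os.environ["yunet_score_threshold"] = "0.8"  # optional override
    detector = YunetWrapper.build_model()
    img = cv2.imread("img.jpg")
    for detected_face, region, confidence in YunetWrapper.detect_face(detector, img, align=True):
        print(region, confidence)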

View File

@ -0,0 +1,66 @@
import os
import gdown
import numpy as np
import tensorflow as tf
from deepface.basemodels import VGGFace
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="extendedmodels.Age")
# ----------------------------------------
# dependency configurations
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model, Sequential
from keras.layers import Convolution2D, Flatten, Activation
else:
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Convolution2D, Flatten, Activation
# ----------------------------------------
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/age_model_weights.h5",
) -> Model:
model = VGGFace.baseModel()
# --------------------------
classes = 101
base_model_output = Sequential()
base_model_output = Convolution2D(classes, (1, 1), name="predictions")(model.layers[-4].output)
base_model_output = Flatten()(base_model_output)
base_model_output = Activation("softmax")(base_model_output)
# --------------------------
age_model = Model(inputs=model.input, outputs=base_model_output)
# --------------------------
# load weights
home = functions.get_deepface_home()
if os.path.isfile(home + "/.deepface/weights/age_model_weights.h5") != True:
logger.info("age_model_weights.h5 will be downloaded...")
output = home + "/.deepface/weights/age_model_weights.h5"
gdown.download(url, output, quiet=False)
age_model.load_weights(home + "/.deepface/weights/age_model_weights.h5")
return age_model
# --------------------------
def findApparentAge(age_predictions) -> np.float64:
output_indexes = np.array(list(range(0, 101)))
apparent_age = np.sum(age_predictions * output_indexes)
return apparent_age
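
A worked example of findApparentAge: the apparent age is the expectation over the 101 age bins, so a softmax output split 70/30 between bins 30 and 31 yields 0.7 * 30 + 0.3 * 31 = 30.3:

    import numpy as np
    from deepface.extendedmodels import Age

    age_predictions = np.zeros(101)
    age_predictions[30], age_predictions[31] = 0.7, 0.3
    print(Age.findApparentAge(age_predictions))  # 30.3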

View File

@ -0,0 +1,78 @@
import os
import gdown
import tensorflow as tf
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="extendedmodels.Emotion")
# -------------------------------------------
# pylint: disable=line-too-long
# -------------------------------------------
# dependency configuration
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Flatten, Dense, Dropout
else:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
Conv2D,
MaxPooling2D,
AveragePooling2D,
Flatten,
Dense,
Dropout,
)
# -------------------------------------------
# Labels for the emotions that can be detected by the model.
labels = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/facial_expression_model_weights.h5",
) -> Sequential:
num_classes = 7
model = Sequential()
# 1st convolution layer
model.add(Conv2D(64, (5, 5), activation="relu", input_shape=(48, 48, 1)))
model.add(MaxPooling2D(pool_size=(5, 5), strides=(2, 2)))
# 2nd convolution layer
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(AveragePooling2D(pool_size=(3, 3), strides=(2, 2)))
# 3rd convolution layer
model.add(Conv2D(128, (3, 3), activation="relu"))
model.add(Conv2D(128, (3, 3), activation="relu"))
model.add(AveragePooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(Flatten())
# fully connected neural networks
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation="softmax"))
# ----------------------------
home = functions.get_deepface_home()
if os.path.isfile(home + "/.deepface/weights/facial_expression_model_weights.h5") != True:
logger.info("facial_expression_model_weights.h5 will be downloaded...")
output = home + "/.deepface/weights/facial_expression_model_weights.h5"
gdown.download(url, output, quiet=False)
model.load_weights(home + "/.deepface/weights/facial_expression_model_weights.h5")
return model
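
A minimal inference sketch (illustrative: face.jpg is a placeholder for an already-cropped face, weights are downloaded on the first loadModel call, and the explicit reshape to (1, 48, 48, 1) plus 0-1 scaling mirror the preprocessing done in the analyze pipeline):

    import cv2
    import numpy as np
    from deepface.extendedmodels import Emotion

    model = Emotion.loadModel()
    img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (48, 48)).astype(np.float32) / 255.0
    img = img.reshape(1, 48, 48, 1)  # add batch and channel dimensions
    predictions = model.predict(img, verbose=0)[0]
    print(Emotion.labels[int(np.argmax(predictions))])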

View File

@ -0,0 +1,61 @@
import os
import gdown
import tensorflow as tf
from deepface.basemodels import VGGFace
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="extendedmodels.Gender")
# -------------------------------------
# pylint: disable=line-too-long
# -------------------------------------
# dependency configurations
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model, Sequential
from keras.layers import Convolution2D, Flatten, Activation
else:
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Convolution2D, Flatten, Activation
# -------------------------------------
# Labels for the genders that can be detected by the model.
labels = ["Woman", "Man"]
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/gender_model_weights.h5",
) -> Model:
model = VGGFace.baseModel()
# --------------------------
classes = 2
base_model_output = Sequential()
base_model_output = Convolution2D(classes, (1, 1), name="predictions")(model.layers[-4].output)
base_model_output = Flatten()(base_model_output)
base_model_output = Activation("softmax")(base_model_output)
# --------------------------
gender_model = Model(inputs=model.input, outputs=base_model_output)
# --------------------------
# load weights
home = functions.get_deepface_home()
if os.path.isfile(home + "/.deepface/weights/gender_model_weights.h5") != True:
logger.info("gender_model_weights.h5 will be downloaded...")
output = home + "/.deepface/weights/gender_model_weights.h5"
gdown.download(url, output, quiet=False)
gender_model.load_weights(home + "/.deepface/weights/gender_model_weights.h5")
return gender_model
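
The dominant label is simply the argmax over the two-class output, mirroring what the analyze module does with these labels (the prediction values below are hypothetical):

    import numpy as np
    from deepface.extendedmodels import Gender

    gender_predictions = np.array([0.95, 0.05])  # hypothetical softmax output
    print(Gender.labels[int(np.argmax(gender_predictions))])  # Woman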

View File

@ -0,0 +1,59 @@
import os
import gdown
import tensorflow as tf
from deepface.basemodels import VGGFace
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="extendedmodels.Race")
# --------------------------
# pylint: disable=line-too-long
# --------------------------
# dependency configurations
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 1:
from keras.models import Model, Sequential
from keras.layers import Convolution2D, Flatten, Activation
else:
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Convolution2D, Flatten, Activation
# --------------------------
# Labels for the ethnic phenotypes that can be detected by the model.
labels = ["asian", "indian", "black", "white", "middle eastern", "latino hispanic"]
def loadModel(
url="https://github.com/serengil/deepface_models/releases/download/v1.0/race_model_single_batch.h5",
) -> Model:
model = VGGFace.baseModel()
# --------------------------
classes = 6
base_model_output = Sequential()
base_model_output = Convolution2D(classes, (1, 1), name="predictions")(model.layers[-4].output)
base_model_output = Flatten()(base_model_output)
base_model_output = Activation("softmax")(base_model_output)
# --------------------------
race_model = Model(inputs=model.input, outputs=base_model_output)
# --------------------------
# load weights
home = functions.get_deepface_home()
if os.path.isfile(home + "/.deepface/weights/race_model_single_batch.h5") != True:
logger.info("race_model_single_batch.h5 will be downloaded...")
output = home + "/.deepface/weights/race_model_single_batch.h5"
gdown.download(url, output, quiet=False)
race_model.load_weights(home + "/.deepface/weights/race_model_single_batch.h5")
return race_model

View File

@ -0,0 +1,184 @@
# built-in dependencies
from typing import Any, Dict, List, Union
# 3rd party dependencies
import numpy as np
from tqdm import tqdm
import cv2
# project dependencies
from deepface.modules import modeling
from deepface.commons import functions
from deepface.extendedmodels import Age, Gender, Race, Emotion
def analyze(
img_path: Union[str, np.ndarray],
actions: Union[tuple, list] = ("emotion", "age", "gender", "race"),
enforce_detection: bool = True,
detector_backend: str = "opencv",
align: bool = True,
silent: bool = False,
) -> List[Dict[str, Any]]:
"""
This function analyzes facial attributes including age, gender, emotion and race.
In the background, the analysis function builds convolutional neural network models to
classify the age, gender, emotion and race of the input image.
Parameters:
img_path: exact image path, numpy array (BGR) or base64 encoded image could be passed.
If the source image contains more than one face, the result will contain one
entry per detected face.
actions (tuple): The default is ('age', 'gender', 'emotion', 'race'). You can drop
some of those attributes.
enforce_detection (bool): By default the function throws an exception if no face is
detected. Set this to False to avoid the exception; this might be convenient for
low-resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8.
align (boolean): alignment according to the eye positions.
silent (boolean): disable (some) log messages
Returns:
The function returns a list of dictionaries for each face appearing in the image.
[
{
"region": {'x': 230, 'y': 120, 'w': 36, 'h': 45},
"age": 28.66,
'face_confidence': 0.9993908405303955,
"dominant_gender": "Woman",
"gender": {
'Woman': 99.99407529830933,
'Man': 0.005928758764639497,
}
"dominant_emotion": "neutral",
"emotion": {
'sad': 37.65260875225067,
'angry': 0.15512987738475204,
'surprise': 0.0022171278033056296,
'fear': 1.2489334680140018,
'happy': 4.609785228967667,
'disgust': 9.698561953541684e-07,
'neutral': 56.33133053779602
}
"dominant_race": "white",
"race": {
'indian': 0.5480832420289516,
'asian': 0.7830780930817127,
'latino hispanic': 2.0677512511610985,
'black': 0.06337375962175429,
'middle eastern': 3.088453598320484,
'white': 93.44925880432129
}
}
]
"""
# ---------------------------------
# validate actions
if isinstance(actions, str):
actions = (actions,)
# check if actions is not an iterable or empty.
if not hasattr(actions, "__getitem__") or not actions:
raise ValueError("`actions` must be a list of strings.")
actions = list(actions)
# For each action, check if it is valid
for action in actions:
if action not in ("emotion", "age", "gender", "race"):
raise ValueError(
f"Invalid action passed ({repr(action)})). "
"Valid actions are `emotion`, `age`, `gender`, `race`."
)
# ---------------------------------
resp_objects = []
img_objs = functions.extract_faces(
img=img_path,
target_size=(224, 224),
detector_backend=detector_backend,
grayscale=False,
enforce_detection=enforce_detection,
align=align,
)
for img_content, img_region, img_confidence in img_objs:
if img_content.shape[0] > 0 and img_content.shape[1] > 0:
obj = {}
# facial attribute analysis
pbar = tqdm(
range(0, len(actions)),
desc="Finding actions",
disable=silent if len(actions) > 1 else True,
)
for index in pbar:
action = actions[index]
pbar.set_description(f"Action: {action}")
if action == "emotion":
img_gray = cv2.cvtColor(img_content[0], cv2.COLOR_BGR2GRAY)
img_gray = cv2.resize(img_gray, (48, 48))
img_gray = np.expand_dims(img_gray, axis=0)
emotion_predictions = modeling.build_model("Emotion").predict(
img_gray, verbose=0
)[0, :]
sum_of_predictions = emotion_predictions.sum()
obj["emotion"] = {}
for i, emotion_label in enumerate(Emotion.labels):
emotion_prediction = 100 * emotion_predictions[i] / sum_of_predictions
obj["emotion"][emotion_label] = emotion_prediction
obj["dominant_emotion"] = Emotion.labels[np.argmax(emotion_predictions)]
elif action == "age":
age_predictions = modeling.build_model("Age").predict(img_content, verbose=0)[
0, :
]
apparent_age = Age.findApparentAge(age_predictions)
# int cast is for exception - object of type 'float32' is not JSON serializable
obj["age"] = int(apparent_age)
elif action == "gender":
gender_predictions = modeling.build_model("Gender").predict(
img_content, verbose=0
)[0, :]
obj["gender"] = {}
for i, gender_label in enumerate(Gender.labels):
gender_prediction = 100 * gender_predictions[i]
obj["gender"][gender_label] = gender_prediction
obj["dominant_gender"] = Gender.labels[np.argmax(gender_predictions)]
elif action == "race":
race_predictions = modeling.build_model("Race").predict(img_content, verbose=0)[
0, :
]
sum_of_predictions = race_predictions.sum()
obj["race"] = {}
for i, race_label in enumerate(Race.labels):
race_prediction = 100 * race_predictions[i] / sum_of_predictions
obj["race"][race_label] = race_prediction
obj["dominant_race"] = Race.labels[np.argmax(race_predictions)]
# -----------------------------
# mention facial areas
obj["region"] = img_region
# include image confidence
obj["face_confidence"] = img_confidence
resp_objects.append(obj)
return resp_objects
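
A minimal end-to-end sketch of this function through the public DeepFace API (illustrative; the image path is a placeholder and model weights are downloaded on first use):

    from deepface import DeepFace

    results = DeepFace.analyze(img_path="img.jpg", actions=("age", "gender"))
    for face in results:
        print(face["region"], face["age"], face["dominant_gender"])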

View File

@ -0,0 +1,72 @@
# built-in dependencies
from typing import Any, Dict, List, Tuple, Union
# 3rd part dependencies
import numpy as np
# project dependencies
from deepface.commons import functions
def extract_faces(
img_path: Union[str, np.ndarray],
target_size: Tuple[int, int] = (224, 224),
detector_backend: str = "opencv",
enforce_detection: bool = True,
align: bool = True,
grayscale: bool = False,
) -> List[Dict[str, Any]]:
"""
This function applies pre-processing stages of a face recognition pipeline
including detection and alignment
Parameters:
img_path: exact image path, numpy array (BGR) or base64 encoded image.
The source image can contain many faces; the result will then contain
one entry per detected face.
target_size (tuple): final shape of facial image. black pixels will be
added to resize the image.
detector_backend (string): face detection backends are retinaface, mtcnn,
opencv, ssd or dlib
enforce_detection (boolean): the function throws an exception if no face
can be detected in the given image. Set this to False to run the
function anyway without an exception.
align (boolean): alignment according to the eye positions.
grayscale (boolean): extract faces in RGB or grayscale
Returns:
list of dictionaries. Each dictionary will have facial image itself (RGB),
extracted area from the original image and confidence score.
"""
resp_objs = []
img_objs = functions.extract_faces(
img=img_path,
target_size=target_size,
detector_backend=detector_backend,
grayscale=grayscale,
enforce_detection=enforce_detection,
align=align,
)
for img, region, confidence in img_objs:
resp_obj = {}
# discard expanded dimension
if len(img.shape) == 4:
img = img[0]
# bgr to rgb
resp_obj["face"] = img[:, :, ::-1]
resp_obj["facial_area"] = region
resp_obj["confidence"] = confidence
resp_objs.append(resp_obj)
return resp_objs
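
A minimal usage sketch through the public DeepFace API (illustrative; the image path is a placeholder):

    from deepface import DeepFace

    face_objs = DeepFace.extract_faces(img_path="img.jpg", detector_backend="opencv")
    for face_obj in face_objs:
        print(face_obj["facial_area"], face_obj["confidence"])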

View File

@ -0,0 +1,71 @@
# built-in dependencies
from typing import Any, Union
# 3rd party dependencies
import tensorflow as tf
# project dependencies
from deepface.basemodels import (
VGGFace,
OpenFace,
Facenet,
Facenet512,
FbDeepFace,
DeepID,
DlibWrapper,
ArcFace,
SFace,
)
from deepface.extendedmodels import Age, Gender, Race, Emotion
# conditional dependencies
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 2:
from tensorflow.keras.models import Model
else:
from keras.models import Model
def build_model(model_name: str) -> Union[Model, Any]:
"""
This function builds a deepface model
Parameters:
model_name (string): face recognition or facial attribute model
VGG-Face, Facenet, OpenFace, DeepFace, DeepID for face recognition
Age, Gender, Emotion, Race for facial attributes
Returns:
built deepface model ( (tf.)keras.models.Model )
"""
# singleton design pattern
global model_obj
models = {
"VGG-Face": VGGFace.loadModel,
"OpenFace": OpenFace.loadModel,
"Facenet": Facenet.loadModel,
"Facenet512": Facenet512.loadModel,
"DeepFace": FbDeepFace.loadModel,
"DeepID": DeepID.loadModel,
"Dlib": DlibWrapper.loadModel,
"ArcFace": ArcFace.loadModel,
"SFace": SFace.load_model,
"Emotion": Emotion.loadModel,
"Age": Age.loadModel,
"Gender": Gender.loadModel,
"Race": Race.loadModel,
}
if not "model_obj" in globals():
model_obj = {}
if not model_name in model_obj:
model = models.get(model_name)
if model:
model = model()
model_obj[model_name] = model
else:
raise ValueError(f"Invalid model_name passed - {model_name}")
return model_obj[model_name]
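
Because of the singleton pattern above, repeated calls with the same model name return the same cached instance; a small sketch to illustrate:

    from deepface.modules import modeling

    model_a = modeling.build_model("Emotion")
    model_b = modeling.build_model("Emotion")
    assert model_a is model_b  # cached in the module-level model_obj dict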

View File

@ -0,0 +1,717 @@
import os
import time
import numpy as np
import pandas as pd
import cv2
from deepface import DeepFace
from deepface.commons import functions
from deepface.commons.logger import Logger
logger = Logger(module="commons.realtime")
# dependency configuration
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
# pylint: disable=too-many-nested-blocks
def analysis(
db_path,
model_name="VGG-Face",
detector_backend="opencv",
distance_metric="cosine",
enable_face_analysis=True,
source=0,
time_threshold=5,
frame_threshold=5,
):
# global variables
text_color = (255, 255, 255)
pivot_img_size = 112 # face recognition result image
enable_emotion = True
enable_age_gender = True
# ------------------------
# find custom values for this input set
target_size = functions.find_target_size(model_name=model_name)
# ------------------------
# build models once to store them in the memory
# otherwise, they will be built after cam started and this will cause delays
DeepFace.build_model(model_name=model_name)
logger.info(f"facial recognition model {model_name} is just built")
if enable_face_analysis:
DeepFace.build_model(model_name="Age")
logger.info("Age model is just built")
DeepFace.build_model(model_name="Gender")
logger.info("Gender model is just built")
DeepFace.build_model(model_name="Emotion")
logger.info("Emotion model is just built")
# -----------------------
# call a dummy find function for db_path once to create embeddings in the initialization
DeepFace.find(
img_path=np.zeros([224, 224, 3]),
db_path=db_path,
model_name=model_name,
detector_backend=detector_backend,
distance_metric=distance_metric,
enforce_detection=False,
)
# -----------------------
# visualization
freeze = False
face_detected = False
face_included_frames = 0  # freeze screen if face detected sequentially for 5 frames
freezed_frame = 0
tic = time.time()
cap = cv2.VideoCapture(source) # webcam
while True:
_, img = cap.read()
if img is None:
break
# cv2.namedWindow('img', cv2.WINDOW_FREERATIO)
# cv2.setWindowProperty('img', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
raw_img = img.copy()
resolution_x = img.shape[1]
resolution_y = img.shape[0]
if not freeze:
try:
# just extract the regions to highlight in webcam
face_objs = DeepFace.extract_faces(
img_path=img,
target_size=target_size,
detector_backend=detector_backend,
enforce_detection=False,
)
faces = []
for face_obj in face_objs:
facial_area = face_obj["facial_area"]
faces.append(
(
facial_area["x"],
facial_area["y"],
facial_area["w"],
facial_area["h"],
)
)
except Exception:  # to avoid exception if no face detected
faces = []
if len(faces) == 0:
face_included_frames = 0
else:
faces = []
detected_faces = []
face_index = 0
for x, y, w, h in faces:
if w > 130: # discard small detected faces
face_detected = True
if face_index == 0:
face_included_frames = (
face_included_frames + 1
) # increase frame for a single face
cv2.rectangle(
img, (x, y), (x + w, y + h), (67, 67, 67), 1
) # draw rectangle to main image
cv2.putText(
img,
str(frame_threshold - face_included_frames),
(int(x + w / 4), int(y + h / 1.5)),
cv2.FONT_HERSHEY_SIMPLEX,
4,
(255, 255, 255),
2,
)
detected_face = img[int(y) : int(y + h), int(x) : int(x + w)] # crop detected face
# -------------------------------------
detected_faces.append((x, y, w, h))
face_index = face_index + 1
# -------------------------------------
if face_detected and face_included_frames == frame_threshold and not freeze:
freeze = True
# base_img = img.copy()
base_img = raw_img.copy()
detected_faces_final = detected_faces.copy()
tic = time.time()
if freeze:
toc = time.time()
if (toc - tic) < time_threshold:
if freezed_frame == 0:
freeze_img = base_img.copy()
# here, np.uint8 handles showing white area issue
# freeze_img = np.zeros(resolution, np.uint8)
for detected_face in detected_faces_final:
x = detected_face[0]
y = detected_face[1]
w = detected_face[2]
h = detected_face[3]
cv2.rectangle(
freeze_img, (x, y), (x + w, y + h), (67, 67, 67), 1
) # draw rectangle to main image
# -------------------------------
# extract detected face
custom_face = base_img[y : y + h, x : x + w]
# -------------------------------
# facial attribute analysis
if enable_face_analysis:
demographies = DeepFace.analyze(
img_path=custom_face,
detector_backend=detector_backend,
enforce_detection=False,
silent=True,
)
if len(demographies) > 0:
# directly access 1st face because img is extracted already
demography = demographies[0]
if enable_emotion:
emotion = demography["emotion"]
emotion_df = pd.DataFrame(
emotion.items(), columns=["emotion", "score"]
)
emotion_df = emotion_df.sort_values(
by=["score"], ascending=False
).reset_index(drop=True)
# background of mood box
# transparency
overlay = freeze_img.copy()
opacity = 0.4
if x + w + pivot_img_size < resolution_x:
# right
cv2.rectangle(
freeze_img
# , (x+w,y+20)
,
(x + w, y),
(x + w + pivot_img_size, y + h),
(64, 64, 64),
cv2.FILLED,
)
cv2.addWeighted(
overlay, opacity, freeze_img, 1 - opacity, 0, freeze_img
)
elif x - pivot_img_size > 0:
# left
cv2.rectangle(
freeze_img
# , (x-pivot_img_size,y+20)
,
(x - pivot_img_size, y),
(x, y + h),
(64, 64, 64),
cv2.FILLED,
)
cv2.addWeighted(
overlay, opacity, freeze_img, 1 - opacity, 0, freeze_img
)
for index, instance in emotion_df.iterrows():
current_emotion = instance["emotion"]
emotion_label = f"{current_emotion} "
emotion_score = instance["score"] / 100
bar_x = 35  # this is the bar size when an emotion is 100%
bar_x = int(bar_x * emotion_score)
if x + w + pivot_img_size < resolution_x:
text_location_y = y + 20 + (index + 1) * 20
text_location_x = x + w
if text_location_y < y + h:
cv2.putText(
freeze_img,
emotion_label,
(text_location_x, text_location_y),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
(255, 255, 255),
1,
)
cv2.rectangle(
freeze_img,
(x + w + 70, y + 13 + (index + 1) * 20),
(
x + w + 70 + bar_x,
y + 13 + (index + 1) * 20 + 5,
),
(255, 255, 255),
cv2.FILLED,
)
elif x - pivot_img_size > 0:
text_location_y = y + 20 + (index + 1) * 20
text_location_x = x - pivot_img_size
if text_location_y <= y + h:
cv2.putText(
freeze_img,
emotion_label,
(text_location_x, text_location_y),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
(255, 255, 255),
1,
)
cv2.rectangle(
freeze_img,
(
x - pivot_img_size + 70,
y + 13 + (index + 1) * 20,
),
(
x - pivot_img_size + 70 + bar_x,
y + 13 + (index + 1) * 20 + 5,
),
(255, 255, 255),
cv2.FILLED,
)
if enable_age_gender:
apparent_age = demography["age"]
dominant_gender = demography["dominant_gender"]
gender = "M" if dominant_gender == "Man" else "W"
logger.debug(f"{apparent_age} years old {dominant_gender}")
analysis_report = str(int(apparent_age)) + " " + gender
# -------------------------------
info_box_color = (46, 200, 255)
# top
if y - pivot_img_size + int(pivot_img_size / 5) > 0:
triangle_coordinates = np.array(
[
(x + int(w / 2), y),
(
x + int(w / 2) - int(w / 10),
y - int(pivot_img_size / 3),
),
(
x + int(w / 2) + int(w / 10),
y - int(pivot_img_size / 3),
),
]
)
cv2.drawContours(
freeze_img,
[triangle_coordinates],
0,
info_box_color,
-1,
)
cv2.rectangle(
freeze_img,
(
x + int(w / 5),
y - pivot_img_size + int(pivot_img_size / 5),
),
(x + w - int(w / 5), y - int(pivot_img_size / 3)),
info_box_color,
cv2.FILLED,
)
cv2.putText(
freeze_img,
analysis_report,
(x + int(w / 3.5), y - int(pivot_img_size / 2.1)),
cv2.FONT_HERSHEY_SIMPLEX,
1,
(0, 111, 255),
2,
)
# bottom
elif (
y + h + pivot_img_size - int(pivot_img_size / 5)
< resolution_y
):
triangle_coordinates = np.array(
[
(x + int(w / 2), y + h),
(
x + int(w / 2) - int(w / 10),
y + h + int(pivot_img_size / 3),
),
(
x + int(w / 2) + int(w / 10),
y + h + int(pivot_img_size / 3),
),
]
)
cv2.drawContours(
freeze_img,
[triangle_coordinates],
0,
info_box_color,
-1,
)
cv2.rectangle(
freeze_img,
(x + int(w / 5), y + h + int(pivot_img_size / 3)),
(
x + w - int(w / 5),
y + h + pivot_img_size - int(pivot_img_size / 5),
),
info_box_color,
cv2.FILLED,
)
cv2.putText(
freeze_img,
analysis_report,
(x + int(w / 3.5), y + h + int(pivot_img_size / 1.5)),
cv2.FONT_HERSHEY_SIMPLEX,
1,
(0, 111, 255),
2,
)
# --------------------------------
# face recognition
# call find function for custom_face
dfs = DeepFace.find(
img_path=custom_face,
db_path=db_path,
model_name=model_name,
detector_backend=detector_backend,
distance_metric=distance_metric,
enforce_detection=False,
silent=True,
)
if len(dfs) > 0:
# directly access 1st item because custom face is extracted already
df = dfs[0]
if df.shape[0] > 0:
candidate = df.iloc[0]
label = candidate["identity"]
# to use this source image as is
display_img = cv2.imread(label)
# to use extracted face
source_objs = DeepFace.extract_faces(
img_path=label,
target_size=(pivot_img_size, pivot_img_size),
detector_backend=detector_backend,
enforce_detection=False,
align=False,
)
if len(source_objs) > 0:
# extract 1st item directly
source_obj = source_objs[0]
display_img = source_obj["face"]
display_img *= 255
display_img = display_img[:, :, ::-1]
# --------------------
label = label.split("/")[-1]
try:
if (
y - pivot_img_size > 0
and x + w + pivot_img_size < resolution_x
):
# top right
freeze_img[
y - pivot_img_size : y,
x + w : x + w + pivot_img_size,
] = display_img
overlay = freeze_img.copy()
opacity = 0.4
cv2.rectangle(
freeze_img,
(x + w, y),
(x + w + pivot_img_size, y + 20),
(46, 200, 255),
cv2.FILLED,
)
cv2.addWeighted(
overlay,
opacity,
freeze_img,
1 - opacity,
0,
freeze_img,
)
cv2.putText(
freeze_img,
label,
(x + w, y + 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
text_color,
1,
)
# connect face and text
cv2.line(
freeze_img,
(x + int(w / 2), y),
(x + 3 * int(w / 4), y - int(pivot_img_size / 2)),
(67, 67, 67),
1,
)
cv2.line(
freeze_img,
(x + 3 * int(w / 4), y - int(pivot_img_size / 2)),
(x + w, y - int(pivot_img_size / 2)),
(67, 67, 67),
1,
)
elif (
y + h + pivot_img_size < resolution_y
and x - pivot_img_size > 0
):
# bottom left
freeze_img[
y + h : y + h + pivot_img_size,
x - pivot_img_size : x,
] = display_img
overlay = freeze_img.copy()
opacity = 0.4
cv2.rectangle(
freeze_img,
(x - pivot_img_size, y + h - 20),
(x, y + h),
(46, 200, 255),
cv2.FILLED,
)
cv2.addWeighted(
overlay,
opacity,
freeze_img,
1 - opacity,
0,
freeze_img,
)
cv2.putText(
freeze_img,
label,
(x - pivot_img_size, y + h - 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
text_color,
1,
)
# connect face and text
cv2.line(
freeze_img,
(x + int(w / 2), y + h),
(
x + int(w / 2) - int(w / 4),
y + h + int(pivot_img_size / 2),
),
(67, 67, 67),
1,
)
cv2.line(
freeze_img,
(
x + int(w / 2) - int(w / 4),
y + h + int(pivot_img_size / 2),
),
(x, y + h + int(pivot_img_size / 2)),
(67, 67, 67),
1,
)
elif y - pivot_img_size > 0 and x - pivot_img_size > 0:
# top left
freeze_img[
y - pivot_img_size : y, x - pivot_img_size : x
] = display_img
overlay = freeze_img.copy()
opacity = 0.4
cv2.rectangle(
freeze_img,
(x - pivot_img_size, y),
(x, y + 20),
(46, 200, 255),
cv2.FILLED,
)
cv2.addWeighted(
overlay,
opacity,
freeze_img,
1 - opacity,
0,
freeze_img,
)
cv2.putText(
freeze_img,
label,
(x - pivot_img_size, y + 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
text_color,
1,
)
# connect face and text
cv2.line(
freeze_img,
(x + int(w / 2), y),
(
x + int(w / 2) - int(w / 4),
y - int(pivot_img_size / 2),
),
(67, 67, 67),
1,
)
cv2.line(
freeze_img,
(
x + int(w / 2) - int(w / 4),
y - int(pivot_img_size / 2),
),
(x, y - int(pivot_img_size / 2)),
(67, 67, 67),
1,
)
elif (
x + w + pivot_img_size < resolution_x
and y + h + pivot_img_size < resolution_y
):
# bottom right
freeze_img[
y + h : y + h + pivot_img_size,
x + w : x + w + pivot_img_size,
] = display_img
overlay = freeze_img.copy()
opacity = 0.4
cv2.rectangle(
freeze_img,
(x + w, y + h - 20),
(x + w + pivot_img_size, y + h),
(46, 200, 255),
cv2.FILLED,
)
cv2.addWeighted(
overlay,
opacity,
freeze_img,
1 - opacity,
0,
freeze_img,
)
cv2.putText(
freeze_img,
label,
(x + w, y + h - 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
text_color,
1,
)
# connect face and text
cv2.line(
freeze_img,
(x + int(w / 2), y + h),
(
x + int(w / 2) + int(w / 4),
y + h + int(pivot_img_size / 2),
),
(67, 67, 67),
1,
)
cv2.line(
freeze_img,
(
x + int(w / 2) + int(w / 4),
y + h + int(pivot_img_size / 2),
),
(x + w, y + h + int(pivot_img_size / 2)),
(67, 67, 67),
1,
)
except Exception as err: # pylint: disable=broad-except
logger.error(str(err))
tic = time.time()  # in this way, the frozen image can be shown for time_threshold seconds
# -------------------------------
time_left = int(time_threshold - (toc - tic) + 1)
cv2.rectangle(freeze_img, (10, 10), (90, 50), (67, 67, 67), -10)
cv2.putText(
freeze_img,
str(time_left),
(40, 40),
cv2.FONT_HERSHEY_SIMPLEX,
1,
(255, 255, 255),
1,
)
cv2.imshow("img", freeze_img)
freezed_frame = freezed_frame + 1
else:
face_detected = False
face_included_frames = 0
freeze = False
freezed_frame = 0
else:
cv2.imshow("img", img)
if cv2.waitKey(1) & 0xFF == ord("q"): # press q to quit
break
# kill open cv things
cap.release()
cv2.destroyAllWindows()
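
A minimal launch sketch (illustrative: my_db is a placeholder folder of identity images, and a webcam must be available at source 0; press q to quit, as handled above):

    from deepface.commons import realtime

    realtime.analysis(db_path="my_db", model_name="VGG-Face", detector_backend="opencv")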

View File

@ -0,0 +1,346 @@
# built-in dependencies
import os
import pickle
from typing import List, Union
import time
# 3rd party dependencies
import numpy as np
import pandas as pd
from tqdm import tqdm
# project dependencies
from deepface.commons import functions, distance as dst
from deepface.commons.logger import Logger
from deepface.modules import representation
logger = Logger(module="deepface/modules/recognition.py")
def find(
img_path: Union[str, np.ndarray],
db_path: str,
model_name: str = "VGG-Face",
distance_metric: str = "cosine",
enforce_detection: bool = True,
detector_backend: str = "opencv",
align: bool = True,
normalization: str = "base",
silent: bool = False,
) -> List[pd.DataFrame]:
"""
This function applies verification several times and finds the identities in a database
Parameters:
img_path: exact image path, numpy array (BGR) or base64 encoded image.
The source image can contain many faces; the result will then contain one
data frame per detected face.
db_path (string): You should store some image files in a folder and pass the
exact folder path to this. A database image can also have many faces.
Then, all detected faces in db side will be considered in the decision.
model_name (string): VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID,
Dlib, ArcFace, SFace or Ensemble
distance_metric (string): cosine, euclidean, euclidean_l2
enforce_detection (bool): The function throws exception if a face could not be detected.
Set this to False if you don't want to get exception. This might be convenient for low
resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8.
align (boolean): alignment according to the eye positions.
normalization (string): normalize the input image before feeding to model
silent (boolean): disable some logging and progress bars
Returns:
This function returns a list of pandas data frames. Each item of the list
corresponds to one face appearing in the img_path image.
"""
tic = time.time()
# -------------------------------
if not os.path.isdir(db_path):
raise ValueError("Passed db_path does not exist!")
target_size = functions.find_target_size(model_name=model_name)
# ---------------------------------------
file_name = f"representations_{model_name}.pkl"
file_name = file_name.replace("-", "_").lower()
datastore_path = f"{db_path}/{file_name}"
df_cols = [
"identity",
f"{model_name}_representation",
"target_x",
"target_y",
"target_w",
"target_h",
]
if os.path.exists(datastore_path):
with open(datastore_path, "rb") as f:
representations = pickle.load(f)
if len(representations) > 0 and len(representations[0]) != len(df_cols):
raise ValueError(
f"Seems existing {datastore_path} is out-of-the-date."
"Please delete it and re-run."
)
alpha_employees = __list_images(path=db_path)
beta_employees = [representation[0] for representation in representations]
newbies = list(set(alpha_employees) - set(beta_employees))
oldies = list(set(beta_employees) - set(alpha_employees))
if newbies:
logger.warn(
f"Items {newbies} were added into {db_path}"
f" just after data source {datastore_path} created!"
)
newbies_representations = __find_bulk_embeddings(
employees=newbies,
model_name=model_name,
target_size=target_size,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
normalization=normalization,
silent=silent,
)
representations = representations + newbies_representations
if oldies:
logger.warn(
f"Items {oldies} were dropped from {db_path}"
f" just after data source {datastore_path} created!"
)
representations = [rep for rep in representations if rep[0] not in oldies]
if newbies or oldies:
if len(representations) == 0:
raise ValueError(f"There is no image in {db_path} anymore!")
# save new representations
with open(datastore_path, "wb") as f:
pickle.dump(representations, f)
if not silent:
logger.info(
f"{len(newbies)} new representations are just added"
f" whereas {len(oldies)} represented one(s) are just dropped"
f" in {db_path}/{file_name} file."
)
if not silent:
logger.info(f"There are {len(representations)} representations found in {file_name}")
else: # create representation.pkl from scratch
employees = __list_images(path=db_path)
if len(employees) == 0:
raise ValueError(
f"There is no image in {db_path} folder!"
"Validate .jpg, .jpeg or .png files exist in this path.",
)
# ------------------------
# find representations for db images
representations = __find_bulk_embeddings(
employees=employees,
model_name=model_name,
target_size=target_size,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
normalization=normalization,
silent=silent,
)
# -------------------------------
with open(datastore_path, "wb") as f:
pickle.dump(representations, f)
if not silent:
logger.info(f"Representations stored in {db_path}/{file_name} file.")
# ----------------------------
# now, we got representations for facial database
df = pd.DataFrame(
representations,
columns=df_cols,
)
# img path might have more than one face
source_objs = functions.extract_faces(
img=img_path,
target_size=target_size,
detector_backend=detector_backend,
grayscale=False,
enforce_detection=enforce_detection,
align=align,
)
resp_obj = []
for source_img, source_region, _ in source_objs:
target_embedding_obj = representation.represent(
img_path=source_img,
model_name=model_name,
enforce_detection=enforce_detection,
detector_backend="skip",
align=align,
normalization=normalization,
)
target_representation = target_embedding_obj[0]["embedding"]
result_df = df.copy() # df will be filtered in each img
result_df["source_x"] = source_region["x"]
result_df["source_y"] = source_region["y"]
result_df["source_w"] = source_region["w"]
result_df["source_h"] = source_region["h"]
distances = []
for _, instance in df.iterrows():
source_representation = instance[f"{model_name}_representation"]
target_dims = len(list(target_representation))
source_dims = len(list(source_representation))
if target_dims != source_dims:
raise ValueError(
"Source and target embeddings must have same dimensions but "
+ f"{target_dims}:{source_dims}. Model structure may change"
+ " after pickle created. Delete the {file_name} and re-run."
)
if distance_metric == "cosine":
distance = dst.findCosineDistance(source_representation, target_representation)
elif distance_metric == "euclidean":
distance = dst.findEuclideanDistance(source_representation, target_representation)
elif distance_metric == "euclidean_l2":
distance = dst.findEuclideanDistance(
dst.l2_normalize(source_representation),
dst.l2_normalize(target_representation),
)
else:
raise ValueError(f"invalid distance metric passes - {distance_metric}")
distances.append(distance)
# ---------------------------
result_df[f"{model_name}_{distance_metric}"] = distances
threshold = dst.findThreshold(model_name, distance_metric)
result_df = result_df.drop(columns=[f"{model_name}_representation"])
# pylint: disable=unsubscriptable-object
result_df = result_df[result_df[f"{model_name}_{distance_metric}"] <= threshold]
result_df = result_df.sort_values(
by=[f"{model_name}_{distance_metric}"], ascending=True
).reset_index(drop=True)
resp_obj.append(result_df)
# -----------------------------------
toc = time.time()
if not silent:
logger.info(f"find function lasts {toc - tic} seconds")
return resp_obj
def __list_images(path: str) -> list:
"""
List images in a given path
Args:
path (str): path's location
Returns:
images (list): list of exact image paths
"""
images = []
for r, _, f in os.walk(path):
for file in f:
if file.lower().endswith((".jpg", ".jpeg", ".png")):
exact_path = f"{r}/{file}"
images.append(exact_path)
return images
def __find_bulk_embeddings(
employees: List[str],
model_name: str = "VGG-Face",
target_size: tuple = (224, 224),
detector_backend: str = "opencv",
enforce_detection: bool = True,
align: bool = True,
normalization: str = "base",
silent: bool = False,
):
"""
Find embeddings of a list of images
Args:
employees (list): list of exact image paths
model_name (str): facial recognition model name
target_size (tuple): expected input shape of facial
recognition model
detector_backend (str): face detector model name
enforce_detection (bool): set this to False if you
want to proceed when you cannot detect any face
align (bool): enable or disable alignment of image
before feeding to facial recognition model
normalization (str): normalization technique
silent (bool): enable or disable informative logging
Returns:
representations (list): pivot list of embeddings with
image name and detected face area's coordinates
"""
representations = []
for employee in tqdm(
employees,
desc="Finding representations",
disable=silent,
):
img_objs = functions.extract_faces(
img=employee,
target_size=target_size,
detector_backend=detector_backend,
grayscale=False,
enforce_detection=enforce_detection,
align=align,
)
for img_content, img_region, _ in img_objs:
embedding_obj = representation.represent(
img_path=img_content,
model_name=model_name,
enforce_detection=enforce_detection,
detector_backend="skip",
align=align,
normalization=normalization,
)
img_representation = embedding_obj[0]["embedding"]
instance = []
instance.append(employee)
instance.append(img_representation)
instance.append(img_region["x"])
instance.append(img_region["y"])
instance.append(img_region["w"])
instance.append(img_region["h"])
representations.append(instance)
return representations
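
A minimal usage sketch through the public DeepFace API (illustrative: img.jpg and the my_db folder are placeholders; the first call builds the representations pickle described above):

    from deepface import DeepFace

    dfs = DeepFace.find(img_path="img.jpg", db_path="my_db", model_name="VGG-Face")
    for df in dfs:  # one data frame per face found in img.jpg
        print(df[["identity", "VGG-Face_cosine"]].head())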

View File

@ -0,0 +1,129 @@
# built-in dependencies
from typing import Any, Dict, List, Union
# 3rd party dependencies
import numpy as np
import cv2
import tensorflow as tf
# project dependencies
from deepface.modules import modeling
from deepface.commons import functions
# conditional dependencies
tf_version = int(tf.__version__.split(".", maxsplit=1)[0])
if tf_version == 2:
from tensorflow.keras.models import Model
else:
from keras.models import Model
def represent(
img_path: Union[str, np.ndarray],
model_name: str = "VGG-Face",
enforce_detection: bool = True,
detector_backend: str = "opencv",
align: bool = True,
normalization: str = "base",
) -> List[Dict[str, Any]]:
"""
This function represents facial images as vectors. The function uses convolutional neural
networks models to generate vector embeddings.
Parameters:
img_path (string): exact image path. Alternatively, numpy array (BGR) or base64
encoded images could be passed. The source image can contain many faces; the result
will then contain one entry per detected face.
model_name (string): VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID, Dlib,
ArcFace, SFace
enforce_detection (boolean): If no face can be detected in an image, this function
throws an exception by default. Set this to False to avoid the exception.
This might be convenient for low resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8. A special value `skip` could be used to skip face-detection
and only encode the given image.
align (boolean): alignment according to the eye positions.
normalization (string): normalize the input image before feeding to model
Returns:
Represent function returns a list of objects; each object has fields as follows:
{
// Multidimensional vector
// The number of dimensions is changing based on the reference model.
// E.g. FaceNet returns 128 dimensional vector;
// VGG-Face returns 2622 dimensional vector.
"embedding": np.array,
// Detected Facial-Area by Face detection in dict format.
// (x, y) is the top-left corner point, and (w, h) is the width and height
// If `detector_backend` == `skip`, it is the full image area and not meaningful.
"facial_area": dict{"x": int, "y": int, "w": int, "h": int},
// Face detection confidence.
// If `detector_backend` == `skip`, it will be 0 and not meaningful.
"face_confidence": float
}
"""
resp_objs = []
model = modeling.build_model(model_name)
# ---------------------------------
# we have run pre-process in verification. so, this can be skipped if it is coming from verify.
target_size = functions.find_target_size(model_name=model_name)
if detector_backend != "skip":
img_objs = functions.extract_faces(
img=img_path,
target_size=target_size,
detector_backend=detector_backend,
grayscale=False,
enforce_detection=enforce_detection,
align=align,
)
else: # skip
# Try to load the image; loading errors raise an exception internally
img, _ = functions.load_image(img_path)
# --------------------------------
if len(img.shape) == 4:
img = img[0] # e.g. (1, 224, 224, 3) to (224, 224, 3)
if len(img.shape) == 3:
img = cv2.resize(img, target_size)
img = np.expand_dims(img, axis=0)
# when called from verify, this is already normalized; but it is needed when the user gives a raw image.
if img.max() > 1:
img = img.astype(np.float32) / 255.0
# --------------------------------
# make dummy region and confidence to keep compatibility with `extract_faces`
img_region = {"x": 0, "y": 0, "w": img.shape[1], "h": img.shape[2]}
img_objs = [(img, img_region, 0)]
# ---------------------------------
for img, region, confidence in img_objs:
# custom normalization
img = functions.normalize_input(img=img, normalization=normalization)
# represent
# if "keras" in str(type(model)):
if isinstance(model, Model):
# model.predict causes memory issue when it is called in a for loop
# embedding = model.predict(img, verbose=0)[0].tolist()
embedding = model(img, training=False).numpy()[0].tolist()
# if you still get verbose logging. try call
# - `tf.keras.utils.disable_interactive_logging()`
# in your main program
else:
# SFace and Dlib are not keras models and no verbose arguments
embedding = model.predict(img)[0].tolist()
resp_obj = {}
resp_obj["embedding"] = embedding
resp_obj["facial_area"] = region
resp_obj["face_confidence"] = confidence
resp_objs.append(resp_obj)
return resp_objs
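
A minimal usage sketch, assuming DeepFace.represent wraps this module like the other public API calls here (the image path is a placeholder):

    from deepface import DeepFace

    embedding_objs = DeepFace.represent(img_path="img.jpg", model_name="VGG-Face")
    print(len(embedding_objs[0]["embedding"]))  # 2622 dimensions for VGG-Face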

View File

@ -0,0 +1,151 @@
# built-in dependencies
import time
from typing import Any, Dict, Union
# 3rd party dependencies
import numpy as np
# project dependencies
from deepface.commons import functions, distance as dst
from deepface.modules import representation
def verify(
img1_path: Union[str, np.ndarray],
img2_path: Union[str, np.ndarray],
model_name: str = "VGG-Face",
detector_backend: str = "opencv",
distance_metric: str = "cosine",
enforce_detection: bool = True,
align: bool = True,
normalization: str = "base",
) -> Dict[str, Any]:
"""
This function verifies whether an image pair represents the same person or different persons. In the background,
verification function represents facial images as vectors and then calculates the similarity
between those vectors. Vectors of same person images should have more similarity (or less
distance) than vectors of different persons.
Parameters:
img1_path, img2_path: exact image path as string. numpy array (BGR) or base64 encoded
images are also welcome. If one of the pair has more than one face, then the
face pair with maximum similarity will be compared.
model_name (str): VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID, Dlib
, ArcFace and SFace
distance_metric (string): cosine, euclidean, euclidean_l2
enforce_detection (boolean): If no face can be detected in an image, this function
throws an exception by default. Set this to False to avoid the exception.
This might be convenient for low resolution images.
detector_backend (string): set face detector backend to opencv, retinaface, mtcnn, ssd,
dlib, mediapipe or yolov8.
align (boolean): alignment according to the eye positions.
normalization (string): normalize the input image before feeding to model
Returns:
Verify function returns a dictionary.
{
"verified": True
, "distance": 0.2563
, "max_threshold_to_verify": 0.40
, "model": "VGG-Face"
, "similarity_metric": "cosine"
, 'facial_areas': {
'img1': {'x': 345, 'y': 211, 'w': 769, 'h': 769},
'img2': {'x': 318, 'y': 534, 'w': 779, 'h': 779}
}
, "time": 2
}
"""
tic = time.time()
# --------------------------------
target_size = functions.find_target_size(model_name=model_name)
# img pairs might have many faces
img1_objs = functions.extract_faces(
img=img1_path,
target_size=target_size,
detector_backend=detector_backend,
grayscale=False,
enforce_detection=enforce_detection,
align=align,
)
img2_objs = functions.extract_faces(
img=img2_path,
target_size=target_size,
detector_backend=detector_backend,
grayscale=False,
enforce_detection=enforce_detection,
align=align,
)
# --------------------------------
distances = []
regions = []
# now we will find the face pair with minimum distance
for img1_content, img1_region, _ in img1_objs:
for img2_content, img2_region, _ in img2_objs:
img1_embedding_obj = representation.represent(
img_path=img1_content,
model_name=model_name,
enforce_detection=enforce_detection,
detector_backend="skip",
align=align,
normalization=normalization,
)
img2_embedding_obj = representation.represent(
img_path=img2_content,
model_name=model_name,
enforce_detection=enforce_detection,
detector_backend="skip",
align=align,
normalization=normalization,
)
img1_representation = img1_embedding_obj[0]["embedding"]
img2_representation = img2_embedding_obj[0]["embedding"]
if distance_metric == "cosine":
distance = dst.findCosineDistance(img1_representation, img2_representation)
elif distance_metric == "euclidean":
distance = dst.findEuclideanDistance(img1_representation, img2_representation)
elif distance_metric == "euclidean_l2":
distance = dst.findEuclideanDistance(
dst.l2_normalize(img1_representation), dst.l2_normalize(img2_representation)
)
else:
raise ValueError("Invalid distance_metric passed - ", distance_metric)
distances.append(distance)
regions.append((img1_region, img2_region))
# -------------------------------
threshold = dst.findThreshold(model_name, distance_metric)
distance = min(distances) # best distance
facial_areas = regions[np.argmin(distances)]
toc = time.time()
resp_obj = {
"verified": distance <= threshold,
"distance": distance,
"threshold": threshold,
"model": model_name,
"detector_backend": detector_backend,
"similarity_metric": distance_metric,
"facial_areas": {"img1": facial_areas[0], "img2": facial_areas[1]},
"time": round(toc - tic, 2),
}
return resp_obj
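
A minimal usage sketch through the public DeepFace API (illustrative; both image paths are placeholders):

    from deepface import DeepFace

    result = DeepFace.verify(img1_path="img1.jpg", img2_path="img2.jpg", model_name="VGG-Face")
    print(result["verified"], result["distance"], result["threshold"])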

Binary image files not shown (19 images added, ranging from 4.2 KiB to 736 KiB).

View File

@ -0,0 +1,14 @@
numpy>=1.14.0
pandas>=0.23.4
gdown>=3.10.1
tqdm>=4.30.0
Pillow>=5.2.0
opencv-python>=4.5.5.64
tensorflow>=1.9.0
keras>=2.2.0
Flask>=1.1.2
mtcnn>=0.1.0
retina-face>=0.0.1
fire>=0.4.0
gunicorn>=20.1.0
Deprecated>=1.2.13

View File

@ -0,0 +1,5 @@
opencv-contrib-python>=4.3.0.36
mediapipe>=0.8.7.3
dlib>=19.20.0
ultralytics>=8.0.122
facenet-pytorch>=2.5.3

View File

@ -0,0 +1,27 @@
# Dockerfile is in the root
cd ..
# start docker
# sudo service docker start
# list current docker packages
# docker container ls -a
# delete existing deepface packages
# docker rm -f $(docker ps -a -q --filter "ancestor=deepface")
# build deepface image
docker build -t deepface .
# copy weights from your local
# docker cp ~/.deepface/weights/. <CONTAINER_ID>:/root/.deepface/weights/
# run image
docker run --net="host" deepface
# to access the inside of docker image when it is in running status
# docker exec -it <CONTAINER_ID> /bin/sh
# healthcheck
# sleep 3s
# curl localhost:5000

View File

@ -0,0 +1,11 @@
cd ..
echo "deleting existing release related files"
rm -rf dist/*
rm -rf build/*
echo "creating a package for current release - pypi compatible"
python setup.py sdist bdist_wheel
echo "pushing the release to pypi"
python -m twine upload dist/*

View File

@ -0,0 +1,3 @@
#!/usr/bin/env bash
cd ../api
gunicorn --workers=1 --timeout=3600 --bind=0.0.0.0:5000 "app:create_app()"

View File

@ -0,0 +1,30 @@
import setuptools

with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

with open("requirements.txt", "r", encoding="utf-8") as f:
    # drop blank lines so install_requires receives no empty entries
    requirements = [line.strip() for line in f.read().splitlines() if line.strip()]

setuptools.setup(
    name="deepface",
    version="0.0.82",
    author="Sefik Ilkin Serengil",
    author_email="serengil@gmail.com",
    description="A Lightweight Face Recognition and Facial Attribute Analysis Framework (Age, Gender, Emotion, Race) for Python",
    data_files=[("", ["README.md", "requirements.txt"])],
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/serengil/deepface",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    entry_points={
        "console_scripts": ["deepface = deepface.DeepFace:cli"],
    },
    python_requires=">=3.5.5",
    install_requires=requirements,
)
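The entry_points block above exposes a `deepface` console script backed by `deepface.DeepFace:cli`. Since the package depends on fire, subcommands presumably mirror the public function names; the invocation below is an assumption, not documented behavior, and the paths are placeholders:

```python
import subprocess

# exercise the console script installed by the entry_points block above;
# "verify" and its flags are assumed to be generated by python-fire from
# the DeepFace module's public functions
subprocess.run(
    ["deepface", "verify", "--img1_path", "img1.jpg", "--img2_path", "img2.jpg"],
    check=True,
)
```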

1 binary file added (contents not shown; 923 KiB).

View File

@ -0,0 +1,281 @@
file_x,file_y,decision,VGG-Face_cosine,VGG-Face_euclidean,VGG-Face_euclidean_l2,Facenet_cosine,Facenet_euclidean,Facenet_euclidean_l2,OpenFace_cosine,OpenFace_euclidean_l2,DeepFace_cosine,DeepFace_euclidean,DeepFace_euclidean_l2
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img39.jpg,Yes,0.2057,0.389,0.6414,0.1601,6.8679,0.5658,0.5925,1.0886,0.2554,61.3336,0.7147
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img40.jpg,Yes,0.2117,0.3179,0.6508,0.2739,8.9049,0.7402,0.396,0.8899,0.2685,63.3747,0.7328
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img41.jpg,Yes,0.1073,0.2482,0.4632,0.1257,6.1593,0.5014,0.7157,1.1964,0.2452,60.3454,0.7002
deepface/tests/dataset/img39.jpg,deepface/tests/dataset/img40.jpg,Yes,0.2991,0.4567,0.7734,0.3134,9.3798,0.7917,0.4941,0.9941,0.1703,45.1688,0.5836
deepface/tests/dataset/img39.jpg,deepface/tests/dataset/img41.jpg,Yes,0.1666,0.3542,0.5772,0.1502,6.6491,0.5481,0.2381,0.6901,0.2194,50.4356,0.6624
deepface/tests/dataset/img40.jpg,deepface/tests/dataset/img41.jpg,Yes,0.1706,0.3066,0.5841,0.2017,7.6423,0.6352,0.567,1.0649,0.2423,54.2499,0.6961
deepface/tests/dataset/img3.jpg,deepface/tests/dataset/img12.jpg,Yes,0.2533,0.5199,0.7118,0.4062,11.2632,0.9014,0.1908,0.6178,0.2337,58.8794,0.6837
deepface/tests/dataset/img3.jpg,deepface/tests/dataset/img53.jpg,Yes,0.1655,0.3567,0.5754,0.184,7.5388,0.6066,0.1465,0.5412,0.243,55.2642,0.6971
deepface/tests/dataset/img3.jpg,deepface/tests/dataset/img54.jpg,Yes,0.1982,0.4739,0.6297,0.406,11.0618,0.9011,0.1132,0.4758,0.1824,49.7875,0.6041
deepface/tests/dataset/img3.jpg,deepface/tests/dataset/img55.jpg,Yes,0.1835,0.3742,0.6057,0.1366,6.4168,0.5227,0.1755,0.5924,0.1697,55.179,0.5825
deepface/tests/dataset/img3.jpg,deepface/tests/dataset/img56.jpg,Yes,0.1652,0.4005,0.5748,0.1833,7.3432,0.6054,0.1803,0.6005,0.2061,59.007,0.642
deepface/tests/dataset/img12.jpg,deepface/tests/dataset/img53.jpg,Yes,0.372,0.6049,0.8626,0.3933,11.1382,0.8869,0.1068,0.4621,0.1633,48.5516,0.5715
deepface/tests/dataset/img12.jpg,deepface/tests/dataset/img54.jpg,Yes,0.2153,0.5145,0.6561,0.2694,9.1155,0.734,0.1943,0.6234,0.1881,52.7146,0.6133
deepface/tests/dataset/img12.jpg,deepface/tests/dataset/img55.jpg,Yes,0.3551,0.5941,0.8428,0.4726,12.0647,0.9722,0.1054,0.4591,0.1265,48.2432,0.5029
deepface/tests/dataset/img12.jpg,deepface/tests/dataset/img56.jpg,Yes,0.2826,0.565,0.7518,0.4761,11.9569,0.9758,0.1364,0.5224,0.1908,57.6735,0.6177
deepface/tests/dataset/img53.jpg,deepface/tests/dataset/img54.jpg,Yes,0.3363,0.593,0.8202,0.4627,11.8744,0.962,0.1964,0.6267,0.174,46.6212,0.5898
deepface/tests/dataset/img53.jpg,deepface/tests/dataset/img55.jpg,Yes,0.187,0.3313,0.6116,0.1625,7.0394,0.5701,0.1312,0.5123,0.1439,52.3132,0.5365
deepface/tests/dataset/img53.jpg,deepface/tests/dataset/img56.jpg,Yes,0.1385,0.3776,0.5263,0.141,6.4913,0.5311,0.1285,0.507,0.2005,58.0586,0.6332
deepface/tests/dataset/img54.jpg,deepface/tests/dataset/img55.jpg,Yes,0.3124,0.5756,0.7905,0.4033,10.944,0.8981,0.1738,0.5896,0.1351,49.8255,0.5198
deepface/tests/dataset/img54.jpg,deepface/tests/dataset/img56.jpg,Yes,0.2571,0.5473,0.717,0.3912,10.6329,0.8846,0.1802,0.6002,0.1648,53.0881,0.574
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img56.jpg,Yes,0.2217,0.4543,0.6658,0.1433,6.4387,0.5353,0.1677,0.5792,0.1505,53.6812,0.5486
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img2.jpg,Yes,0.2342,0.5033,0.6844,0.2508,8.2369,0.7082,0.0844,0.4109,0.2417,64.2748,0.6952
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img4.jpg,Yes,0.2051,0.3916,0.6405,0.2766,8.7946,0.7437,0.1662,0.5766,0.2292,64.7785,0.6771
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img5.jpg,Yes,0.2963,0.3948,0.7699,0.2696,8.4689,0.7343,0.0965,0.4393,0.2306,71.6647,0.679
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img6.jpg,Yes,0.254,0.4464,0.7128,0.2164,7.7171,0.6579,0.0691,0.3718,0.2365,64.7594,0.6877
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img7.jpg,Yes,0.3104,0.4764,0.7879,0.2112,7.5718,0.65,0.1027,0.4531,0.2385,61.371,0.6906
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img10.jpg,Yes,0.3363,0.5448,0.8202,0.2129,7.6484,0.6525,0.0661,0.3635,0.2472,65.0668,0.7031
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img11.jpg,Yes,0.3083,0.5416,0.7852,0.2042,7.6195,0.639,0.1626,0.5703,0.2001,61.3824,0.6326
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img4.jpg,Yes,0.1397,0.3961,0.5285,0.1957,7.351,0.6256,0.2497,0.7066,0.1349,51.5853,0.5194
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img5.jpg,Yes,0.1995,0.482,0.6317,0.1574,6.4195,0.561,0.1333,0.5164,0.1583,60.6365,0.5627
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img6.jpg,Yes,0.0908,0.3251,0.4261,0.0787,4.625,0.3969,0.0632,0.3556,0.0756,38.218,0.3888
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img7.jpg,Yes,0.2,0.4664,0.6325,0.1642,6.6261,0.5731,0.1049,0.4581,0.098,42.1113,0.4428
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img10.jpg,Yes,0.2077,0.4862,0.6444,0.1593,6.5693,0.5644,0.0589,0.3431,0.1118,45.9168,0.4729
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img11.jpg,Yes,0.2349,0.5235,0.6854,0.1869,7.2485,0.6114,0.1029,0.4536,0.1548,55.617,0.5564
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img5.jpg,Yes,0.1991,0.3869,0.6311,0.1199,5.7256,0.4898,0.2891,0.7604,0.1797,64.7925,0.5995
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img6.jpg,Yes,0.1937,0.4095,0.6224,0.1772,7.0495,0.5954,0.2199,0.6632,0.1788,59.9202,0.598
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img7.jpg,Yes,0.245,0.4526,0.7,0.1663,6.7868,0.5767,0.3435,0.8289,0.1971,61.177,0.6279
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img10.jpg,Yes,0.1882,0.4274,0.6136,0.1304,6.0445,0.5107,0.2052,0.6406,0.1239,49.4937,0.4979
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img11.jpg,Yes,0.2569,0.5093,0.7168,0.1909,7.4277,0.618,0.2874,0.7582,0.1737,59.8839,0.5894
deepface/tests/dataset/img5.jpg,deepface/tests/dataset/img6.jpg,Yes,0.1858,0.3915,0.6095,0.1818,6.967,0.6029,0.13,0.5099,0.1742,63.6179,0.5903
deepface/tests/dataset/img5.jpg,deepface/tests/dataset/img7.jpg,Yes,0.2639,0.4391,0.7264,0.1754,6.7894,0.5923,0.1174,0.4846,0.1523,59.6056,0.5519
deepface/tests/dataset/img5.jpg,deepface/tests/dataset/img10.jpg,Yes,0.2013,0.4449,0.6344,0.1143,5.525,0.478,0.1228,0.4957,0.1942,66.7805,0.6232
deepface/tests/dataset/img5.jpg,deepface/tests/dataset/img11.jpg,Yes,0.3348,0.5599,0.8183,0.1975,7.4008,0.6285,0.2071,0.6436,0.1692,63.0817,0.5818
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img7.jpg,Yes,0.192,0.4085,0.6196,0.1275,5.892,0.505,0.1004,0.4482,0.094,42.0465,0.4335
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img10.jpg,Yes,0.214,0.4593,0.6542,0.1237,5.8374,0.4974,0.0517,0.3216,0.11,46.1197,0.4691
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img11.jpg,Yes,0.2755,0.5319,0.7423,0.1772,7.1072,0.5953,0.1383,0.526,0.1771,59.9849,0.5951
deepface/tests/dataset/img7.jpg,deepface/tests/dataset/img10.jpg,Yes,0.3425,0.5729,0.8276,0.1708,6.8133,0.5845,0.0956,0.4374,0.1552,52.8909,0.5571
deepface/tests/dataset/img7.jpg,deepface/tests/dataset/img11.jpg,Yes,0.2912,0.5417,0.7632,0.2449,8.3025,0.6998,0.148,0.544,0.1894,60.469,0.6154
deepface/tests/dataset/img10.jpg,deepface/tests/dataset/img11.jpg,Yes,0.2535,0.5258,0.712,0.1371,6.2509,0.5237,0.0609,0.349,0.1851,60.8244,0.6085
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img19.jpg,Yes,0.1043,0.3254,0.4567,0.1248,6.2382,0.4996,0.2563,0.7159,0.1712,60.1675,0.5851
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img67.jpg,Yes,0.2197,0.4691,0.6629,0.2387,8.7124,0.6909,0.3072,0.7838,0.1839,58.9528,0.6065
deepface/tests/dataset/img19.jpg,deepface/tests/dataset/img67.jpg,Yes,0.1466,0.3965,0.5416,0.1321,6.5557,0.514,0.1504,0.5485,0.1517,55.8044,0.5508
deepface/tests/dataset/img20.jpg,deepface/tests/dataset/img21.jpg,Yes,0.0641,0.2068,0.3581,0.1052,5.4253,0.4586,0.1118,0.4729,0.2209,58.7235,0.6646
deepface/tests/dataset/img34.jpg,deepface/tests/dataset/img35.jpg,Yes,0.0959,0.2628,0.4381,0.2538,8.7003,0.7124,0.3727,0.8634,0.3244,78.4397,0.8055
deepface/tests/dataset/img34.jpg,deepface/tests/dataset/img36.jpg,Yes,0.1553,0.2918,0.5573,0.1861,7.5793,0.6101,0.399,0.8933,0.2923,61.625,0.7646
deepface/tests/dataset/img34.jpg,deepface/tests/dataset/img37.jpg,Yes,0.104,0.2651,0.4562,0.1192,6.0818,0.4882,0.4158,0.912,0.2853,62.1217,0.7554
deepface/tests/dataset/img35.jpg,deepface/tests/dataset/img36.jpg,Yes,0.2322,0.3945,0.6814,0.2049,7.6366,0.6401,0.38,0.8717,0.2991,74.4219,0.7735
deepface/tests/dataset/img35.jpg,deepface/tests/dataset/img37.jpg,Yes,0.1684,0.3516,0.5804,0.186,7.2991,0.6099,0.1662,0.5766,0.164,58.1125,0.5727
deepface/tests/dataset/img36.jpg,deepface/tests/dataset/img37.jpg,Yes,0.1084,0.2715,0.4655,0.1338,6.3075,0.5173,0.2909,0.7627,0.2687,54.7311,0.7331
deepface/tests/dataset/img22.jpg,deepface/tests/dataset/img23.jpg,Yes,0.3637,0.4569,0.8528,0.3501,9.9752,0.8368,0.1651,0.5746,0.1649,42.2178,0.5742
deepface/tests/dataset/img13.jpg,deepface/tests/dataset/img14.jpg,Yes,0.086,0.3384,0.4148,0.1104,5.3711,0.47,0.0952,0.4363,0.2043,61.8532,0.6392
deepface/tests/dataset/img13.jpg,deepface/tests/dataset/img15.jpg,Yes,0.1879,0.5589,0.6131,0.2317,7.9283,0.6808,0.3202,0.8003,0.3665,81.975,0.8562
deepface/tests/dataset/img13.jpg,deepface/tests/dataset/img57.jpg,Yes,0.1204,0.3952,0.4907,0.1897,7.1445,0.616,0.4599,0.9591,0.3266,82.6217,0.8082
deepface/tests/dataset/img13.jpg,deepface/tests/dataset/img58.jpg,Yes,0.1748,0.524,0.5913,0.2264,7.7484,0.6729,0.5006,1.0006,0.3476,75.6494,0.8338
deepface/tests/dataset/img14.jpg,deepface/tests/dataset/img15.jpg,Yes,0.1969,0.571,0.6275,0.2322,7.8197,0.6815,0.3409,0.8257,0.4076,89.3521,0.9029
deepface/tests/dataset/img14.jpg,deepface/tests/dataset/img57.jpg,Yes,0.1815,0.4206,0.6025,0.128,5.7838,0.5059,0.4251,0.9221,0.3284,84.7328,0.8105
deepface/tests/dataset/img14.jpg,deepface/tests/dataset/img58.jpg,Yes,0.2071,0.5609,0.6436,0.2125,7.384,0.6519,0.4993,0.9993,0.3848,83.0627,0.8772
deepface/tests/dataset/img15.jpg,deepface/tests/dataset/img57.jpg,Yes,0.198,0.5753,0.6293,0.2073,7.5025,0.6439,0.3957,0.8896,0.3881,91.551,0.881
deepface/tests/dataset/img15.jpg,deepface/tests/dataset/img58.jpg,Yes,0.1109,0.4424,0.4709,0.1106,5.4445,0.4702,0.2815,0.7503,0.4153,85.5012,0.9114
deepface/tests/dataset/img57.jpg,deepface/tests/dataset/img58.jpg,Yes,0.1581,0.5045,0.5624,0.1452,6.2094,0.5389,0.213,0.6528,0.2184,67.7741,0.6609
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img30.jpg,Yes,0.142,0.28,0.5329,0.1759,7.1649,0.5931,0.3237,0.8046,0.272,59.7856,0.7375
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img31.jpg,Yes,0.1525,0.2777,0.5523,0.1588,6.8613,0.5636,0.5027,1.0027,0.2,49.2171,0.6324
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img32.jpg,Yes,0.1807,0.481,0.6011,0.1997,7.8571,0.632,0.4602,0.9594,0.3084,60.7837,0.7854
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img33.jpg,Yes,0.1757,0.3177,0.5927,0.2406,8.3798,0.6937,0.3446,0.8302,0.1679,47.9061,0.5795
deepface/tests/dataset/img30.jpg,deepface/tests/dataset/img31.jpg,Yes,0.1141,0.2453,0.4776,0.1654,6.8805,0.5751,0.3189,0.7986,0.1897,51.344,0.6159
deepface/tests/dataset/img30.jpg,deepface/tests/dataset/img32.jpg,Yes,0.1567,0.4575,0.5597,0.1757,7.2731,0.5929,0.1712,0.5851,0.242,57.849,0.6957
deepface/tests/dataset/img30.jpg,deepface/tests/dataset/img33.jpg,Yes,0.1548,0.2997,0.5565,0.2074,7.6356,0.644,0.1744,0.5906,0.2601,61.9643,0.7213
deepface/tests/dataset/img31.jpg,deepface/tests/dataset/img32.jpg,Yes,0.1402,0.4725,0.5295,0.1009,5.5583,0.4493,0.2098,0.6478,0.2023,51.0814,0.6361
deepface/tests/dataset/img31.jpg,deepface/tests/dataset/img33.jpg,Yes,0.0895,0.2296,0.4232,0.1873,7.3261,0.6121,0.1871,0.6118,0.229,56.6939,0.6768
deepface/tests/dataset/img32.jpg,deepface/tests/dataset/img33.jpg,Yes,0.2035,0.4953,0.638,0.2415,8.5176,0.6949,0.2426,0.6965,0.2768,62.1742,0.744
deepface/tests/dataset/img8.jpg,deepface/tests/dataset/img9.jpg,Yes,0.3147,0.45,0.7933,0.1976,7.3714,0.6287,0.0997,0.4466,0.1695,48.8942,0.5822
deepface/tests/dataset/img8.jpg,deepface/tests/dataset/img47.jpg,Yes,0.3638,0.4564,0.853,0.1976,7.2952,0.6287,0.0931,0.4314,0.1869,54.8324,0.6114
deepface/tests/dataset/img8.jpg,deepface/tests/dataset/img48.jpg,Yes,0.3068,0.442,0.7834,0.2593,8.2334,0.7201,0.1319,0.5136,0.2194,55.6994,0.6624
deepface/tests/dataset/img8.jpg,deepface/tests/dataset/img49.jpg,Yes,0.2353,0.4246,0.686,0.1797,6.8592,0.5996,0.1472,0.5426,0.1904,57.1813,0.617
deepface/tests/dataset/img8.jpg,deepface/tests/dataset/img50.jpg,Yes,0.3583,0.5144,0.8465,0.24,8.2435,0.6928,0.132,0.5138,0.138,40.4616,0.5253
deepface/tests/dataset/img8.jpg,deepface/tests/dataset/img51.jpg,Yes,0.3446,0.4498,0.8301,0.1666,6.7177,0.5772,0.1413,0.5317,0.1656,46.6621,0.5756
deepface/tests/dataset/img9.jpg,deepface/tests/dataset/img47.jpg,Yes,0.3153,0.4374,0.7941,0.1772,6.9625,0.5953,0.1591,0.5641,0.1795,54.801,0.5992
deepface/tests/dataset/img9.jpg,deepface/tests/dataset/img48.jpg,Yes,0.3537,0.4845,0.8411,0.1723,6.7796,0.5871,0.1234,0.4969,0.1795,52.6507,0.5992
deepface/tests/dataset/img9.jpg,deepface/tests/dataset/img49.jpg,Yes,0.2072,0.4029,0.6437,0.1954,7.2154,0.6251,0.1529,0.553,0.1311,48.2847,0.5121
deepface/tests/dataset/img9.jpg,deepface/tests/dataset/img50.jpg,Yes,0.2662,0.4509,0.7296,0.2576,8.5935,0.7177,0.1531,0.5533,0.1205,41.6412,0.491
deepface/tests/dataset/img9.jpg,deepface/tests/dataset/img51.jpg,Yes,0.3282,0.4507,0.8102,0.2371,8.0755,0.6887,0.1873,0.612,0.1817,51.7388,0.6029
deepface/tests/dataset/img47.jpg,deepface/tests/dataset/img48.jpg,Yes,0.345,0.4542,0.8307,0.1613,6.4777,0.5679,0.1419,0.5328,0.1649,52.6864,0.5742
deepface/tests/dataset/img47.jpg,deepface/tests/dataset/img49.jpg,Yes,0.257,0.4382,0.717,0.1944,7.1101,0.6236,0.1089,0.4667,0.2415,66.6307,0.695
deepface/tests/dataset/img47.jpg,deepface/tests/dataset/img50.jpg,Yes,0.1844,0.3737,0.6073,0.215,7.7872,0.6558,0.1817,0.6029,0.2052,57.2133,0.6406
deepface/tests/dataset/img47.jpg,deepface/tests/dataset/img51.jpg,Yes,0.1979,0.3274,0.6291,0.1303,5.926,0.5106,0.0939,0.4334,0.1209,44.911,0.4918
deepface/tests/dataset/img48.jpg,deepface/tests/dataset/img49.jpg,Yes,0.2917,0.4744,0.7639,0.232,7.6321,0.6812,0.1067,0.462,0.2183,61.9241,0.6608
deepface/tests/dataset/img48.jpg,deepface/tests/dataset/img50.jpg,Yes,0.3985,0.5478,0.8927,0.2745,8.6847,0.7409,0.2245,0.6701,0.2181,55.6337,0.6605
deepface/tests/dataset/img48.jpg,deepface/tests/dataset/img51.jpg,Yes,0.3408,0.4563,0.8255,0.1586,6.4477,0.5633,0.1734,0.5888,0.2082,55.6445,0.6452
deepface/tests/dataset/img49.jpg,deepface/tests/dataset/img50.jpg,Yes,0.2073,0.4183,0.6439,0.2437,8.1889,0.6982,0.1738,0.5896,0.1949,57.7545,0.6243
deepface/tests/dataset/img49.jpg,deepface/tests/dataset/img51.jpg,Yes,0.2694,0.4491,0.7341,0.2076,7.3716,0.6444,0.1414,0.5318,0.2283,62.518,0.6758
deepface/tests/dataset/img50.jpg,deepface/tests/dataset/img51.jpg,Yes,0.2505,0.4295,0.7079,0.2299,8.07,0.6781,0.1894,0.6155,0.1715,47.5665,0.5857
deepface/tests/dataset/img16.jpg,deepface/tests/dataset/img17.jpg,Yes,0.2545,0.3759,0.7135,0.1493,6.5661,0.5465,0.2749,0.7414,0.1528,47.8128,0.5528
deepface/tests/dataset/img16.jpg,deepface/tests/dataset/img59.jpg,Yes,0.1796,0.4352,0.5993,0.3095,9.6361,0.7868,0.4173,0.9136,0.247,61.4867,0.7028
deepface/tests/dataset/img16.jpg,deepface/tests/dataset/img61.jpg,Yes,0.1779,0.3234,0.5965,0.1863,7.2985,0.6105,0.1407,0.5305,0.1643,53.2032,0.5732
deepface/tests/dataset/img16.jpg,deepface/tests/dataset/img62.jpg,Yes,0.106,0.2509,0.4604,0.2243,8.1191,0.6698,0.3857,0.8783,0.1953,57.434,0.6249
deepface/tests/dataset/img17.jpg,deepface/tests/dataset/img59.jpg,Yes,0.2519,0.5106,0.7099,0.2846,9.3099,0.7544,0.3877,0.8806,0.2994,62.5416,0.7739
deepface/tests/dataset/img17.jpg,deepface/tests/dataset/img61.jpg,Yes,0.2507,0.3495,0.708,0.1992,7.6132,0.6313,0.1867,0.6111,0.2101,58.2095,0.6482
deepface/tests/dataset/img17.jpg,deepface/tests/dataset/img62.jpg,Yes,0.2533,0.3415,0.7118,0.2672,8.9292,0.731,0.3356,0.8193,0.252,62.3621,0.7099
deepface/tests/dataset/img59.jpg,deepface/tests/dataset/img61.jpg,Yes,0.192,0.4543,0.6196,0.4417,11.5466,0.9399,0.3558,0.8435,0.1808,54.8373,0.6014
deepface/tests/dataset/img59.jpg,deepface/tests/dataset/img62.jpg,Yes,0.1123,0.3893,0.4738,0.2974,9.5874,0.7713,0.5393,1.0386,0.1934,55.9836,0.6219
deepface/tests/dataset/img61.jpg,deepface/tests/dataset/img62.jpg,Yes,0.1251,0.253,0.5002,0.2245,8.1525,0.6701,0.4072,0.9024,0.1757,55.867,0.5928
deepface/tests/dataset/img26.jpg,deepface/tests/dataset/img27.jpg,Yes,0.3059,0.5758,0.7822,0.3444,9.7537,0.8299,0.1815,0.6026,0.2396,69.4496,0.6922
deepface/tests/dataset/img26.jpg,deepface/tests/dataset/img28.jpg,Yes,0.343,0.5503,0.8282,0.3556,10.2896,0.8433,0.1662,0.5766,0.205,60.0105,0.6403
deepface/tests/dataset/img26.jpg,deepface/tests/dataset/img42.jpg,Yes,0.3852,0.542,0.8778,0.3278,9.7855,0.8097,0.2831,0.7524,0.2523,66.2702,0.7104
deepface/tests/dataset/img26.jpg,deepface/tests/dataset/img43.jpg,Yes,0.3254,0.5271,0.8067,0.2825,8.887,0.7517,0.2876,0.7585,0.3443,79.1342,0.8299
deepface/tests/dataset/img26.jpg,deepface/tests/dataset/img44.jpg,Yes,0.3645,0.5029,0.8539,0.2248,7.9975,0.6706,0.2646,0.7274,0.2572,68.2216,0.7173
deepface/tests/dataset/img26.jpg,deepface/tests/dataset/img45.jpg,Yes,0.283,0.4775,0.7523,0.2537,8.5109,0.7124,0.3277,0.8096,0.2726,70.5843,0.7384
deepface/tests/dataset/img26.jpg,deepface/tests/dataset/img46.jpg,Yes,0.447,0.5967,0.9456,0.4372,11.0907,0.9351,0.3544,0.8419,0.3079,73.7249,0.7848
deepface/tests/dataset/img27.jpg,deepface/tests/dataset/img28.jpg,Yes,0.2847,0.5707,0.7546,0.2178,7.8688,0.6601,0.1205,0.491,0.232,66.1474,0.6811
deepface/tests/dataset/img27.jpg,deepface/tests/dataset/img42.jpg,Yes,0.328,0.5946,0.8099,0.2829,8.8485,0.7523,0.3721,0.8627,0.2376,66.8304,0.6893
deepface/tests/dataset/img27.jpg,deepface/tests/dataset/img43.jpg,Yes,0.3781,0.65,0.8696,0.2827,8.6093,0.7519,0.2004,0.633,0.2924,75.1537,0.7647
deepface/tests/dataset/img27.jpg,deepface/tests/dataset/img44.jpg,Yes,0.3385,0.5968,0.8229,0.2597,8.3408,0.7207,0.2941,0.7669,0.2314,66.8603,0.6803
deepface/tests/dataset/img27.jpg,deepface/tests/dataset/img45.jpg,Yes,0.2302,0.5087,0.6785,0.147,6.2958,0.5422,0.2088,0.6463,0.2035,63.0117,0.6379
deepface/tests/dataset/img27.jpg,deepface/tests/dataset/img46.jpg,Yes,0.3461,0.6141,0.832,0.388,10.1318,0.881,0.264,0.7266,0.2241,65.3424,0.6694
deepface/tests/dataset/img28.jpg,deepface/tests/dataset/img42.jpg,Yes,0.2442,0.4668,0.6988,0.1991,7.7026,0.631,0.2848,0.7547,0.2583,62.2885,0.7187
deepface/tests/dataset/img28.jpg,deepface/tests/dataset/img43.jpg,Yes,0.2159,0.4542,0.657,0.2239,8.0122,0.6692,0.2194,0.6624,0.2833,67.7766,0.7527
deepface/tests/dataset/img28.jpg,deepface/tests/dataset/img44.jpg,Yes,0.2802,0.4883,0.7486,0.1697,7.0317,0.5826,0.2753,0.742,0.2378,61.8227,0.6897
deepface/tests/dataset/img28.jpg,deepface/tests/dataset/img45.jpg,Yes,0.3044,0.5286,0.7803,0.1768,7.1867,0.5946,0.267,0.7307,0.2683,66.1764,0.7326
deepface/tests/dataset/img28.jpg,deepface/tests/dataset/img46.jpg,Yes,0.426,0.6222,0.923,0.3338,9.8004,0.817,0.2481,0.7044,0.3072,68.9752,0.7838
deepface/tests/dataset/img42.jpg,deepface/tests/dataset/img43.jpg,Yes,0.2018,0.4174,0.6353,0.2418,8.227,0.6954,0.1678,0.5794,0.1483,49.1175,0.5446
deepface/tests/dataset/img42.jpg,deepface/tests/dataset/img44.jpg,Yes,0.1685,0.3458,0.5805,0.119,5.8252,0.4879,0.2432,0.6975,0.0957,39.352,0.4375
deepface/tests/dataset/img42.jpg,deepface/tests/dataset/img45.jpg,Yes,0.2004,0.4027,0.6331,0.1378,6.2772,0.5251,0.1982,0.6296,0.1742,53.3531,0.5903
deepface/tests/dataset/img42.jpg,deepface/tests/dataset/img46.jpg,Yes,0.2253,0.4245,0.6713,0.1946,7.4093,0.6239,0.1761,0.5934,0.1568,49.1856,0.5601
deepface/tests/dataset/img43.jpg,deepface/tests/dataset/img44.jpg,Yes,0.2049,0.4137,0.6402,0.2238,7.7899,0.6691,0.1748,0.5912,0.1553,51.4113,0.5573
deepface/tests/dataset/img43.jpg,deepface/tests/dataset/img45.jpg,Yes,0.2298,0.4524,0.6779,0.2281,7.8811,0.6754,0.0531,0.3257,0.1801,55.7173,0.6001
deepface/tests/dataset/img43.jpg,deepface/tests/dataset/img46.jpg,Yes,0.3731,0.5738,0.8638,0.3741,10.0121,0.865,0.1394,0.5281,0.2184,60.1165,0.6609
deepface/tests/dataset/img44.jpg,deepface/tests/dataset/img45.jpg,Yes,0.1743,0.3671,0.5903,0.1052,5.4022,0.4587,0.1636,0.572,0.1275,46.7067,0.505
deepface/tests/dataset/img44.jpg,deepface/tests/dataset/img46.jpg,Yes,0.2682,0.4468,0.7324,0.2225,7.7975,0.667,0.1984,0.6299,0.1569,50.7309,0.5602
deepface/tests/dataset/img45.jpg,deepface/tests/dataset/img46.jpg,Yes,0.2818,0.486,0.7507,0.2239,7.8397,0.6692,0.1379,0.5252,0.193,56.6925,0.6213
deepface/tests/dataset/img24.jpg,deepface/tests/dataset/img25.jpg,Yes,0.1197,0.2833,0.4893,0.1419,6.4307,0.5327,0.1666,0.5773,0.2083,60.7717,0.6454
deepface/tests/dataset/img21.jpg,deepface/tests/dataset/img17.jpg,No,0.4907,0.531,0.9907,0.6285,13.4397,1.1212,0.807,1.2704,0.3363,67.5896,0.8201
deepface/tests/dataset/img23.jpg,deepface/tests/dataset/img47.jpg,No,0.5671,0.563,1.065,0.6961,13.8325,1.1799,0.1334,0.5166,0.2008,56.6182,0.6337
deepface/tests/dataset/img16.jpg,deepface/tests/dataset/img24.jpg,No,0.6046,0.5757,1.0997,0.9105,16.3487,1.3494,0.2078,0.6447,0.2218,57.4046,0.666
deepface/tests/dataset/img50.jpg,deepface/tests/dataset/img16.jpg,No,0.7308,0.7317,1.2089,1.0868,17.7134,1.4743,0.3578,0.846,0.2254,57.4293,0.6715
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img18.jpg,No,0.4197,0.569,0.9162,0.8173,13.1177,1.2786,0.6457,1.1364,0.3401,75.8425,0.8247
deepface/tests/dataset/img12.jpg,deepface/tests/dataset/img32.jpg,No,0.7555,0.9708,1.2293,1.0896,18.6004,1.4762,0.4448,0.9432,0.2547,60.7653,0.7138
deepface/tests/dataset/img51.jpg,deepface/tests/dataset/img26.jpg,No,0.506,0.5807,1.006,0.7329,14.3648,1.2107,0.2928,0.7652,0.2226,61.9764,0.6672
deepface/tests/dataset/img13.jpg,deepface/tests/dataset/img27.jpg,No,0.688,0.9511,1.1731,0.9559,15.8763,1.3827,0.3366,0.8205,0.2086,63.7428,0.6459
deepface/tests/dataset/img35.jpg,deepface/tests/dataset/img33.jpg,No,0.2131,0.3838,0.6528,0.5762,12.621,1.0735,0.3323,0.8153,0.2895,74.4074,0.7609
deepface/tests/dataset/img34.jpg,deepface/tests/dataset/img44.jpg,No,0.7964,0.6879,1.262,0.9531,16.8504,1.3806,0.4968,0.9968,0.2565,63.8992,0.7162
deepface/tests/dataset/img8.jpg,deepface/tests/dataset/img61.jpg,No,0.8548,0.6996,1.3075,0.9485,16.2825,1.3773,0.6479,1.1383,0.259,64.0582,0.7198
deepface/tests/dataset/img53.jpg,deepface/tests/dataset/img4.jpg,No,0.5862,0.6454,1.0828,0.8624,16.0416,1.3133,0.3185,0.7982,0.2397,65.712,0.6924
deepface/tests/dataset/img54.jpg,deepface/tests/dataset/img2.jpg,No,0.6948,0.9246,1.1788,0.9568,16.4217,1.3833,0.3481,0.8344,0.2497,64.7938,0.7067
deepface/tests/dataset/img43.jpg,deepface/tests/dataset/img24.jpg,No,0.7757,0.7407,1.2456,1.0007,16.8769,1.4147,0.4194,0.9159,0.3961,77.6798,0.8901
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img20.jpg,No,0.6784,0.7154,1.1648,0.9864,16.5342,1.4045,0.2043,0.6392,0.2499,67.3658,0.707
deepface/tests/dataset/img40.jpg,deepface/tests/dataset/img20.jpg,No,0.474,0.4904,0.9736,0.7949,14.8341,1.2609,0.4776,0.9773,0.2192,56.6904,0.6621
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img49.jpg,No,0.725,0.7156,1.2041,1.2676,18.7008,1.5922,0.3254,0.8068,0.1968,58.1537,0.6274
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img29.jpg,No,0.5496,0.5428,1.0484,1.1766,18.8394,1.534,0.2956,0.769,0.323,68.2188,0.8037
deepface/tests/dataset/img7.jpg,deepface/tests/dataset/img20.jpg,No,0.7791,0.7506,1.2482,0.945,16.0728,1.3748,0.2922,0.7645,0.2063,58.285,0.6424
deepface/tests/dataset/img54.jpg,deepface/tests/dataset/img10.jpg,No,0.6852,0.8904,1.1707,0.9223,16.2459,1.3582,0.3508,0.8377,0.2699,67.3228,0.7347
deepface/tests/dataset/img17.jpg,deepface/tests/dataset/img43.jpg,No,0.7785,0.7344,1.2478,0.8234,15.1735,1.2833,0.8461,1.3009,0.3715,74.2351,0.862
deepface/tests/dataset/img56.jpg,deepface/tests/dataset/img47.jpg,No,0.5798,0.6885,1.0769,0.9515,16.1507,1.3795,0.2527,0.7109,0.1453,51.4537,0.5391
deepface/tests/dataset/img10.jpg,deepface/tests/dataset/img15.jpg,No,0.7144,1.0202,1.1953,1.1267,17.5833,1.5012,0.7384,1.2152,0.404,87.858,0.8989
deepface/tests/dataset/img21.jpg,deepface/tests/dataset/img61.jpg,No,0.5642,0.5883,1.0623,0.7305,14.4227,1.2088,0.5523,1.051,0.3206,73.1845,0.8008
deepface/tests/dataset/img34.jpg,deepface/tests/dataset/img47.jpg,No,0.6442,0.5952,1.1351,1.0884,17.8754,1.4754,0.6225,1.1158,0.2549,64.7586,0.714
deepface/tests/dataset/img11.jpg,deepface/tests/dataset/img51.jpg,No,0.5459,0.6938,1.0448,0.7452,14.4984,1.2208,0.1807,0.6012,0.179,58.3078,0.5983
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img14.jpg,No,0.7235,0.8162,1.2029,1.0599,16.8526,1.4559,0.4242,0.9211,0.26,72.3704,0.7211
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img14.jpg,No,0.5044,0.637,1.0044,0.9856,16.5161,1.404,0.2733,0.7393,0.354,80.6472,0.8415
deepface/tests/dataset/img19.jpg,deepface/tests/dataset/img47.jpg,No,0.5752,0.6917,1.0726,1.0042,17.1669,1.4172,0.354,0.8414,0.1709,59.1711,0.5846
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img14.jpg,No,0.6473,0.7275,1.1378,0.9052,15.7543,1.3455,0.2127,0.6523,0.2293,67.2542,0.6771
deepface/tests/dataset/img20.jpg,deepface/tests/dataset/img33.jpg,No,0.4886,0.541,0.9885,0.9202,16.051,1.3566,0.6114,1.1058,0.253,62.6318,0.7113
deepface/tests/dataset/img39.jpg,deepface/tests/dataset/img62.jpg,No,0.4634,0.5606,0.9627,0.8783,16.0858,1.3254,0.7776,1.2471,0.329,70.4788,0.8112
deepface/tests/dataset/img54.jpg,deepface/tests/dataset/img58.jpg,No,0.6048,0.9477,1.0998,0.8084,15.0301,1.2716,0.6403,1.1316,0.3272,69.1393,0.809
deepface/tests/dataset/img11.jpg,deepface/tests/dataset/img9.jpg,No,0.6643,0.7784,1.1527,0.899,16.0335,1.3409,0.2452,0.7002,0.1639,56.0631,0.5725
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img46.jpg,No,0.5766,0.7054,1.0738,0.9264,15.9036,1.3611,0.1341,0.5179,0.2298,64.5324,0.6779
deepface/tests/dataset/img7.jpg,deepface/tests/dataset/img59.jpg,No,0.7679,0.8729,1.2393,1.0242,17.2778,1.4312,0.7789,1.2481,0.3103,69.694,0.7878
deepface/tests/dataset/img7.jpg,deepface/tests/dataset/img35.jpg,No,0.8227,0.8096,1.2827,1.0357,16.7157,1.4392,0.4864,0.9863,0.2401,68.9468,0.693
deepface/tests/dataset/img5.jpg,deepface/tests/dataset/img19.jpg,No,0.7052,0.752,1.1876,0.9084,16.1781,1.3479,0.2462,0.7016,0.1449,58.8831,0.5384
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img8.jpg,No,0.4891,0.5451,0.989,0.7908,14.9832,1.2576,0.2408,0.6939,0.2341,63.666,0.6843
deepface/tests/dataset/img22.jpg,deepface/tests/dataset/img51.jpg,No,0.5201,0.5378,1.0199,0.6262,13.2133,1.1191,0.1456,0.5397,0.2985,60.8239,0.7726
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img15.jpg,No,0.7147,0.9872,1.1956,1.0641,17.2349,1.4588,0.6229,1.1162,0.4049,89.7221,0.8998
deepface/tests/dataset/img19.jpg,deepface/tests/dataset/img29.jpg,No,0.3605,0.5646,0.8492,0.6901,14.6314,1.1748,0.1803,0.6005,0.2709,71.9655,0.7361
deepface/tests/dataset/img20.jpg,deepface/tests/dataset/img28.jpg,No,0.5807,0.6843,1.0777,0.8133,15.3844,1.2754,0.1274,0.5048,0.1841,53.6094,0.6067
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img13.jpg,No,0.6366,0.8086,1.1283,0.8832,15.8044,1.3291,0.3343,0.8177,0.177,57.373,0.5949
deepface/tests/dataset/img34.jpg,deepface/tests/dataset/img22.jpg,No,0.7842,0.6655,1.2523,1.137,18.5595,1.508,0.4797,0.9795,0.2457,56.695,0.7011
deepface/tests/dataset/img67.jpg,deepface/tests/dataset/img58.jpg,No,0.5051,0.8463,1.0051,0.8713,16.0723,1.3201,0.5281,1.0277,0.276,67.6933,0.743
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img9.jpg,No,0.7493,0.7683,1.2242,1.0774,17.7057,1.4679,0.5343,1.0337,0.2113,62.0197,0.65
deepface/tests/dataset/img11.jpg,deepface/tests/dataset/img58.jpg,No,0.7495,1.0309,1.2243,1.0359,16.9461,1.4394,0.6411,1.1324,0.2259,65.3131,0.6721
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img42.jpg,No,0.8335,0.8332,1.2911,1.0838,17.9617,1.4723,0.4051,0.9001,0.2449,66.4075,0.6999
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img13.jpg,No,0.476,0.7428,0.9757,1.1589,18.2018,1.5224,0.306,0.7823,0.1879,59.4531,0.6129
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img32.jpg,No,0.7116,0.8739,1.193,1.0402,17.6777,1.4424,0.6456,1.1363,0.2896,71.6141,0.761
deepface/tests/dataset/img67.jpg,deepface/tests/dataset/img37.jpg,No,0.4644,0.652,0.9638,0.6683,14.5099,1.1561,0.2355,0.6862,0.2475,61.9234,0.7036
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img7.jpg,No,0.8444,0.7812,1.2666,0.9357,16.3278,1.368,0.4702,1.459,0.4919,67.9214,0.7892
deepface/tests/dataset/img11.jpg,deepface/tests/dataset/img27.jpg,No,0.6496,0.8811,1.1398,0.9364,16.0727,1.3685,0.2416,0.6951,0.2127,66.7336,0.6523
deepface/tests/dataset/img20.jpg,deepface/tests/dataset/img47.jpg,No,0.6418,0.6011,1.1329,1.0579,16.9991,1.4546,0.31,0.7874,0.1754,54.6104,0.5924
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img44.jpg,No,0.4815,0.6806,0.9814,0.7396,14.1679,1.2162,0.2009,0.6338,0.1836,57.4368,0.606
deepface/tests/dataset/img28.jpg,deepface/tests/dataset/img24.jpg,No,0.7851,0.7588,1.2531,0.9406,16.8964,1.3715,0.5353,1.0347,0.2609,60.6589,0.7224
deepface/tests/dataset/img67.jpg,deepface/tests/dataset/img43.jpg,No,0.691,0.8328,1.1756,0.9621,16.9417,1.3872,0.3176,0.797,0.3072,72.9213,0.7838
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img51.jpg,No,0.668,0.7024,1.1558,1.1051,17.8105,1.4867,0.2508,0.7083,0.1882,58.3932,0.6135
deepface/tests/dataset/img11.jpg,deepface/tests/dataset/img24.jpg,No,0.79,0.801,1.257,1.1173,18.2579,1.4949,0.3437,0.829,0.3096,74.5014,0.7869
deepface/tests/dataset/img67.jpg,deepface/tests/dataset/img29.jpg,No,0.5389,0.6762,1.0382,0.8354,16.2507,1.2926,0.1501,0.5479,0.2668,63.7773,0.7305
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img59.jpg,No,0.4237,0.6225,0.9205,0.5002,12.4131,1.0002,0.6375,1.1292,0.2637,58.2849,0.7262
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img24.jpg,No,0.5431,0.5391,1.0422,1.1194,18.4041,1.4962,0.8286,1.2873,0.4458,74.1332,0.9442
deepface/tests/dataset/img35.jpg,deepface/tests/dataset/img27.jpg,No,0.821,0.9129,1.2814,0.964,15.9831,1.3885,0.4812,0.9811,0.3061,80.9221,0.7824
deepface/tests/dataset/img39.jpg,deepface/tests/dataset/img67.jpg,No,0.5513,0.7255,1.0501,0.9839,17.4219,1.4028,0.8181,1.2792,0.2914,66.5717,0.7634
deepface/tests/dataset/img39.jpg,deepface/tests/dataset/img12.jpg,No,0.6435,0.8102,1.1344,0.7661,15.2245,1.2378,0.7472,1.2224,0.2716,61.7006,0.737
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img46.jpg,No,0.8116,0.7634,1.2028,1.1264,17.9427,1.5009,0.9219,1.3578,0.3511,70.3501,0.838
deepface/tests/dataset/img32.jpg,deepface/tests/dataset/img27.jpg,No,0.7197,0.9593,1.1997,0.7295,14.4944,1.2079,0.5619,1.0601,0.2725,70.5338,0.7382
deepface/tests/dataset/img40.jpg,deepface/tests/dataset/img11.jpg,No,0.7205,0.7563,1.2004,0.9367,16.3131,1.3687,0.5427,1.0418,0.186,59.4748,0.61
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img22.jpg,No,0.5579,0.6466,1.2024,1.0076,17.2122,1.4196,0.7998,1.2648,0.392,65.4579,0.8854
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img35.jpg,No,0.8303,0.9037,1.2887,1.0988,17.1897,1.4824,0.498,0.998,0.2992,78.1653,0.7736
deepface/tests/dataset/img5.jpg,deepface/tests/dataset/img45.jpg,No,0.5247,0.6013,1.0244,0.8827,15.3713,1.3287,0.218,0.6603,0.2322,72.2019,0.6814
deepface/tests/dataset/img58.jpg,deepface/tests/dataset/img59.jpg,No,0.5937,0.9226,1.0896,0.9931,16.9142,1.4093,0.3525,0.8396,0.3095,68.0277,0.7868
deepface/tests/dataset/img40.jpg,deepface/tests/dataset/img45.jpg,No,0.772,0.6976,1.2426,1.0516,17.0626,1.4503,0.5487,1.0475,0.2628,63.7285,0.725
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img3.jpg,No,0.6417,0.6822,1.1329,0.832,15.8921,1.29,1.0374,1.4404,0.2312,54.5718,0.68
deepface/tests/dataset/img40.jpg,deepface/tests/dataset/img67.jpg,No,0.4138,0.5942,0.9098,0.948,16.9509,1.3769,0.5121,1.012,0.2455,61.9071,0.7008
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img50.jpg,No,0.5776,0.6934,1.0748,0.816,15.3649,1.2775,0.3515,0.8385,0.2072,61.657,0.6437
deepface/tests/dataset/img67.jpg,deepface/tests/dataset/img47.jpg,No,0.5726,0.692,1.0701,0.9987,17.2907,1.4133,0.4099,0.9054,0.1723,55.0701,0.587
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img20.jpg,No,0.684,0.6408,1.1696,0.924,16.3035,1.3594,0.2156,0.6566,0.2111,61.919,0.6498
deepface/tests/dataset/img13.jpg,deepface/tests/dataset/img33.jpg,No,0.4625,0.7042,0.9617,0.8709,15.4791,1.3198,0.5609,1.0591,0.3643,76.6864,0.8536
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img58.jpg,No,0.5732,0.8464,1.0707,0.7511,16.6216,1.4011,0.5091,1.009,0.3653,71.3439,0.8548
deepface/tests/dataset/img19.jpg,deepface/tests/dataset/img48.jpg,No,0.8186,0.8431,1.2795,1.1082,17.769,1.4888,0.3914,0.8848,0.2363,68.307,0.6875
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img49.jpg,No,0.6614,0.7617,1.1501,0.9935,16.5922,1.4096,0.427,0.9241,0.28,73.8384,0.7483
deepface/tests/dataset/img10.jpg,deepface/tests/dataset/img19.jpg,No,0.603,0.7998,1.0982,0.9508,16.8085,1.379,0.3546,0.8422,0.2352,69.7597,0.6859
deepface/tests/dataset/img48.jpg,deepface/tests/dataset/img17.jpg,No,0.8174,0.6679,1.2786,0.922,15.8462,1.3579,0.7438,1.2196,0.2545,59.7077,0.7134
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img2.jpg,No,0.6454,0.7751,1.1362,1.0674,17.3381,1.4611,0.1279,0.5058,0.1983,61.7554,0.6298
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img48.jpg,No,0.7325,0.7072,1.2605,0.8198,15.0575,1.2805,0.9352,1.3676,0.3504,69.8577,0.8371
deepface/tests/dataset/img30.jpg,deepface/tests/dataset/img44.jpg,No,0.8834,0.7196,1.3292,0.8683,15.5513,1.3178,0.563,1.0611,0.363,75.7833,0.8521
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img29.jpg,No,0.7666,0.7464,1.2382,1.0057,17.0345,1.4183,0.3434,0.8287,0.2411,64.6435,0.6943
deepface/tests/dataset/img19.jpg,deepface/tests/dataset/img26.jpg,No,0.6542,0.7763,1.1439,0.9204,16.7702,1.3568,0.2292,0.677,0.262,73.7273,0.7239
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img50.jpg,No,0.6879,0.692,1.1729,1.3134,19.7708,1.6207,0.5038,1.0038,0.2577,54.3931,0.7179
deepface/tests/dataset/img35.jpg,deepface/tests/dataset/img49.jpg,No,0.8339,0.8186,1.2915,1.2099,17.7753,1.5555,0.5957,1.0915,0.3315,82.3474,0.8142
deepface/tests/dataset/img22.jpg,deepface/tests/dataset/img28.jpg,No,0.6313,0.7037,1.1236,0.8177,15.5314,1.2789,0.2031,0.6373,0.2271,55.2529,0.6739
deepface/tests/dataset/img21.jpg,deepface/tests/dataset/img16.jpg,No,0.5678,0.6114,1.0657,0.6376,13.417,1.1293,0.4173,0.9136,0.2696,65.0241,0.7343
deepface/tests/dataset/img21.jpg,deepface/tests/dataset/img9.jpg,No,0.7653,0.7211,1.2372,1.0502,17.1485,1.4493,0.5726,1.0701,0.3059,68.2225,0.7822
deepface/tests/dataset/img2.jpg,deepface/tests/dataset/img22.jpg,No,0.6866,0.7895,1.1718,1.0005,16.6324,1.4145,0.1955,0.6253,0.3061,69.9331,0.7824
deepface/tests/dataset/img12.jpg,deepface/tests/dataset/img29.jpg,No,0.78,0.8337,1.249,1.1016,18.4797,1.4843,0.3404,0.8251,0.3293,67.3331,0.8115
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img37.jpg,No,0.7532,0.7788,1.2273,1.0976,17.7567,1.4816,0.2647,0.7275,0.331,74.5559,0.8137
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img16.jpg,No,0.7516,0.7581,1.226,1.0332,16.9971,1.4375,0.3815,0.8735,0.2859,72.0572,0.7561
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img33.jpg,No,0.4588,0.5085,0.958,1.2465,19.0695,1.5789,0.657,1.1463,0.3722,76.6896,0.8628
deepface/tests/dataset/img35.jpg,deepface/tests/dataset/img32.jpg,No,0.2651,0.5459,0.7282,0.5427,12.6429,1.0418,0.409,0.9045,0.2546,69.5802,0.7136
deepface/tests/dataset/img54.jpg,deepface/tests/dataset/img48.jpg,No,0.4528,0.678,0.9516,0.8385,15.166,1.295,0.2238,0.669,0.218,56.5099,0.6603
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img23.jpg,No,0.5305,0.5523,1.03,0.7766,14.6983,1.2463,0.1967,0.6272,0.2144,53.347,0.6549
deepface/tests/dataset/img39.jpg,deepface/tests/dataset/img33.jpg,No,0.5132,0.6067,1.0131,1.1197,17.8246,1.4965,0.2379,0.6898,0.2301,55.7862,0.6783
deepface/tests/dataset/img3.jpg,deepface/tests/dataset/img48.jpg,No,0.4123,0.5581,0.908,0.7879,14.8183,1.2553,0.2125,0.6519,0.2177,56.6639,0.6598
deepface/tests/dataset/img43.jpg,deepface/tests/dataset/img25.jpg,No,0.7819,0.7991,1.2505,0.9007,15.601,1.3422,0.4363,0.9341,0.3555,81.219,0.8432
deepface/tests/dataset/img14.jpg,deepface/tests/dataset/img9.jpg,No,0.7257,0.7829,1.2047,0.8679,15.1696,1.3175,0.5752,1.0725,0.2493,67.0315,0.7061
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img47.jpg,No,0.5391,0.6276,1.0383,0.7885,14.6406,1.2558,0.1013,0.4501,0.1756,57.5202,0.5926
deepface/tests/dataset/img18.jpg,deepface/tests/dataset/img28.jpg,No,0.8293,0.8828,1.2878,1.1151,18.3899,1.4934,0.497,0.997,0.2323,64.8263,0.6816
deepface/tests/dataset/img7.jpg,deepface/tests/dataset/img57.jpg,No,0.7468,0.815,1.2221,1.1241,17.3821,1.4994,0.6916,1.1761,0.2244,68.912,0.6699
deepface/tests/dataset/img48.jpg,deepface/tests/dataset/img26.jpg,No,0.5877,0.646,1.0842,0.9734,16.2582,1.3953,0.3102,0.7876,0.2059,60.3497,0.6417
deepface/tests/dataset/img19.jpg,deepface/tests/dataset/img34.jpg,No,0.2957,0.5193,0.7691,0.5281,12.9854,1.0277,0.5987,1.0943,0.2628,71.5029,0.725
deepface/tests/dataset/img41.jpg,deepface/tests/dataset/img37.jpg,No,0.4337,0.5351,0.9314,0.8568,16.0356,1.309,0.684,1.1696,0.3654,65.8114,0.8548
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img32.jpg,No,0.6985,0.8184,1.182,0.9682,16.9113,1.3915,0.5654,1.0634,0.3173,65.953,0.7967
deepface/tests/dataset/img12.jpg,deepface/tests/dataset/img57.jpg,No,0.6424,0.8305,1.1335,0.8361,15.6851,1.2931,0.5927,1.0888,0.2943,77.8234,0.7672
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img5.jpg,No,0.662,0.6012,1.1507,0.9931,16.5792,1.4093,0.137,0.5234,0.2182,70.8567,0.6606
deepface/tests/dataset/img47.jpg,deepface/tests/dataset/img61.jpg,No,0.6896,0.603,1.1744,0.98,16.5069,1.4,0.5598,1.0581,0.187,57.8252,0.6115
deepface/tests/dataset/img33.jpg,deepface/tests/dataset/img49.jpg,No,0.8253,0.7753,1.2848,1.0329,16.5833,1.4373,0.6695,1.1572,0.1992,58.9069,0.6313
deepface/tests/dataset/img54.jpg,deepface/tests/dataset/img1.jpg,No,0.5922,0.7522,1.0883,0.9398,16.3902,1.371,0.2515,0.7092,0.2836,62.9648,0.7532
deepface/tests/dataset/img29.jpg,deepface/tests/dataset/img25.jpg,No,0.5458,0.5846,1.0448,0.9074,16.167,1.3472,0.622,1.1153,0.2743,68.4542,0.7407
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img67.jpg,No,0.6649,0.7541,1.1531,1.1444,18.95,1.5129,0.3094,0.7866,0.2195,63.9684,0.6625
deepface/tests/dataset/img53.jpg,deepface/tests/dataset/img30.jpg,No,0.9492,0.7325,1.3778,0.9241,16.5521,1.3595,0.5533,1.052,0.2955,62.208,0.7687
deepface/tests/dataset/img6.jpg,deepface/tests/dataset/img25.jpg,No,0.8285,0.8131,1.2872,0.8051,14.8877,1.2689,0.4267,0.9238,0.3226,79.803,0.8032
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img43.jpg,No,0.6285,0.7443,1.1211,0.838,15.1848,1.2946,0.212,0.6511,0.2685,71.4046,0.7329
deepface/tests/dataset/img39.jpg,deepface/tests/dataset/img27.jpg,No,0.7176,0.8685,1.198,0.8199,14.9449,1.2805,0.8286,1.2873,0.285,71.6832,0.755
deepface/tests/dataset/img36.jpg,deepface/tests/dataset/img23.jpg,No,0.6223,0.5866,1.1156,1.0693,17.5747,1.4624,0.4266,0.9237,0.32,58.9248,0.7999
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img45.jpg,No,0.6021,0.7106,1.0973,0.9407,16.2744,1.3716,0.2162,0.6576,0.2166,64.3341,0.6582
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img19.jpg,No,0.356,0.5607,0.8437,0.9843,17.485,1.403,0.1858,0.6097,0.2867,75.4126,0.7572
deepface/tests/dataset/img55.jpg,deepface/tests/dataset/img17.jpg,No,0.7135,0.6076,1.1946,0.944,16.691,1.374,0.7449,1.2205,0.2951,70.5113,0.7682
deepface/tests/dataset/img9.jpg,deepface/tests/dataset/img59.jpg,No,0.8449,0.8766,1.2999,1.1333,18.3376,1.5055,0.8844,1.33,0.3088,67.5783,0.7859
deepface/tests/dataset/img58.jpg,deepface/tests/dataset/img49.jpg,No,0.5999,0.8901,1.0953,0.9147,15.3098,1.3526,0.4925,0.9925,0.2266,63.0835,0.6733
deepface/tests/dataset/img56.jpg,deepface/tests/dataset/img59.jpg,No,0.7694,0.9166,1.2405,1.0062,17.304,1.4186,0.8703,1.3193,0.2966,70.5446,0.7702
deepface/tests/dataset/img4.jpg,deepface/tests/dataset/img8.jpg,No,0.5753,0.6478,1.0727,0.842,15.2912,1.2977,0.3808,0.8727,0.1878,59.2,0.6129
deepface/tests/dataset/img16.jpg,deepface/tests/dataset/img25.jpg,No,0.5927,0.6271,1.0887,0.9862,16.5907,1.4044,0.286,0.7563,0.1702,56.0079,0.5835
deepface/tests/dataset/img50.jpg,deepface/tests/dataset/img45.jpg,No,0.5692,0.6912,1.067,0.8581,15.6737,1.3101,0.3278,0.8097,0.2383,60.6426,0.6903
deepface/tests/dataset/img38.jpg,deepface/tests/dataset/img31.jpg,No,0.4739,0.4751,0.9736,1.1148,18.1862,1.4932,0.6661,1.1542,0.331,70.516,0.8136
deepface/tests/dataset/img13.jpg,deepface/tests/dataset/img51.jpg,No,0.5639,0.7621,1.062,0.8047,14.7361,1.2686,0.4,0.8945,0.2308,60.6072,0.6795
deepface/tests/dataset/img1.jpg,deepface/tests/dataset/img33.jpg,No,0.7127,0.6418,1.1939,0.9433,16.1933,1.3736,0.6509,1.1409,0.2684,62.7672,0.7326
deepface/tests/dataset/img53.jpg,deepface/tests/dataset/img16.jpg,No,0.8344,0.7073,1.2918,0.9023,16.3918,1.3433,0.4153,0.9114,0.3045,65.6394,0.7803
deepface/tests/dataset/img53.jpg,deepface/tests/dataset/img23.jpg,No,0.4644,0.5199,0.9637,0.7267,14.6939,1.2056,0.1784,0.5973,0.2774,55.6833,0.7448
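The rows above record pairwise distances over the unit-test dataset, one column per model/metric combination, with a "decision" ground-truth label. A sketch of scoring one column against that label; the filename is a placeholder and the 0.40 cutoff is illustrative, not the framework's tuned threshold:

```python
import pandas as pd

df = pd.read_csv("master.csv")  # placeholder filename for the table above

# treat a pair as "same person" when the VGG-Face cosine distance falls
# under an illustrative cutoff, then compare against the ground truth
predicted = df["VGG-Face_cosine"] < 0.40
actual = df["decision"] == "Yes"
accuracy = (predicted == actual).mean()
print(f"VGG-Face cosine accuracy at 0.40: {accuracy:.2%}")
```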
127 deepface/tests/dataset/img28.jpg deepface/tests/dataset/img43.jpg Yes 0.2159 0.4542 0.657 0.2239 8.0122 0.6692 0.2194 0.6624 0.2833 67.7766 0.7527
128 deepface/tests/dataset/img28.jpg deepface/tests/dataset/img44.jpg Yes 0.2802 0.4883 0.7486 0.1697 7.0317 0.5826 0.2753 0.742 0.2378 61.8227 0.6897
129 deepface/tests/dataset/img28.jpg deepface/tests/dataset/img45.jpg Yes 0.3044 0.5286 0.7803 0.1768 7.1867 0.5946 0.267 0.7307 0.2683 66.1764 0.7326
130 deepface/tests/dataset/img28.jpg deepface/tests/dataset/img46.jpg Yes 0.426 0.6222 0.923 0.3338 9.8004 0.817 0.2481 0.7044 0.3072 68.9752 0.7838
131 deepface/tests/dataset/img42.jpg deepface/tests/dataset/img43.jpg Yes 0.2018 0.4174 0.6353 0.2418 8.227 0.6954 0.1678 0.5794 0.1483 49.1175 0.5446
132 deepface/tests/dataset/img42.jpg deepface/tests/dataset/img44.jpg Yes 0.1685 0.3458 0.5805 0.119 5.8252 0.4879 0.2432 0.6975 0.0957 39.352 0.4375
133 deepface/tests/dataset/img42.jpg deepface/tests/dataset/img45.jpg Yes 0.2004 0.4027 0.6331 0.1378 6.2772 0.5251 0.1982 0.6296 0.1742 53.3531 0.5903
134 deepface/tests/dataset/img42.jpg deepface/tests/dataset/img46.jpg Yes 0.2253 0.4245 0.6713 0.1946 7.4093 0.6239 0.1761 0.5934 0.1568 49.1856 0.5601
135 deepface/tests/dataset/img43.jpg deepface/tests/dataset/img44.jpg Yes 0.2049 0.4137 0.6402 0.2238 7.7899 0.6691 0.1748 0.5912 0.1553 51.4113 0.5573
136 deepface/tests/dataset/img43.jpg deepface/tests/dataset/img45.jpg Yes 0.2298 0.4524 0.6779 0.2281 7.8811 0.6754 0.0531 0.3257 0.1801 55.7173 0.6001
137 deepface/tests/dataset/img43.jpg deepface/tests/dataset/img46.jpg Yes 0.3731 0.5738 0.8638 0.3741 10.0121 0.865 0.1394 0.5281 0.2184 60.1165 0.6609
138 deepface/tests/dataset/img44.jpg deepface/tests/dataset/img45.jpg Yes 0.1743 0.3671 0.5903 0.1052 5.4022 0.4587 0.1636 0.572 0.1275 46.7067 0.505
139 deepface/tests/dataset/img44.jpg deepface/tests/dataset/img46.jpg Yes 0.2682 0.4468 0.7324 0.2225 7.7975 0.667 0.1984 0.6299 0.1569 50.7309 0.5602
140 deepface/tests/dataset/img45.jpg deepface/tests/dataset/img46.jpg Yes 0.2818 0.486 0.7507 0.2239 7.8397 0.6692 0.1379 0.5252 0.193 56.6925 0.6213
141 deepface/tests/dataset/img24.jpg deepface/tests/dataset/img25.jpg Yes 0.1197 0.2833 0.4893 0.1419 6.4307 0.5327 0.1666 0.5773 0.2083 60.7717 0.6454
142 deepface/tests/dataset/img21.jpg deepface/tests/dataset/img17.jpg No 0.4907 0.531 0.9907 0.6285 13.4397 1.1212 0.807 1.2704 0.3363 67.5896 0.8201
143 deepface/tests/dataset/img23.jpg deepface/tests/dataset/img47.jpg No 0.5671 0.563 1.065 0.6961 13.8325 1.1799 0.1334 0.5166 0.2008 56.6182 0.6337
144 deepface/tests/dataset/img16.jpg deepface/tests/dataset/img24.jpg No 0.6046 0.5757 1.0997 0.9105 16.3487 1.3494 0.2078 0.6447 0.2218 57.4046 0.666
145 deepface/tests/dataset/img50.jpg deepface/tests/dataset/img16.jpg No 0.7308 0.7317 1.2089 1.0868 17.7134 1.4743 0.3578 0.846 0.2254 57.4293 0.6715
146 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img18.jpg No 0.4197 0.569 0.9162 0.8173 13.1177 1.2786 0.6457 1.1364 0.3401 75.8425 0.8247
147 deepface/tests/dataset/img12.jpg deepface/tests/dataset/img32.jpg No 0.7555 0.9708 1.2293 1.0896 18.6004 1.4762 0.4448 0.9432 0.2547 60.7653 0.7138
148 deepface/tests/dataset/img51.jpg deepface/tests/dataset/img26.jpg No 0.506 0.5807 1.006 0.7329 14.3648 1.2107 0.2928 0.7652 0.2226 61.9764 0.6672
149 deepface/tests/dataset/img13.jpg deepface/tests/dataset/img27.jpg No 0.688 0.9511 1.1731 0.9559 15.8763 1.3827 0.3366 0.8205 0.2086 63.7428 0.6459
150 deepface/tests/dataset/img35.jpg deepface/tests/dataset/img33.jpg No 0.2131 0.3838 0.6528 0.5762 12.621 1.0735 0.3323 0.8153 0.2895 74.4074 0.7609
151 deepface/tests/dataset/img34.jpg deepface/tests/dataset/img44.jpg No 0.7964 0.6879 1.262 0.9531 16.8504 1.3806 0.4968 0.9968 0.2565 63.8992 0.7162
152 deepface/tests/dataset/img8.jpg deepface/tests/dataset/img61.jpg No 0.8548 0.6996 1.3075 0.9485 16.2825 1.3773 0.6479 1.1383 0.259 64.0582 0.7198
153 deepface/tests/dataset/img53.jpg deepface/tests/dataset/img4.jpg No 0.5862 0.6454 1.0828 0.8624 16.0416 1.3133 0.3185 0.7982 0.2397 65.712 0.6924
154 deepface/tests/dataset/img54.jpg deepface/tests/dataset/img2.jpg No 0.6948 0.9246 1.1788 0.9568 16.4217 1.3833 0.3481 0.8344 0.2497 64.7938 0.7067
155 deepface/tests/dataset/img43.jpg deepface/tests/dataset/img24.jpg No 0.7757 0.7407 1.2456 1.0007 16.8769 1.4147 0.4194 0.9159 0.3961 77.6798 0.8901
156 deepface/tests/dataset/img6.jpg deepface/tests/dataset/img20.jpg No 0.6784 0.7154 1.1648 0.9864 16.5342 1.4045 0.2043 0.6392 0.2499 67.3658 0.707
157 deepface/tests/dataset/img40.jpg deepface/tests/dataset/img20.jpg No 0.474 0.4904 0.9736 0.7949 14.8341 1.2609 0.4776 0.9773 0.2192 56.6904 0.6621
158 deepface/tests/dataset/img29.jpg deepface/tests/dataset/img49.jpg No 0.725 0.7156 1.2041 1.2676 18.7008 1.5922 0.3254 0.8068 0.1968 58.1537 0.6274
159 deepface/tests/dataset/img38.jpg deepface/tests/dataset/img29.jpg No 0.5496 0.5428 1.0484 1.1766 18.8394 1.534 0.2956 0.769 0.323 68.2188 0.8037
160 deepface/tests/dataset/img7.jpg deepface/tests/dataset/img20.jpg No 0.7791 0.7506 1.2482 0.945 16.0728 1.3748 0.2922 0.7645 0.2063 58.285 0.6424
161 deepface/tests/dataset/img54.jpg deepface/tests/dataset/img10.jpg No 0.6852 0.8904 1.1707 0.9223 16.2459 1.3582 0.3508 0.8377 0.2699 67.3228 0.7347
162 deepface/tests/dataset/img17.jpg deepface/tests/dataset/img43.jpg No 0.7785 0.7344 1.2478 0.8234 15.1735 1.2833 0.8461 1.3009 0.3715 74.2351 0.862
163 deepface/tests/dataset/img56.jpg deepface/tests/dataset/img47.jpg No 0.5798 0.6885 1.0769 0.9515 16.1507 1.3795 0.2527 0.7109 0.1453 51.4537 0.5391
164 deepface/tests/dataset/img10.jpg deepface/tests/dataset/img15.jpg No 0.7144 1.0202 1.1953 1.1267 17.5833 1.5012 0.7384 1.2152 0.404 87.858 0.8989
165 deepface/tests/dataset/img21.jpg deepface/tests/dataset/img61.jpg No 0.5642 0.5883 1.0623 0.7305 14.4227 1.2088 0.5523 1.051 0.3206 73.1845 0.8008
166 deepface/tests/dataset/img34.jpg deepface/tests/dataset/img47.jpg No 0.6442 0.5952 1.1351 1.0884 17.8754 1.4754 0.6225 1.1158 0.2549 64.7586 0.714
167 deepface/tests/dataset/img11.jpg deepface/tests/dataset/img51.jpg No 0.5459 0.6938 1.0448 0.7452 14.4984 1.2208 0.1807 0.6012 0.179 58.3078 0.5983
168 deepface/tests/dataset/img4.jpg deepface/tests/dataset/img14.jpg No 0.7235 0.8162 1.2029 1.0599 16.8526 1.4559 0.4242 0.9211 0.26 72.3704 0.7211
169 deepface/tests/dataset/img38.jpg deepface/tests/dataset/img14.jpg No 0.5044 0.637 1.0044 0.9856 16.5161 1.404 0.2733 0.7393 0.354 80.6472 0.8415
170 deepface/tests/dataset/img19.jpg deepface/tests/dataset/img47.jpg No 0.5752 0.6917 1.0726 1.0042 17.1669 1.4172 0.354 0.8414 0.1709 59.1711 0.5846
171 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img14.jpg No 0.6473 0.7275 1.1378 0.9052 15.7543 1.3455 0.2127 0.6523 0.2293 67.2542 0.6771
172 deepface/tests/dataset/img20.jpg deepface/tests/dataset/img33.jpg No 0.4886 0.541 0.9885 0.9202 16.051 1.3566 0.6114 1.1058 0.253 62.6318 0.7113
173 deepface/tests/dataset/img39.jpg deepface/tests/dataset/img62.jpg No 0.4634 0.5606 0.9627 0.8783 16.0858 1.3254 0.7776 1.2471 0.329 70.4788 0.8112
174 deepface/tests/dataset/img54.jpg deepface/tests/dataset/img58.jpg No 0.6048 0.9477 1.0998 0.8084 15.0301 1.2716 0.6403 1.1316 0.3272 69.1393 0.809
175 deepface/tests/dataset/img11.jpg deepface/tests/dataset/img9.jpg No 0.6643 0.7784 1.1527 0.899 16.0335 1.3409 0.2452 0.7002 0.1639 56.0631 0.5725
176 deepface/tests/dataset/img6.jpg deepface/tests/dataset/img46.jpg No 0.5766 0.7054 1.0738 0.9264 15.9036 1.3611 0.1341 0.5179 0.2298 64.5324 0.6779
177 deepface/tests/dataset/img7.jpg deepface/tests/dataset/img59.jpg No 0.7679 0.8729 1.2393 1.0242 17.2778 1.4312 0.7789 1.2481 0.3103 69.694 0.7878
178 deepface/tests/dataset/img7.jpg deepface/tests/dataset/img35.jpg No 0.8227 0.8096 1.2827 1.0357 16.7157 1.4392 0.4864 0.9863 0.2401 68.9468 0.693
179 deepface/tests/dataset/img5.jpg deepface/tests/dataset/img19.jpg No 0.7052 0.752 1.1876 0.9084 16.1781 1.3479 0.2462 0.7016 0.1449 58.8831 0.5384
180 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img8.jpg No 0.4891 0.5451 0.989 0.7908 14.9832 1.2576 0.2408 0.6939 0.2341 63.666 0.6843
181 deepface/tests/dataset/img22.jpg deepface/tests/dataset/img51.jpg No 0.5201 0.5378 1.0199 0.6262 13.2133 1.1191 0.1456 0.5397 0.2985 60.8239 0.7726
182 deepface/tests/dataset/img4.jpg deepface/tests/dataset/img15.jpg No 0.7147 0.9872 1.1956 1.0641 17.2349 1.4588 0.6229 1.1162 0.4049 89.7221 0.8998
183 deepface/tests/dataset/img19.jpg deepface/tests/dataset/img29.jpg No 0.3605 0.5646 0.8492 0.6901 14.6314 1.1748 0.1803 0.6005 0.2709 71.9655 0.7361
184 deepface/tests/dataset/img20.jpg deepface/tests/dataset/img28.jpg No 0.5807 0.6843 1.0777 0.8133 15.3844 1.2754 0.1274 0.5048 0.1841 53.6094 0.6067
185 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img13.jpg No 0.6366 0.8086 1.1283 0.8832 15.8044 1.3291 0.3343 0.8177 0.177 57.373 0.5949
186 deepface/tests/dataset/img34.jpg deepface/tests/dataset/img22.jpg No 0.7842 0.6655 1.2523 1.137 18.5595 1.508 0.4797 0.9795 0.2457 56.695 0.7011
187 deepface/tests/dataset/img67.jpg deepface/tests/dataset/img58.jpg No 0.5051 0.8463 1.0051 0.8713 16.0723 1.3201 0.5281 1.0277 0.276 67.6933 0.743
188 deepface/tests/dataset/img18.jpg deepface/tests/dataset/img9.jpg No 0.7493 0.7683 1.2242 1.0774 17.7057 1.4679 0.5343 1.0337 0.2113 62.0197 0.65
189 deepface/tests/dataset/img11.jpg deepface/tests/dataset/img58.jpg No 0.7495 1.0309 1.2243 1.0359 16.9461 1.4394 0.6411 1.1324 0.2259 65.3131 0.6721
190 deepface/tests/dataset/img18.jpg deepface/tests/dataset/img42.jpg No 0.8335 0.8332 1.2911 1.0838 17.9617 1.4723 0.4051 0.9001 0.2449 66.4075 0.6999
191 deepface/tests/dataset/img18.jpg deepface/tests/dataset/img13.jpg No 0.476 0.7428 0.9757 1.1589 18.2018 1.5224 0.306 0.7823 0.1879 59.4531 0.6129
192 deepface/tests/dataset/img4.jpg deepface/tests/dataset/img32.jpg No 0.7116 0.8739 1.193 1.0402 17.6777 1.4424 0.6456 1.1363 0.2896 71.6141 0.761
193 deepface/tests/dataset/img67.jpg deepface/tests/dataset/img37.jpg No 0.4644 0.652 0.9638 0.6683 14.5099 1.1561 0.2355 0.6862 0.2475 61.9234 0.7036
194 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img7.jpg No 0.8444 0.7812 1.2666 0.9357 16.3278 1.368 0.4702 1.459 0.4919 67.9214 0.7892
195 deepface/tests/dataset/img11.jpg deepface/tests/dataset/img27.jpg No 0.6496 0.8811 1.1398 0.9364 16.0727 1.3685 0.2416 0.6951 0.2127 66.7336 0.6523
196 deepface/tests/dataset/img20.jpg deepface/tests/dataset/img47.jpg No 0.6418 0.6011 1.1329 1.0579 16.9991 1.4546 0.31 0.7874 0.1754 54.6104 0.5924
197 deepface/tests/dataset/img2.jpg deepface/tests/dataset/img44.jpg No 0.4815 0.6806 0.9814 0.7396 14.1679 1.2162 0.2009 0.6338 0.1836 57.4368 0.606
198 deepface/tests/dataset/img28.jpg deepface/tests/dataset/img24.jpg No 0.7851 0.7588 1.2531 0.9406 16.8964 1.3715 0.5353 1.0347 0.2609 60.6589 0.7224
199 deepface/tests/dataset/img67.jpg deepface/tests/dataset/img43.jpg No 0.691 0.8328 1.1756 0.9621 16.9417 1.3872 0.3176 0.797 0.3072 72.9213 0.7838
200 deepface/tests/dataset/img18.jpg deepface/tests/dataset/img51.jpg No 0.668 0.7024 1.1558 1.1051 17.8105 1.4867 0.2508 0.7083 0.1882 58.3932 0.6135
201 deepface/tests/dataset/img11.jpg deepface/tests/dataset/img24.jpg No 0.79 0.801 1.257 1.1173 18.2579 1.4949 0.3437 0.829 0.3096 74.5014 0.7869
202 deepface/tests/dataset/img67.jpg deepface/tests/dataset/img29.jpg No 0.5389 0.6762 1.0382 0.8354 16.2507 1.2926 0.1501 0.5479 0.2668 63.7773 0.7305
203 deepface/tests/dataset/img29.jpg deepface/tests/dataset/img59.jpg No 0.4237 0.6225 0.9205 0.5002 12.4131 1.0002 0.6375 1.1292 0.2637 58.2849 0.7262
204 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img24.jpg No 0.5431 0.5391 1.0422 1.1194 18.4041 1.4962 0.8286 1.2873 0.4458 74.1332 0.9442
205 deepface/tests/dataset/img35.jpg deepface/tests/dataset/img27.jpg No 0.821 0.9129 1.2814 0.964 15.9831 1.3885 0.4812 0.9811 0.3061 80.9221 0.7824
206 deepface/tests/dataset/img39.jpg deepface/tests/dataset/img67.jpg No 0.5513 0.7255 1.0501 0.9839 17.4219 1.4028 0.8181 1.2792 0.2914 66.5717 0.7634
207 deepface/tests/dataset/img39.jpg deepface/tests/dataset/img12.jpg No 0.6435 0.8102 1.1344 0.7661 15.2245 1.2378 0.7472 1.2224 0.2716 61.7006 0.737
208 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img46.jpg No 0.8116 0.7634 1.2028 1.1264 17.9427 1.5009 0.9219 1.3578 0.3511 70.3501 0.838
209 deepface/tests/dataset/img32.jpg deepface/tests/dataset/img27.jpg No 0.7197 0.9593 1.1997 0.7295 14.4944 1.2079 0.5619 1.0601 0.2725 70.5338 0.7382
210 deepface/tests/dataset/img40.jpg deepface/tests/dataset/img11.jpg No 0.7205 0.7563 1.2004 0.9367 16.3131 1.3687 0.5427 1.0418 0.186 59.4748 0.61
211 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img22.jpg No 0.5579 0.6466 1.2024 1.0076 17.2122 1.4196 0.7998 1.2648 0.392 65.4579 0.8854
212 deepface/tests/dataset/img2.jpg deepface/tests/dataset/img35.jpg No 0.8303 0.9037 1.2887 1.0988 17.1897 1.4824 0.498 0.998 0.2992 78.1653 0.7736
213 deepface/tests/dataset/img5.jpg deepface/tests/dataset/img45.jpg No 0.5247 0.6013 1.0244 0.8827 15.3713 1.3287 0.218 0.6603 0.2322 72.2019 0.6814
214 deepface/tests/dataset/img58.jpg deepface/tests/dataset/img59.jpg No 0.5937 0.9226 1.0896 0.9931 16.9142 1.4093 0.3525 0.8396 0.3095 68.0277 0.7868
215 deepface/tests/dataset/img40.jpg deepface/tests/dataset/img45.jpg No 0.772 0.6976 1.2426 1.0516 17.0626 1.4503 0.5487 1.0475 0.2628 63.7285 0.725
216 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img3.jpg No 0.6417 0.6822 1.1329 0.832 15.8921 1.29 1.0374 1.4404 0.2312 54.5718 0.68
217 deepface/tests/dataset/img40.jpg deepface/tests/dataset/img67.jpg No 0.4138 0.5942 0.9098 0.948 16.9509 1.3769 0.5121 1.012 0.2455 61.9071 0.7008
218 deepface/tests/dataset/img4.jpg deepface/tests/dataset/img50.jpg No 0.5776 0.6934 1.0748 0.816 15.3649 1.2775 0.3515 0.8385 0.2072 61.657 0.6437
219 deepface/tests/dataset/img67.jpg deepface/tests/dataset/img47.jpg No 0.5726 0.692 1.0701 0.9987 17.2907 1.4133 0.4099 0.9054 0.1723 55.0701 0.587
220 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img20.jpg No 0.684 0.6408 1.1696 0.924 16.3035 1.3594 0.2156 0.6566 0.2111 61.919 0.6498
221 deepface/tests/dataset/img13.jpg deepface/tests/dataset/img33.jpg No 0.4625 0.7042 0.9617 0.8709 15.4791 1.3198 0.5609 1.0591 0.3643 76.6864 0.8536
222 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img58.jpg No 0.5732 0.8464 1.0707 0.7511 16.6216 1.4011 0.5091 1.009 0.3653 71.3439 0.8548
223 deepface/tests/dataset/img19.jpg deepface/tests/dataset/img48.jpg No 0.8186 0.8431 1.2795 1.1082 17.769 1.4888 0.3914 0.8848 0.2363 68.307 0.6875
224 deepface/tests/dataset/img18.jpg deepface/tests/dataset/img49.jpg No 0.6614 0.7617 1.1501 0.9935 16.5922 1.4096 0.427 0.9241 0.28 73.8384 0.7483
225 deepface/tests/dataset/img10.jpg deepface/tests/dataset/img19.jpg No 0.603 0.7998 1.0982 0.9508 16.8085 1.379 0.3546 0.8422 0.2352 69.7597 0.6859
226 deepface/tests/dataset/img48.jpg deepface/tests/dataset/img17.jpg No 0.8174 0.6679 1.2786 0.922 15.8462 1.3579 0.7438 1.2196 0.2545 59.7077 0.7134
227 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img2.jpg No 0.6454 0.7751 1.1362 1.0674 17.3381 1.4611 0.1279 0.5058 0.1983 61.7554 0.6298
228 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img48.jpg No 0.7325 0.7072 1.2605 0.8198 15.0575 1.2805 0.9352 1.3676 0.3504 69.8577 0.8371
229 deepface/tests/dataset/img30.jpg deepface/tests/dataset/img44.jpg No 0.8834 0.7196 1.3292 0.8683 15.5513 1.3178 0.563 1.0611 0.363 75.7833 0.8521
230 deepface/tests/dataset/img6.jpg deepface/tests/dataset/img29.jpg No 0.7666 0.7464 1.2382 1.0057 17.0345 1.4183 0.3434 0.8287 0.2411 64.6435 0.6943
231 deepface/tests/dataset/img19.jpg deepface/tests/dataset/img26.jpg No 0.6542 0.7763 1.1439 0.9204 16.7702 1.3568 0.2292 0.677 0.262 73.7273 0.7239
232 deepface/tests/dataset/img29.jpg deepface/tests/dataset/img50.jpg No 0.6879 0.692 1.1729 1.3134 19.7708 1.6207 0.5038 1.0038 0.2577 54.3931 0.7179
233 deepface/tests/dataset/img35.jpg deepface/tests/dataset/img49.jpg No 0.8339 0.8186 1.2915 1.2099 17.7753 1.5555 0.5957 1.0915 0.3315 82.3474 0.8142
234 deepface/tests/dataset/img22.jpg deepface/tests/dataset/img28.jpg No 0.6313 0.7037 1.1236 0.8177 15.5314 1.2789 0.2031 0.6373 0.2271 55.2529 0.6739
235 deepface/tests/dataset/img21.jpg deepface/tests/dataset/img16.jpg No 0.5678 0.6114 1.0657 0.6376 13.417 1.1293 0.4173 0.9136 0.2696 65.0241 0.7343
236 deepface/tests/dataset/img21.jpg deepface/tests/dataset/img9.jpg No 0.7653 0.7211 1.2372 1.0502 17.1485 1.4493 0.5726 1.0701 0.3059 68.2225 0.7822
237 deepface/tests/dataset/img2.jpg deepface/tests/dataset/img22.jpg No 0.6866 0.7895 1.1718 1.0005 16.6324 1.4145 0.1955 0.6253 0.3061 69.9331 0.7824
238 deepface/tests/dataset/img12.jpg deepface/tests/dataset/img29.jpg No 0.78 0.8337 1.249 1.1016 18.4797 1.4843 0.3404 0.8251 0.3293 67.3331 0.8115
239 deepface/tests/dataset/img6.jpg deepface/tests/dataset/img37.jpg No 0.7532 0.7788 1.2273 1.0976 17.7567 1.4816 0.2647 0.7275 0.331 74.5559 0.8137
240 deepface/tests/dataset/img6.jpg deepface/tests/dataset/img16.jpg No 0.7516 0.7581 1.226 1.0332 16.9971 1.4375 0.3815 0.8735 0.2859 72.0572 0.7561
241 deepface/tests/dataset/img38.jpg deepface/tests/dataset/img33.jpg No 0.4588 0.5085 0.958 1.2465 19.0695 1.5789 0.657 1.1463 0.3722 76.6896 0.8628
242 deepface/tests/dataset/img35.jpg deepface/tests/dataset/img32.jpg No 0.2651 0.5459 0.7282 0.5427 12.6429 1.0418 0.409 0.9045 0.2546 69.5802 0.7136
243 deepface/tests/dataset/img54.jpg deepface/tests/dataset/img48.jpg No 0.4528 0.678 0.9516 0.8385 15.166 1.295 0.2238 0.669 0.218 56.5099 0.6603
244 deepface/tests/dataset/img1.jpg deepface/tests/dataset/img23.jpg No 0.5305 0.5523 1.03 0.7766 14.6983 1.2463 0.1967 0.6272 0.2144 53.347 0.6549
245 deepface/tests/dataset/img39.jpg deepface/tests/dataset/img33.jpg No 0.5132 0.6067 1.0131 1.1197 17.8246 1.4965 0.2379 0.6898 0.2301 55.7862 0.6783
246 deepface/tests/dataset/img3.jpg deepface/tests/dataset/img48.jpg No 0.4123 0.5581 0.908 0.7879 14.8183 1.2553 0.2125 0.6519 0.2177 56.6639 0.6598
247 deepface/tests/dataset/img43.jpg deepface/tests/dataset/img25.jpg No 0.7819 0.7991 1.2505 0.9007 15.601 1.3422 0.4363 0.9341 0.3555 81.219 0.8432
248 deepface/tests/dataset/img14.jpg deepface/tests/dataset/img9.jpg No 0.7257 0.7829 1.2047 0.8679 15.1696 1.3175 0.5752 1.0725 0.2493 67.0315 0.7061
249 deepface/tests/dataset/img6.jpg deepface/tests/dataset/img47.jpg No 0.5391 0.6276 1.0383 0.7885 14.6406 1.2558 0.1013 0.4501 0.1756 57.5202 0.5926
250 deepface/tests/dataset/img18.jpg deepface/tests/dataset/img28.jpg No 0.8293 0.8828 1.2878 1.1151 18.3899 1.4934 0.497 0.997 0.2323 64.8263 0.6816
251 deepface/tests/dataset/img7.jpg deepface/tests/dataset/img57.jpg No 0.7468 0.815 1.2221 1.1241 17.3821 1.4994 0.6916 1.1761 0.2244 68.912 0.6699
252 deepface/tests/dataset/img48.jpg deepface/tests/dataset/img26.jpg No 0.5877 0.646 1.0842 0.9734 16.2582 1.3953 0.3102 0.7876 0.2059 60.3497 0.6417
253 deepface/tests/dataset/img19.jpg deepface/tests/dataset/img34.jpg No 0.2957 0.5193 0.7691 0.5281 12.9854 1.0277 0.5987 1.0943 0.2628 71.5029 0.725
254 deepface/tests/dataset/img41.jpg deepface/tests/dataset/img37.jpg No 0.4337 0.5351 0.9314 0.8568 16.0356 1.309 0.684 1.1696 0.3654 65.8114 0.8548
255 deepface/tests/dataset/img1.jpg deepface/tests/dataset/img32.jpg No 0.6985 0.8184 1.182 0.9682 16.9113 1.3915 0.5654 1.0634 0.3173 65.953 0.7967
256 deepface/tests/dataset/img12.jpg deepface/tests/dataset/img57.jpg No 0.6424 0.8305 1.1335 0.8361 15.6851 1.2931 0.5927 1.0888 0.2943 77.8234 0.7672
257 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img5.jpg No 0.662 0.6012 1.1507 0.9931 16.5792 1.4093 0.137 0.5234 0.2182 70.8567 0.6606
258 deepface/tests/dataset/img47.jpg deepface/tests/dataset/img61.jpg No 0.6896 0.603 1.1744 0.98 16.5069 1.4 0.5598 1.0581 0.187 57.8252 0.6115
259 deepface/tests/dataset/img33.jpg deepface/tests/dataset/img49.jpg No 0.8253 0.7753 1.2848 1.0329 16.5833 1.4373 0.6695 1.1572 0.1992 58.9069 0.6313
260 deepface/tests/dataset/img54.jpg deepface/tests/dataset/img1.jpg No 0.5922 0.7522 1.0883 0.9398 16.3902 1.371 0.2515 0.7092 0.2836 62.9648 0.7532
261 deepface/tests/dataset/img29.jpg deepface/tests/dataset/img25.jpg No 0.5458 0.5846 1.0448 0.9074 16.167 1.3472 0.622 1.1153 0.2743 68.4542 0.7407
262 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img67.jpg No 0.6649 0.7541 1.1531 1.1444 18.95 1.5129 0.3094 0.7866 0.2195 63.9684 0.6625
263 deepface/tests/dataset/img53.jpg deepface/tests/dataset/img30.jpg No 0.9492 0.7325 1.3778 0.9241 16.5521 1.3595 0.5533 1.052 0.2955 62.208 0.7687
264 deepface/tests/dataset/img6.jpg deepface/tests/dataset/img25.jpg No 0.8285 0.8131 1.2872 0.8051 14.8877 1.2689 0.4267 0.9238 0.3226 79.803 0.8032
265 deepface/tests/dataset/img4.jpg deepface/tests/dataset/img43.jpg No 0.6285 0.7443 1.1211 0.838 15.1848 1.2946 0.212 0.6511 0.2685 71.4046 0.7329
266 deepface/tests/dataset/img39.jpg deepface/tests/dataset/img27.jpg No 0.7176 0.8685 1.198 0.8199 14.9449 1.2805 0.8286 1.2873 0.285 71.6832 0.755
267 deepface/tests/dataset/img36.jpg deepface/tests/dataset/img23.jpg No 0.6223 0.5866 1.1156 1.0693 17.5747 1.4624 0.4266 0.9237 0.32 58.9248 0.7999
268 deepface/tests/dataset/img4.jpg deepface/tests/dataset/img45.jpg No 0.6021 0.7106 1.0973 0.9407 16.2744 1.3716 0.2162 0.6576 0.2166 64.3341 0.6582
269 deepface/tests/dataset/img38.jpg deepface/tests/dataset/img19.jpg No 0.356 0.5607 0.8437 0.9843 17.485 1.403 0.1858 0.6097 0.2867 75.4126 0.7572
270 deepface/tests/dataset/img55.jpg deepface/tests/dataset/img17.jpg No 0.7135 0.6076 1.1946 0.944 16.691 1.374 0.7449 1.2205 0.2951 70.5113 0.7682
271 deepface/tests/dataset/img9.jpg deepface/tests/dataset/img59.jpg No 0.8449 0.8766 1.2999 1.1333 18.3376 1.5055 0.8844 1.33 0.3088 67.5783 0.7859
272 deepface/tests/dataset/img58.jpg deepface/tests/dataset/img49.jpg No 0.5999 0.8901 1.0953 0.9147 15.3098 1.3526 0.4925 0.9925 0.2266 63.0835 0.6733
273 deepface/tests/dataset/img56.jpg deepface/tests/dataset/img59.jpg No 0.7694 0.9166 1.2405 1.0062 17.304 1.4186 0.8703 1.3193 0.2966 70.5446 0.7702
274 deepface/tests/dataset/img4.jpg deepface/tests/dataset/img8.jpg No 0.5753 0.6478 1.0727 0.842 15.2912 1.2977 0.3808 0.8727 0.1878 59.2 0.6129
275 deepface/tests/dataset/img16.jpg deepface/tests/dataset/img25.jpg No 0.5927 0.6271 1.0887 0.9862 16.5907 1.4044 0.286 0.7563 0.1702 56.0079 0.5835
276 deepface/tests/dataset/img50.jpg deepface/tests/dataset/img45.jpg No 0.5692 0.6912 1.067 0.8581 15.6737 1.3101 0.3278 0.8097 0.2383 60.6426 0.6903
277 deepface/tests/dataset/img38.jpg deepface/tests/dataset/img31.jpg No 0.4739 0.4751 0.9736 1.1148 18.1862 1.4932 0.6661 1.1542 0.331 70.516 0.8136
278 deepface/tests/dataset/img13.jpg deepface/tests/dataset/img51.jpg No 0.5639 0.7621 1.062 0.8047 14.7361 1.2686 0.4 0.8945 0.2308 60.6072 0.6795
279 deepface/tests/dataset/img1.jpg deepface/tests/dataset/img33.jpg No 0.7127 0.6418 1.1939 0.9433 16.1933 1.3736 0.6509 1.1409 0.2684 62.7672 0.7326
280 deepface/tests/dataset/img53.jpg deepface/tests/dataset/img16.jpg No 0.8344 0.7073 1.2918 0.9023 16.3918 1.3433 0.4153 0.9114 0.3045 65.6394 0.7803
281 deepface/tests/dataset/img53.jpg deepface/tests/dataset/img23.jpg No 0.4644 0.5199 0.9637 0.7267 14.6939 1.2056 0.1784 0.5973 0.2774 55.6833 0.7448
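
Each row above pairs two dataset images with the expected same-person flag (Yes/No), followed by distance scores under several model and metric combinations; the header row defining the exact column order sits earlier in this diff. As a minimal sketch (not this repo's test harness), one such comparison could be reproduced with the public DeepFace.verify API as below; the choice of VGG-Face with cosine distance is an assumption for illustration, not a statement of which column it corresponds to.

# Minimal sketch: reproduce one pairwise comparison from the table above.
# The model/metric behind each numeric column is an assumption here; the
# file's header (earlier in this diff) defines the real column order.
from deepface import DeepFace

img1 = "deepface/tests/dataset/img53.jpg"
img2 = "deepface/tests/dataset/img23.jpg"

# e.g. VGG-Face with cosine distance; the other columns would come from
# other model_name / distance_metric pairs.
result = DeepFace.verify(
    img1_path=img1,
    img2_path=img2,
    model_name="VGG-Face",
    distance_metric="cosine",
)

# 'distance' is the raw score of the kind logged in the table; 'verified'
# is the Yes/No decision, i.e. distance at or below the model- and
# metric-specific threshold.
print(result["distance"], result["verified"])
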

Binary file not shown (new file, 2.2 MiB).
Binary file not shown (new file, 135 KiB).
Binary file not shown (new file, 55 KiB).
Binary file not shown (new file, 19 KiB).
Binary file not shown (new file, 511 KiB).
Binary file not shown (new file, 172 KiB).
Binary file not shown (new file, 158 KiB).
Binary file not shown (new file, 36 KiB).
Binary file not shown (new file, 50 KiB).
Binary file not shown (new file, 148 KiB).
Binary file not shown (new file, 396 KiB).
Binary file not shown (new file, 339 KiB).
Binary file not shown (new file, 222 KiB).
Binary file not shown (new file, 53 KiB).
Binary file not shown (new file, 72 KiB).
Binary file not shown (new file, 536 KiB).
Binary file not shown (new file, 261 KiB).
Binary file not shown (new file, 21 KiB).
Binary file not shown (new file, 2.4 MiB).
Binary file not shown (new file, 152 KiB).
Binary file not shown (new file, 154 KiB).
Binary file not shown (new file, 75 KiB).

Some files were not shown because too many files have changed in this diff.