add deepface

carl 2023-05-23 23:54:12 -03:00
parent 946c185225
commit 09d15f349e
580 changed files with 73128 additions and 435 deletions

downloads/README (new file)

@@ -0,0 +1 @@

downloads/drivers.deb (new binary file; contents not shown)

.editorconfig (new file)

@@ -0,0 +1,21 @@
# http://editorconfig.org
root = true
[*]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
end_of_line = lf
[*.bat]
indent_style = tab
end_of_line = crlf
[LICENSE]
insert_final_newline = false
[Makefile]
indent_style = tab

ISSUE_TEMPLATE.md (new file)

@@ -0,0 +1,16 @@
* face_recognition version:
* Python version:
* Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```

GitHub Actions CI workflow (new file under .github/workflows/; filename not shown)

@@ -0,0 +1,24 @@
name: CI

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.8, 3.9, "3.10"]

    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install setuptools wheel
          pip install .
          pip install tox-gh-actions
      - name: Check package setup
        run: python setup.py check
      - name: Test
        run: tox

.gitignore (new file)

@@ -0,0 +1,65 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
.DS_Store
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# pyenv python configuration file
.python-version
.idea/

AUTHORS.rst (new file)

@@ -0,0 +1,16 @@
=======
Authors
=======
* Adam Geitgey <ageitgey@gmail.com>
Thanks
------
* Many, many thanks to Davis King (@nulhom)
  for creating dlib and for providing the trained facial feature detection and face encoding models
  used in this library.
* Thanks to everyone who works on all the awesome Python data science libraries like numpy, scipy, scikit-image,
  pillow, etc, etc that make this kind of stuff so easy and fun in Python.
* Thanks to Cookiecutter and the audreyr/cookiecutter-pypackage project template
  for making Python project packaging way more tolerable.

CONTRIBUTING.rst (new file)

@@ -0,0 +1,95 @@
.. highlight:: shell
============
Contributing
============
Contributions are welcome, and they are greatly appreciated! Every
little bit helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions
----------------------
Report Bugs
~~~~~~~~~~~
Report bugs at https://github.com/ageitgey/face_recognition/issues.
If you are reporting a bug, please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
Submit Feedback
~~~~~~~~~~~~~~~
The best way to send feedback is to file an issue at https://github.com/ageitgey/face_recognition/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
are welcome :)
Get Started!
------------
Ready to contribute? Here's how to set up `face_recognition` for local development.
1. Fork the `face_recognition` repo on GitHub.
2. Clone your fork locally::

    $ git clone git@github.com:your_name_here/face_recognition.git

3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development::

    $ mkvirtualenv face_recognition
    $ cd face_recognition/
    $ python setup.py develop

4. Create a branch for local development::

    $ git checkout -b name-of-your-bugfix-or-feature

   Now you can make your changes locally.

5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox::

    $ flake8 face_recognition tests
    $ python setup.py test or py.test
    $ tox

   To get flake8 and tox, just pip install them into your virtualenv.

6. Commit your changes and push your branch to GitHub::

    $ git add .
    $ git commit -m "Your detailed description of your changes."
    $ git push origin name-of-your-bugfix-or-feature
7. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 2.7, 3.5, 3.6, 3.7 and 3.8, and for PyPy. Check
https://travis-ci.org/ageitgey/face_recognition/pull_requests
and make sure that the tests pass for all supported Python versions.
Tips
----
To run a subset of tests::

    $ python -m unittest tests.test_face_recognition

Dockerfile (new file)

@@ -0,0 +1,52 @@
# This is a sample Dockerfile you can modify to deploy your own app based on face_recognition
FROM python:3.10.3-slim-bullseye
RUN apt-get -y update
RUN apt-get install -y --fix-missing \
build-essential \
cmake \
gfortran \
git \
wget \
curl \
graphicsmagick \
libgraphicsmagick1-dev \
libatlas-base-dev \
libavcodec-dev \
libavformat-dev \
libgtk2.0-dev \
libjpeg-dev \
liblapack-dev \
libswscale-dev \
pkg-config \
python3-dev \
python3-numpy \
software-properties-common \
zip \
&& apt-get clean && rm -rf /tmp/* /var/tmp/*
RUN cd ~ && \
mkdir -p dlib && \
git clone -b 'v19.9' --single-branch https://github.com/davisking/dlib.git dlib/ && \
cd dlib/ && \
python3 setup.py install --yes USE_AVX_INSTRUCTIONS
# The rest of this file just runs an example script.
# If you wanted to use this Dockerfile to run your own app instead, maybe you would do this:
# COPY . /root/your_app_or_whatever
# RUN cd /root/your_app_or_whatever && \
# pip3 install -r requirements.txt
# RUN whatever_command_you_run_to_start_your_app
COPY . /root/face_recognition
RUN cd /root/face_recognition && \
pip3 install -r requirements.txt && \
python3 setup.py install
# Add pip3 install opencv-python==4.1.2.30 if you want to run the live webcam examples
CMD cd /root/face_recognition/examples && \
python3 recognize_faces_in_pictures.py

Dockerfile.gpu (new file)

@@ -0,0 +1,40 @@
# This is a sample Dockerfile you can modify to deploy your own app based on face_recognition on the GPU
# In order to run Docker in the GPU you will need to install Nvidia-Docker: https://github.com/NVIDIA/nvidia-docker
FROM nvidia/cuda:9.0-cudnn7-devel
# Install face recognition dependencies
RUN apt update -y; apt install -y \
git \
cmake \
libsm6 \
libxext6 \
libxrender-dev \
python3 \
python3-pip
RUN pip3 install scikit-build
# Install compilers
RUN apt install -y software-properties-common
RUN add-apt-repository ppa:ubuntu-toolchain-r/test
RUN apt update -y; apt install -y gcc-6 g++-6
RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 50
RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-6 50
# Install dlib
RUN git clone -b 'v19.16' --single-branch https://github.com/davisking/dlib.git
RUN mkdir -p /dlib/build
RUN cmake -H/dlib -B/dlib/build -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
RUN cmake --build /dlib/build
RUN cd /dlib; python3 /dlib/setup.py install
# Install the face recognition package
RUN pip3 install face_recognition

HISTORY.rst (new file)

@@ -0,0 +1,138 @@
History
=======
1.4.0 (2020-09-26)
------------------
* Dropping support for Python 2.x
* Added an --upsample parameter to the face_recognition command-line tool
1.3.0 (2020-02-20)
------------------
* Drop support for Python 3.4 and add 3.8
* Blink detection example
1.2.3 (2018-08-21)
------------------
* You can now pass model="small" to face_landmarks() to use the 5-point face model instead of the 68-point model.
* Now officially supporting Python 3.7
* New example of using this library in a Jupyter Notebook
1.2.2 (2018-04-02)
------------------
* Added the face_detection CLI command
* Removed dependencies on scipy to make installation easier
* Cleaned up KNN example and fixed a bug with drawing fonts to label detected faces in the demo
1.2.1 (2018-02-01)
------------------
* Fixed version numbering inside of module code.
1.2.0 (2018-02-01)
------------------
* Fixed a bug where batch size parameter didn't work correctly when doing batch face detections on GPU.
* Updated OpenCV examples to do proper BGR -> RGB conversion
* Updated webcam examples to avoid common mistakes and reduce support questions
* Added a KNN classification example
* Added an example of automatically blurring faces in images or videos
* Updated Dockerfile example to use dlib v19.9 which removes the boost dependency.
1.1.0 (2017-09-23)
------------------
* Will use dlib's 5-point face pose estimator when possible for speed (instead of the 68-point face pose estimator)
* dlib v19.7 is now the minimum required version
* face_recognition_models v0.3.0 is now the minimum required version
1.0.0 (2017-08-29)
------------------
* Added support for dlib's CNN face detection model via model="cnn" parameter on the face detection call
* Added support for GPU batched face detections using dlib's CNN face detector model
* Added find_faces_in_picture_cnn.py to examples
* Added find_faces_in_batches.py to examples
* Added face_rec_from_video_file.py to examples
* dlib v19.5 is now the minimum required version
* face_recognition_models v0.2.0 is now the minimum required version
0.2.2 (2017-07-07)
------------------
* Added --show-distance to cli
* Fixed a bug where --tolerance was ignored in cli if testing a single image
* Added benchmark.py to examples
0.2.1 (2017-07-03)
------------------
* Added --tolerance to cli
0.2.0 (2017-06-03)
------------------
* The CLI can now take advantage of multiple CPUs. Just pass in the --cpus X parameter where X is the number of CPUs to use.
* Added face_distance.py example
* Improved CLI tests to actually test the CLI functionality
* Updated facerec_on_raspberry_pi.py to capture in rgb (not bgr) format.
0.1.14 (2017-04-22)
-------------------
* Fixed a ValueError crash when using the CLI on Python 2.7
0.1.13 (2017-04-20)
-------------------
* Raspberry Pi support.
0.1.12 (2017-04-13)
-------------------
* Fixed: Face landmarks wasn't returning all chin points.
0.1.11 (2017-03-30)
-------------------
* Fixed a minor bug in the command-line interface.
0.1.10 (2017-03-21)
-------------------
* Minor performance improvements with face comparisons.
* Test updates.
0.1.9 (2017-03-16)
------------------
* Fix minimum scipy version required.
0.1.8 (2017-03-16)
------------------
* Fix missing Pillow dependency.
0.1.7 (2017-03-13)
------------------
* First working release.

LICENSE (new file)

@@ -0,0 +1,11 @@
MIT License
Copyright (c) 2021, Adam Geitgey
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

MANIFEST.in (new file)

@@ -0,0 +1,13 @@
include AUTHORS.rst
include CONTRIBUTING.rst
include HISTORY.rst
include LICENSE
include README.rst
recursive-include tests *
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
recursive-include docs *.rst conf.py Makefile make.bat *.jpg *.png *.gif

Makefile (new file)

@@ -0,0 +1,88 @@
.PHONY: clean clean-test clean-pyc clean-build docs help
.DEFAULT_GOAL := help

define BROWSER_PYSCRIPT
import os, webbrowser, sys

try:
    from urllib import pathname2url
except:
    from urllib.request import pathname2url

webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1])))
endef
export BROWSER_PYSCRIPT

define PRINT_HELP_PYSCRIPT
import re, sys

for line in sys.stdin:
    match = re.match(r'^([a-zA-Z_-]+):.*?## (.*)$$', line)
    if match:
        target, help = match.groups()
        print("%-20s %s" % (target, help))
endef
export PRINT_HELP_PYSCRIPT

BROWSER := python3 -c "$$BROWSER_PYSCRIPT"

help:
	@python3 -c "$$PRINT_HELP_PYSCRIPT" < $(MAKEFILE_LIST)

clean: clean-build clean-pyc clean-test ## remove all build, test, coverage and Python artifacts

clean-build: ## remove build artifacts
	rm -fr build/
	rm -fr dist/
	rm -fr .eggs/
	find . -name '*.egg-info' -exec rm -fr {} +
	find . -name '*.egg' -exec rm -f {} +

clean-pyc: ## remove Python file artifacts
	find . -name '*.pyc' -exec rm -f {} +
	find . -name '*.pyo' -exec rm -f {} +
	find . -name '*~' -exec rm -f {} +
	find . -name '__pycache__' -exec rm -fr {} +

clean-test: ## remove test and coverage artifacts
	rm -fr .tox/
	rm -f .coverage
	rm -fr htmlcov/

lint: ## check style with flake8
	flake8 face_recognition tests

test: ## run tests quickly with the default Python
	python3 setup.py test

test-all: ## run tests on every Python version with tox
	tox

coverage: ## check code coverage quickly with the default Python
	coverage run --source face_recognition setup.py test
	coverage report -m
	coverage html
	$(BROWSER) htmlcov/index.html

docs: ## generate Sphinx HTML documentation, including API docs
	sphinx-apidoc -o docs/ face_recognition
	$(MAKE) -C docs clean
	$(MAKE) -C docs html
	$(BROWSER) docs/_build/html/index.html

servedocs: docs ## compile the docs watching for changes
	watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .

release: clean ## package and upload a release
	python3 setup.py sdist
	python3 setup.py bdist_wheel
	twine upload dist/*

dist: clean ## builds source and wheel package
	python3 setup.py sdist
	python3 setup.py bdist_wheel
	ls -l dist

install: clean ## install the package to the active Python's site-packages
	python3 setup.py install

README.md (new file)

@@ -0,0 +1,415 @@
# Face Recognition
_You can also read a translated version of this file [in Chinese 简体中文版](https://github.com/ageitgey/face_recognition/blob/master/README_Simplified_Chinese.md) or [in Korean 한국어](https://github.com/ageitgey/face_recognition/blob/master/README_Korean.md) or [in Japanese 日本語](https://github.com/m-i-k-i/face_recognition/blob/master/README_Japanese.md)._
Recognize and manipulate faces from Python or from the command line with
the world's simplest face recognition library.
Built using [dlib](http://dlib.net/)'s state-of-the-art face recognition
built with deep learning. The model has an accuracy of 99.38% on the
[Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/) benchmark.
This also provides a simple `face_recognition` command line tool that lets
you do face recognition on a folder of images from the command line!
[![PyPI](https://img.shields.io/pypi/v/face_recognition.svg)](https://pypi.python.org/pypi/face_recognition)
[![Build Status](https://github.com/ageitgey/face_recognition/workflows/CI/badge.svg?branch=master&event=push)](https://github.com/ageitgey/face_recognition/actions?query=workflow%3ACI)
[![Documentation Status](https://readthedocs.org/projects/face-recognition/badge/?version=latest)](http://face-recognition.readthedocs.io/en/latest/?badge=latest)
## Features
#### Find faces in pictures
Find all the faces that appear in a picture:
![](https://cloud.githubusercontent.com/assets/896692/23625227/42c65360-025d-11e7-94ea-b12f28cb34b4.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)
```
#### Find and manipulate facial features in pictures
Get the locations and outlines of each person's eyes, nose, mouth and chin.
![](https://cloud.githubusercontent.com/assets/896692/23625282/7f2d79dc-025d-11e7-8728-d8924596f8fa.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
```
Finding facial features is super useful for lots of important stuff. But you can also use it for really stupid stuff
like applying [digital make-up](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py) (think 'Meitu'):
![](https://cloud.githubusercontent.com/assets/896692/23625283/80638760-025d-11e7-80a2-1d2779f7ccab.png)
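If you want to play with those landmark points yourself, here's a minimal sketch along the lines of the linked digital make-up example (it assumes a `your_file.jpg` on disk; Pillow is already a dependency of this library):
```python
import face_recognition
from PIL import Image, ImageDraw

image = face_recognition.load_image_file("your_file.jpg")
pil_image = Image.fromarray(image)
draw = ImageDraw.Draw(pil_image)

for face_landmarks in face_recognition.face_landmarks(image):
    # Each value is a list of (x, y) points outlining one feature,
    # e.g. 'chin', 'left_eye', 'nose_bridge', 'top_lip', ...
    for feature, points in face_landmarks.items():
        draw.line(points, width=3)

pil_image.show()
```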
#### Identify faces in pictures
Recognize who appears in each photo.
![](https://cloud.githubusercontent.com/assets/896692/23625229/45e049b6-025d-11e7-89cc-8a71cf89e713.png)
```python
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
```
You can even use this library with other Python libraries to do real-time face recognition:
![](https://cloud.githubusercontent.com/assets/896692/24430398/36f0e3f0-13cb-11e7-8258-4d0c9ce1e419.gif)
See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py) for the code.
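To give a rough idea of how the linked example works, here's a heavily trimmed sketch (it assumes OpenCV is installed via `pip install opencv-python`, and the reference photo `obama.jpg` and its label are stand-ins for your own; the linked example is the complete, tested version):
```python
import cv2  # pip install opencv-python
import face_recognition

# Hypothetical reference photo -- swap in your own
known_image = face_recognition.load_image_file("obama.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

video = cv2.VideoCapture(0)
while True:
    ret, frame = video.read()
    if not ret:
        break

    # OpenCV captures frames in BGR order; face_recognition expects RGB
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    locations = face_recognition.face_locations(rgb_frame)
    encodings = face_recognition.face_encodings(rgb_frame, locations)

    for (top, right, bottom, left), encoding in zip(locations, encodings):
        match = face_recognition.compare_faces([known_encoding], encoding)[0]
        name = "Barack Obama" if match else "Unknown"
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.putText(frame, name, (left, top - 6),
                    cv2.FONT_HERSHEY_DUPLEX, 0.6, (255, 255, 255), 1)

    cv2.imshow("Video", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()
```
Shrinking each frame (the linked "faster" example processes frames at quarter size) makes this run much more smoothly.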
## Online Demos
User-contributed shared Jupyter notebook demo (not officially supported): [![Deepnote](https://beta.deepnote.org/buttons/try-in-a-jupyter-notebook.svg)](https://beta.deepnote.org/launch?template=face_recognition)
## Installation
### Requirements
* Python 3.3+ or Python 2.7
* macOS or Linux (Windows not officially supported, but might work)
### Installation Options:
#### Installing on Mac or Linux
First, make sure you have dlib already installed with Python bindings:
* [How to install dlib from source on macOS or Ubuntu](https://gist.github.com/ageitgey/629d75c1baac34dfa5ca2a1928a7aeaf)
Then, make sure you have cmake installed:
```brew install cmake```
Finally, install this module from pypi using `pip3` (or `pip2` for Python 2):
```bash
pip3 install face_recognition
```
Alternatively, you can try this library with [Docker](https://www.docker.com/), see [this section](#deployment).
If you are having trouble with installation, you can also try out a
[pre-configured VM](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b).
#### Installing on an Nvidia Jetson Nano board
* [Jetson Nano installation instructions](https://medium.com/@ageitgey/build-a-hardware-based-face-recognition-system-for-150-with-the-nvidia-jetson-nano-and-python-a25cb8c891fd)
* Please follow the instructions in the article carefully. There is currently a bug in the CUDA libraries on the Jetson Nano that will cause this library to fail silently if you don't follow the instructions in the article to comment out a line in dlib and recompile it.
#### Installing on Raspberry Pi 2+
* [Raspberry Pi 2+ installation instructions](https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65)
#### Installing on FreeBSD
```bash
pkg install graphics/py-face_recognition
```
#### Installing on Windows
While Windows isn't officially supported, helpful users have posted instructions on how to install this library:
* [@masoudr's Windows 10 installation guide (dlib + face_recognition)](https://github.com/ageitgey/face_recognition/issues/175#issue-257710508)
#### Installing a pre-configured Virtual Machine image
* [Download the pre-configured VM image](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b) (for VMware Player or VirtualBox).
## Usage
### Command-Line Interface
When you install `face_recognition`, you get two simple command-line
programs:
* `face_recognition` - Recognize faces in a photograph or folder full of
photographs.
* `face_detection` - Find faces in a photograph or folder full of photographs.
#### `face_recognition` command line tool
The `face_recognition` command lets you recognize faces in a photograph or
folder full of photographs.
First, you need to provide a folder with one picture of each person you
already know. There should be one image file for each person with the
files named according to who is in the picture:
![known](https://cloud.githubusercontent.com/assets/896692/23582466/8324810e-00df-11e7-82cf-41515eba704d.png)
Next, you need a second folder with the files you want to identify:
![unknown](https://cloud.githubusercontent.com/assets/896692/23582465/81f422f8-00df-11e7-8b0d-75364f641f58.png)
Then you simply run the command `face_recognition`, passing in
the folder of known people and the folder (or single image) with unknown
people and it tells you who is in each image:
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
There's one line in the output for each face. The data is comma-separated
with the filename and the name of the person found.
An `unknown_person` is a face in the image that didn't match anyone in
your folder of known people.
#### `face_detection` command line tool
The `face_detection` command lets you find the location (pixel coordinates)
of any faces in an image.
Just run the command `face_detection`, passing in a folder of images
to check (or a single image):
```bash
$ face_detection ./folder_with_pictures/
examples/image1.jpg,65,215,169,112
examples/image2.jpg,62,394,211,244
examples/image2.jpg,95,941,244,792
```
It prints one line for each face that was detected. The coordinates
reported are the top, right, bottom and left coordinates of the face (in pixels).
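The Python API returns face locations in that same (top, right, bottom, left) order, so the coordinates are easy to use directly. A minimal sketch that crops each detected face out of a picture (assuming a `my_picture.jpg` on disk; Pillow is already a dependency of this library):
```python
import face_recognition
from PIL import Image

image = face_recognition.load_image_file("my_picture.jpg")  # numpy array (RGB)

for i, (top, right, bottom, left) in enumerate(face_recognition.face_locations(image)):
    # Slice the face out of the image array and save it as its own file
    face_image = image[top:bottom, left:right]
    Image.fromarray(face_image).save("face_{}.png".format(i))
```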
##### Adjusting Tolerance / Sensitivity
If you are getting multiple matches for the same person, it might be that
the people in your photos look very similar and a lower tolerance value
is needed to make face comparisons more strict.
You can do that with the `--tolerance` parameter. The default tolerance
value is 0.6 and lower numbers make face comparisons more strict:
```bash
$ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
If you want to see the face distance calculated for each match in order
to adjust the tolerance setting, you can use `--show-distance true`:
```bash
$ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None
```
##### More Examples
If you simply want to know the names of the people in each photograph but don't
care about file names, you could do this:
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2
Barack Obama
unknown_person
```
##### Speeding up Face Recognition
Face recognition can be done in parallel if you have a computer with
multiple CPU cores. For example, if your system has 4 CPU cores, you can
process about 4 times as many images in the same amount of time by using
all your CPU cores in parallel.
If you are using Python 3.4 or newer, pass in a `--cpus <number_of_cpu_cores_to_use>` parameter:
```bash
$ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/
```
You can also pass in `--cpus -1` to use all CPU cores in your system.
#### Python Module
You can import the `face_recognition` module and then easily manipulate
faces with just a couple of lines of code. It's super easy!
API Docs: [https://face-recognition.readthedocs.io](https://face-recognition.readthedocs.io/en/latest/face_recognition.html).
##### Automatically find all the faces in an image
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image)
# face_locations is now an array listing the co-ordinates of each face!
```
See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py)
to try it out.
You can also opt-in to a somewhat more accurate deep-learning-based face detection model.
Note: GPU acceleration (via NVidia's CUDA library) is required for good
performance with this model. You'll also want to enable CUDA support
when compiling `dlib`.
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image, model="cnn")
# face_locations is now an array listing the co-ordinates of each face!
```
See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py)
to try it out.
If you have a lot of images and a GPU, you can also
[find faces in batches](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py).
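The batched API behind that example is `face_recognition.batch_face_locations`, which runs the CNN detector over a list of images at once. A minimal sketch, assuming dlib was compiled with CUDA support and the listed frame files exist:
```python
import face_recognition

# Hypothetical frame filenames -- in practice these often come from a video
frames = [face_recognition.load_image_file(name)
          for name in ["frame_000.jpg", "frame_001.jpg", "frame_002.jpg"]]

# Detect faces in all frames at once; batch_size controls GPU memory use
batches = face_recognition.batch_face_locations(frames, number_of_times_to_upsample=0, batch_size=128)

for frame_number, face_locations in enumerate(batches):
    print("Frame {}: found {} face(s)".format(frame_number, len(face_locations)))
```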
##### Automatically locate the facial features of a person in an image
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
# face_landmarks_list is now an array with the locations of each facial feature in each face.
# face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye.
```
See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py)
to try it out.
##### Recognize faces in images and identify who they are
```python
import face_recognition
picture_of_me = face_recognition.load_image_file("me.jpg")
my_face_encoding = face_recognition.face_encodings(picture_of_me)[0]
# my_face_encoding now contains a universal 'encoding' of my facial features that can be compared to any other picture of a face!
unknown_picture = face_recognition.load_image_file("unknown.jpg")
unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0]
# Now we can see the two face encodings are of the same person with `compare_faces`!
results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding)
if results[0]:
    print("It's a picture of me!")
else:
    print("It's not a picture of me!")
```
See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py)
to try it out.
## Python Code Examples
All the examples are available [here](https://github.com/ageitgey/face_recognition/tree/master/examples).
#### Face Detection
* [Find faces in a photograph](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py)
* [Find faces in a photograph (using deep learning)](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py)
* [Find faces in batches of images w/ GPU (using deep learning)](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py)
* [Blur all the faces in a live video using your webcam (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/blur_faces_on_webcam.py)
#### Facial Features
* [Identify specific facial features in a photograph](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py)
* [Apply (horribly ugly) digital make-up](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py)
#### Facial Recognition
* [Find and recognize unknown faces in a photograph based on photographs of known people](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py)
* [Identify and draw boxes around each person in a photo](https://github.com/ageitgey/face_recognition/blob/master/examples/identify_and_draw_boxes_on_faces.py)
* [Compare faces by numeric face distance instead of only True/False matches](https://github.com/ageitgey/face_recognition/blob/master/examples/face_distance.py)
* [Recognize faces in live video using your webcam - Simple / Slower Version (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam.py)
* [Recognize faces in live video using your webcam - Faster Version (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py)
* [Recognize faces in a video file and write out new video file (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_video_file.py)
* [Recognize faces on a Raspberry Pi w/ camera](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_on_raspberry_pi.py)
* [Run a web service to recognize faces via HTTP (Requires Flask to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/web_service_example.py)
* [Recognize faces with a K-nearest neighbors classifier](https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_knn.py)
* [Train multiple images per person then recognize faces using a SVM](https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_svm.py)
## Creating a Standalone Executable
If you want to create a standalone executable that can run without the need to install `python` or `face_recognition`, you can use [PyInstaller](https://github.com/pyinstaller/pyinstaller). However, it requires some custom configuration to work with this library. See [this issue](https://github.com/ageitgey/face_recognition/issues/357) for how to do it.
## Articles and Guides that cover `face_recognition`
- My article on how Face Recognition works: [Modern Face Recognition with Deep Learning](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78)
- Covers the algorithms and how they generally work
- [Face recognition with OpenCV, Python, and deep learning](https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/) by Adrian Rosebrock
- Covers how to use face recognition in practice
- [Raspberry Pi Face Recognition](https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/) by Adrian Rosebrock
- Covers how to use this on a Raspberry Pi
- [Face clustering with Python](https://www.pyimagesearch.com/2018/07/09/face-clustering-with-python/) by Adrian Rosebrock
- Covers how to automatically cluster photos based on who appears in each photo using unsupervised learning
## How Face Recognition Works
If you want to learn how face location and recognition work instead of
depending on a black box library, [read my article](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78).
## Caveats
* The face recognition model is trained on adults and does not work very well on children. It tends to mix
up children quite easily using the default comparison threshold of 0.6 (a sketch for tightening the threshold in code follows this list).
* Accuracy may vary between ethnic groups. Please see [this wiki page](https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems#question-face-recognition-works-well-with-european-individuals-but-overall-accuracy-is-lower-with-asian-individuals) for more details.
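If you need a stricter cut-off than the default 0.6 in your own code, you can work with raw face distances instead of True/False matches. A minimal sketch (assuming `biden.jpg` and `unknown.jpg` on disk):
```python
import face_recognition

known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# face_distance returns a numpy array with one distance per known face;
# lower means more similar. compare_faces(..., tolerance=0.5) is equivalent
# to thresholding this number yourself.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]

if distance < 0.5:
    print("Match (distance {:.3f})".format(distance))
else:
    print("No match (distance {:.3f})".format(distance))
```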
## <a name="deployment">Deployment to Cloud Hosts (Heroku, AWS, etc)</a>
Since `face_recognition` depends on `dlib` which is written in C++, it can be tricky to deploy an app
using it to a cloud hosting provider like Heroku or AWS.
To make things easier, there's an example Dockerfile in this repo that shows how to run an app built with
`face_recognition` in a [Docker](https://www.docker.com/) container. With that, you should be able to deploy
to any service that supports Docker images.
You can try the Docker image locally by running: `docker-compose up --build`
There are also [several prebuilt Docker images.](docker/README.md)
Linux users with a GPU (drivers >= 384.81) and [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker) installed can run the example on the GPU: Open the [docker-compose.yml](docker-compose.yml) file and uncomment the `dockerfile: Dockerfile.gpu` and `runtime: nvidia` lines.
## Having problems?
If you run into problems, please read the [Common Errors](https://github.com/ageitgey/face_recognition/wiki/Common-Errors) section of the wiki before filing a github issue.
## Thanks
* Many, many thanks to [Davis King](https://github.com/davisking) ([@nulhom](https://twitter.com/nulhom))
for creating dlib and for providing the trained facial feature detection and face encoding models
used in this library. For more information on the ResNet that powers the face encodings, check out
his [blog post](http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html).
* Thanks to everyone who works on all the awesome Python data science libraries like numpy, scipy, scikit-image,
pillow, etc, etc that make this kind of stuff so easy and fun in Python.
* Thanks to [Cookiecutter](https://github.com/audreyr/cookiecutter) and the
[audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) project template
for making Python project packaging way more tolerable.

README.rst (new file)

@@ -0,0 +1,492 @@
Face Recognition
================
| Recognize and manipulate faces from Python or from the command line
with
| the world's simplest face recognition library.
| Built using `dlib <http://dlib.net/>`__'s state-of-the-art face
recognition
| built with deep learning. The model has an accuracy of 99.38% on the
| `Labeled Faces in the Wild <http://vis-www.cs.umass.edu/lfw/>`__
benchmark.
| This also provides a simple ``face_recognition`` command line tool
that lets
| you do face recognition on a folder of images from the command line!
| |PyPI|
| |Build Status|
| |Documentation Status|
Features
--------
Find faces in pictures
^^^^^^^^^^^^^^^^^^^^^^
Find all the faces that appear in a picture:
|image3|
.. code:: python

    import face_recognition

    image = face_recognition.load_image_file("your_file.jpg")
    face_locations = face_recognition.face_locations(image)
Find and manipulate facial features in pictures
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Get the locations and outlines of each person's eyes, nose, mouth and
chin.
|image4|
.. code:: python

    import face_recognition

    image = face_recognition.load_image_file("your_file.jpg")
    face_landmarks_list = face_recognition.face_landmarks(image)

| Finding facial features is super useful for lots of important stuff.
  But you can also use it for really stupid stuff
| like applying `digital
  make-up <https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py>`__
  (think 'Meitu'):
|image5|
Identify faces in pictures
^^^^^^^^^^^^^^^^^^^^^^^^^^
Recognize who appears in each photo.
|image6|
.. code:: python

    import face_recognition

    known_image = face_recognition.load_image_file("biden.jpg")
    unknown_image = face_recognition.load_image_file("unknown.jpg")

    biden_encoding = face_recognition.face_encodings(known_image)[0]
    unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

    results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
You can even use this library with other Python libraries to do
real-time face recognition:
|image7|
See `this
example <https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py>`__
for the code.
Installation
------------
Requirements
^^^^^^^^^^^^
- Python 3.3+ or Python 2.7
- macOS or Linux (Windows not officially supported, but might work)
Installing on Mac or Linux
^^^^^^^^^^^^^^^^^^^^^^^^^^
First, make sure you have dlib already installed with Python bindings:
- `How to install dlib from source on macOS or
  Ubuntu <https://gist.github.com/ageitgey/629d75c1baac34dfa5ca2a1928a7aeaf>`__
Then, install this module from pypi using ``pip3`` (or ``pip2`` for
Python 2):
.. code:: bash

    pip3 install face_recognition
| If you are having trouble with installation, you can also try out a
| `pre-configured
VM <https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b>`__.
Installing on Raspberry Pi 2+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- `Raspberry Pi 2+ installation
  instructions <https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65>`__
Installing on Windows
^^^^^^^^^^^^^^^^^^^^^
While Windows isn't officially supported, helpful users have posted
instructions on how to install this library:
- `@masoudr's Windows 10 installation guide (dlib +
  face\_recognition) <https://github.com/ageitgey/face_recognition/issues/175#issue-257710508>`__
Installing a pre-configured Virtual Machine image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- `Download the pre-configured VM
  image <https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b>`__
  (for VMware Player or VirtualBox).
Usage
-----
Command-Line Interface
^^^^^^^^^^^^^^^^^^^^^^
| When you install ``face_recognition``, you get a simple command-line
  program
| called ``face_recognition`` that you can use to recognize faces in a
| photograph or folder full of photographs.
| First, you need to provide a folder with one picture of each person
you
| already know. There should be one image file for each person with the
| files named according to who is in the picture:
|known|
Next, you need a second folder with the files you want to identify:
|unknown|
| Then you simply run the command ``face_recognition``, passing in
| the folder of known people and the folder (or single image) with
  unknown
| people and it tells you who is in each image:

.. code:: bash

    $ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/
    /unknown_pictures/unknown.jpg,Barack Obama
    /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
| There's one line in the output for each face. The data is
comma-separated
| with the filename and the name of the person found.
| An ``unknown_person`` is a face in the image that didn't match anyone
in
| your folder of known people.
Adjusting Tolerance / Sensitivity
'''''''''''''''''''''''''''''''''
| If you are getting multiple matches for the same person, it might be
that
| the people in your photos look very similar and a lower tolerance
value
| is needed to make face comparisons more strict.
| You can do that with the ``--tolerance`` parameter. The default
tolerance
| value is 0.6 and lower numbers make face comparisons more strict:
.. code:: bash

    $ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/
    /unknown_pictures/unknown.jpg,Barack Obama
    /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
| If you want to see the face distance calculated for each match in
order
| to adjust the tolerance setting, you can use ``--show-distance true``:
.. code:: bash

    $ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/
    /unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785
    /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None
More Examples
'''''''''''''
| If you simply want to know the names of the people in each photograph
but don't
| care about file names, you could do this:
.. code:: bash

    $ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2
    Barack Obama
    unknown_person
Speeding up Face Recognition
''''''''''''''''''''''''''''
| Face recognition can be done in parallel if you have a computer with
| multiple CPU cores. For example, if your system has 4 CPU cores, you
  can
| process about 4 times as many images in the same amount of time by
  using
| all your CPU cores in parallel.

If you are using Python 3.4 or newer, pass in a
``--cpus <number_of_cpu_cores_to_use>`` parameter:

.. code:: bash

    $ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/
You can also pass in ``--cpus -1`` to use all CPU cores in your system.
Python Module
^^^^^^^^^^^^^
| You can import the ``face_recognition`` module and then easily
manipulate
| faces with just a couple of lines of code. It's super easy!
API Docs:
`https://face-recognition.readthedocs.io <https://face-recognition.readthedocs.io/en/latest/face_recognition.html>`__.
Automatically find all the faces in an image
''''''''''''''''''''''''''''''''''''''''''''
.. code:: python

    import face_recognition

    image = face_recognition.load_image_file("my_picture.jpg")
    face_locations = face_recognition.face_locations(image)

    # face_locations is now an array listing the co-ordinates of each face!
| See `this
example <https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py>`__
| to try it out.
You can also opt-in to a somewhat more accurate deep-learning-based face
detection model.
| Note: GPU acceleration (via NVidia's CUDA library) is required for
  good
| performance with this model. You'll also want to enable CUDA support
| when compiling ``dlib``.

.. code:: python

    import face_recognition

    image = face_recognition.load_image_file("my_picture.jpg")
    face_locations = face_recognition.face_locations(image, model="cnn")

    # face_locations is now an array listing the co-ordinates of each face!
| See `this
example <https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py>`__
| to try it out.
| If you have a lot of images and a GPU, you can also
| `find faces in
batches <https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py>`__.
Automatically locate the facial features of a person in an image
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
.. code:: python

    import face_recognition

    image = face_recognition.load_image_file("my_picture.jpg")
    face_landmarks_list = face_recognition.face_landmarks(image)

    # face_landmarks_list is now an array with the locations of each facial feature in each face.
    # face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye.
| See `this
example <https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py>`__
| to try it out.
Recognize faces in images and identify who they are
'''''''''''''''''''''''''''''''''''''''''''''''''''
.. code:: python

    import face_recognition

    picture_of_me = face_recognition.load_image_file("me.jpg")
    my_face_encoding = face_recognition.face_encodings(picture_of_me)[0]

    # my_face_encoding now contains a universal 'encoding' of my facial features that can be compared to any other picture of a face!

    unknown_picture = face_recognition.load_image_file("unknown.jpg")
    unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0]

    # Now we can see the two face encodings are of the same person with `compare_faces`!

    results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding)

    if results[0]:
        print("It's a picture of me!")
    else:
        print("It's not a picture of me!")
| See `this
example <https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py>`__
| to try it out.
Python Code Examples
--------------------
All the examples are available
`here <https://github.com/ageitgey/face_recognition/tree/master/examples>`__.
Face Detection
^^^^^^^^^^^^^^
- `Find faces in a
  photograph <https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py>`__
- `Find faces in a photograph (using deep
  learning) <https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py>`__
- `Find faces in batches of images w/ GPU (using deep
  learning) <https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py>`__
Facial Features
^^^^^^^^^^^^^^^
- `Identify specific facial features in a
  photograph <https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py>`__
- `Apply (horribly ugly) digital
  make-up <https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py>`__
Facial Recognition
^^^^^^^^^^^^^^^^^^
- `Find and recognize unknown faces in a photograph based on
  photographs of known
  people <https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py>`__
- `Compare faces by numeric face distance instead of only True/False
  matches <https://github.com/ageitgey/face_recognition/blob/master/examples/face_distance.py>`__
- `Recognize faces in live video using your webcam - Simple / Slower
  Version (Requires OpenCV to be
  installed) <https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam.py>`__
- `Recognize faces in live video using your webcam - Faster Version
  (Requires OpenCV to be
  installed) <https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py>`__
- `Recognize faces in a video file and write out new video file
  (Requires OpenCV to be
  installed) <https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_video_file.py>`__
- `Recognize faces on a Raspberry Pi w/
  camera <https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_on_raspberry_pi.py>`__
- `Run a web service to recognize faces via HTTP (Requires Flask to be
  installed) <https://github.com/ageitgey/face_recognition/blob/master/examples/web_service_example.py>`__
- `Recognize faces with a K-nearest neighbors
  classifier <https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_knn.py>`__
.. rubric:: How Face Recognition Works
:name: how-face-recognition-works
| If you want to learn how face location and recognition work instead of
| depending on a black box library, `read my
article <https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78>`__.
Caveats
-------
- The face recognition model is trained on adults and does not work
  very well on children. It tends to mix
  up children quite easily using the default comparison threshold of 0.6.
Deployment to Cloud Hosts (Heroku, AWS, etc)
--------------------------------------------
| Since ``face_recognition`` depends on ``dlib`` which is written in
C++, it can be tricky to deploy an app
| using it to a cloud hosting provider like Heroku or AWS.
| To make things easier, there's an example Dockerfile in this repo that
shows how to run an app built with
| ``face_recognition`` in a `Docker <https://www.docker.com/>`__
container. With that, you should be able to deploy
| to any service that supports Docker images.
Common Issues
-------------
Issue: ``Illegal instruction (core dumped)`` when using
face\_recognition or running examples.
| Solution: ``dlib`` is compiled with SSE4 or AVX support, but your CPU
is too old and doesn't support that.
| You'll need to recompile ``dlib`` after `making the code change
outlined
here <https://github.com/ageitgey/face_recognition/issues/11#issuecomment-287398611>`__.
Issue:
``RuntimeError: Unsupported image type, must be 8bit gray or RGB image.``
when running the webcam examples.
Solution: Your webcam probably isn't set up correctly with OpenCV. `Look
here for
more <https://github.com/ageitgey/face_recognition/issues/21#issuecomment-287779524>`__.
Issue: ``MemoryError`` when running ``pip2 install face_recognition``
| Solution: The face\_recognition\_models file is too big for your
available pip cache memory. Instead,
| try ``pip2 --no-cache-dir install face_recognition`` to avoid the
issue.
Issue:
``AttributeError: 'module' object has no attribute 'face_recognition_model_v1'``
Solution: The version of ``dlib`` you have installed is too old. You
need version 19.7 or newer. Upgrade ``dlib``.
Issue:
``Attribute Error: 'Module' object has no attribute 'cnn_face_detection_model_v1'``
Solution: The version of ``dlib`` you have installed is too old. You
need version 19.7 or newer. Upgrade ``dlib``.
Issue: ``TypeError: imread() got an unexpected keyword argument 'mode'``
Solution: The version of ``scipy`` you have installed is too old. You
need version 0.17 or newer. Upgrade ``scipy``.
Thanks
------
- Many, many thanks to `Davis King <https://github.com/davisking>`__
  (`@nulhom <https://twitter.com/nulhom>`__)
  for creating dlib and for providing the trained facial feature
  detection and face encoding models
  used in this library. For more information on the ResNet that powers
  the face encodings, check out
  his `blog
  post <http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html>`__.
- Thanks to everyone who works on all the awesome Python data science
  libraries like numpy, scipy, scikit-image,
  pillow, etc, etc that make this kind of stuff so easy and fun in
  Python.
- Thanks to `Cookiecutter <https://github.com/audreyr/cookiecutter>`__
  and the
  `audreyr/cookiecutter-pypackage <https://github.com/audreyr/cookiecutter-pypackage>`__
  project template
  for making Python project packaging way more tolerable.
.. |PyPI| image:: https://img.shields.io/pypi/v/face_recognition.svg
:target: https://pypi.python.org/pypi/face_recognition
.. |Build Status| image:: https://travis-ci.org/ageitgey/face_recognition.svg?branch=master
:target: https://travis-ci.org/ageitgey/face_recognition
.. |Documentation Status| image:: https://readthedocs.org/projects/face-recognition/badge/?version=latest
:target: http://face-recognition.readthedocs.io/en/latest/?badge=latest
.. |image3| image:: https://cloud.githubusercontent.com/assets/896692/23625227/42c65360-025d-11e7-94ea-b12f28cb34b4.png
.. |image4| image:: https://cloud.githubusercontent.com/assets/896692/23625282/7f2d79dc-025d-11e7-8728-d8924596f8fa.png
.. |image5| image:: https://cloud.githubusercontent.com/assets/896692/23625283/80638760-025d-11e7-80a2-1d2779f7ccab.png
.. |image6| image:: https://cloud.githubusercontent.com/assets/896692/23625229/45e049b6-025d-11e7-89cc-8a71cf89e713.png
.. |image7| image:: https://cloud.githubusercontent.com/assets/896692/24430398/36f0e3f0-13cb-11e7-8258-4d0c9ce1e419.gif
.. |known| image:: https://cloud.githubusercontent.com/assets/896692/23582466/8324810e-00df-11e7-82cf-41515eba704d.png
.. |unknown| image:: https://cloud.githubusercontent.com/assets/896692/23582465/81f422f8-00df-11e7-8b0d-75364f641f58.png

README_Japanese.md (new file)

@@ -0,0 +1,365 @@
# Face Recognition
_このファイルは [英語(オリジナル) in English](https://github.com/ageitgey/face_recognition/blob/master/README.md)、 [中国語 简体中文版](https://github.com/ageitgey/face_recognition/blob/master/README_Simplified_Chinese.md) 、 [韓国語 한국어](https://github.com/ageitgey/face_recognition/blob/master/README_Korean.md)で読むこともできます。_
世界で最もシンプルな顔認識ライブラリを使って、Pythonやコマンドラインで顔を認識・操作することができるライブラリです。
[dlib](http://dlib.net/)のディープラーニングを用いた最先端の顔認識を使用して構築されており、このモデルは[Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/)ベンチマークにて99.38%の正解率を記録しています。
シンプルな`face_recognition`コマンドラインツールも用意しており、コマンドラインでフォルダ内の画像を顔認識することもできます。
[![PyPI](https://img.shields.io/pypi/v/face_recognition.svg)](https://pypi.python.org/pypi/face_recognition)
[![Build Status](https://travis-ci.org/ageitgey/face_recognition.svg?branch=master)](https://travis-ci.org/ageitgey/face_recognition)
[![Documentation Status](https://readthedocs.org/projects/face-recognition/badge/?version=latest)](http://face-recognition.readthedocs.io/en/latest/?badge=latest)
## 特徴
#### 画像から顔を探す
画像に写っているすべての顔を探します。
![](https://cloud.githubusercontent.com/assets/896692/23625227/42c65360-025d-11e7-94ea-b12f28cb34b4.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)
```
#### 画像から顔の特徴を取得する
画像の中の顔から目、鼻、口、あごの場所と輪郭を得ることができます。
![](https://cloud.githubusercontent.com/assets/896692/23625282/7f2d79dc-025d-11e7-8728-d8924596f8fa.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
```
顔の特徴を見つけることは多くの重要なことに役立ちますが、[デジタルメイクアップ](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py) のようにさほど重要ではないことにも使うことができます。
![](https://cloud.githubusercontent.com/assets/896692/23625283/80638760-025d-11e7-80a2-1d2779f7ccab.png)
#### 画像の中の顔を特定する
それぞれの画像に写っている人物を認識します。
![](https://cloud.githubusercontent.com/assets/896692/23625229/45e049b6-025d-11e7-89cc-8a71cf89e713.png)
```python
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
```
他のPythonライブラリと一緒に用いてリアルタイムに顔認識することも可能です。
![](https://cloud.githubusercontent.com/assets/896692/24430398/36f0e3f0-13cb-11e7-8258-4d0c9ce1e419.gif)
試す場合は[こちらのサンプルコード](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py) を参照してください。
## デモ
ユーザーがコントリビュートした共有のJupyter notebookのデモがあります。公式なサポートはありません[![Deepnote](https://beta.deepnote.org/buttons/try-in-a-jupyter-notebook.svg)](https://beta.deepnote.org/launch?template=face_recognition)
## インストール
### 必要なもの
* Python 3.3+ もしくは Python 2.7
* macOS もしくは Linux (Windowsは公式にはサポートしていませんが動くかもしれません)
### インストールオプション:
#### MacもしくはLinuxにインストール
はじめに、dlibをインストールします。Pythonの拡張機能も有効にします
* [macOSもしくはUbuntuにdlibをソースコードからインストールする方法](https://gist.github.com/ageitgey/629d75c1baac34dfa5ca2a1928a7aeaf)
次に、このモジュールをpypiから`pip3`Python2の場合は`pip2`)を使ってインストールします。
```bash
pip3 install face_recognition
```
あるいは、[Docker](https://www.docker.com/)でこのライブラリを試すこともできます。詳しくは [こちらのセクション](#deployment)を参照してください。
もし、インストールが上手くいかない場合は、すでに用意されているVMイメージで試すこともできます。詳しくは[事前構成済みのVM](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b)を参照してください。(VMware Player もしくは VirtualBoxが対象)
#### Nvidia Jetson Nanoボードにインストール
* [Jetson Nanoインストール手順](https://medium.com/@ageitgey/build-a-hardware-based-face-recognition-system-for-150-with-the-nvidia-jetson-nano-and-python-a25cb8c891fd)
* この記事の手順通りにインストールを行ってください。現在、Jetson NanoのCUDAライブラリにはバグがあり、記事の手順通りにdlibの一行をコメントアウトし再コンパイルしないと失敗する恐れがあります。
#### Raspberry Pi 2+にインストール
* [Raspberry Pi 2+インストール手順](https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65)
#### Windowsにインストール
Windowsは公式サポートされていませんが、役立つインストール手順が投稿されています。
* [@masoudr's Windows 10 インストールガイド (dlib + face_recognition)](https://github.com/ageitgey/face_recognition/issues/175#issue-257710508)
<!--
#### Installing a pre-configured Virtual Machine image
* [Download the pre-configured VM image](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b) (for VMware Player or VirtualBox). -->
## 使用方法
### コマンドライン
`face_recognition`をインストールすると、2つのシンプルなコマンドラインがついてきます。
* `face_recognition` - 画像もしくはフォルダの中の複数の画像から顔を認識します
* `face_detection` - 画像もしくはフォルダの中の複数の画像から顔を検出します
#### `face_recognition` コマンドラインツール
`face_recognition` コマンドによって、画像もしくはフォルダの中の複数の画像から顔を認識することができます。
まずは、フォルダに知っている人たちの画像を一枚ずつ入れます。一人につき1枚の画像ファイルを用意し、画像のファイル名はその画像に写っている人物の名前にします。
![知っている人](https://cloud.githubusercontent.com/assets/896692/23582466/8324810e-00df-11e7-82cf-41515eba704d.png)
次に、2つ目のフォルダに特定したい画像を入れます。
![知らない人](https://cloud.githubusercontent.com/assets/896692/23582465/81f422f8-00df-11e7-8b0d-75364f641f58.png)
そして、`face_recognition`コマンドを実行し、知っている人の画像を入れたフォルダのパスと特定したい画像のフォルダ(もしくは画像ファイル)のパスを渡すと、それぞれの画像に誰がいるのかが分かります。
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
一つの顔につき一行が出力され、ファイル名と特定した人物の名前がカンマ区切りで表示されます。
`unknown_person`は知っている人の画像の中の誰ともマッチしなかった顔です。
#### `face_detection` コマンドラインツール
`face_detection` コマンドによって、画像の中にある顔の位置(ピクセル座標)を検出することができます。
`face_detection` コマンドを実行し、顔を検出したい画像を入れたフォルダ(もしくは画像ファイル)のパスを渡してあげるだけです。
```bash
$ face_detection ./folder_with_pictures/
examples/image1.jpg,65,215,169,112
examples/image2.jpg,62,394,211,244
examples/image2.jpg,95,941,244,792
```
検出された顔一つにつき一行が出力され、顔の上・右・下・左の座標(ピクセル単位)が表示されます。
##### 許容誤差の調整 / 感度
もし同一人物に対して複数の一致があった場合、画像の中に写っている人たちの顔が非常に似ている可能性があるので、顔の比較をより厳しくするために許容誤差の値を下げる必要があります。
`--tolerance` パラメータによってそれが可能になります。デフォルトの許容誤差の値(tolerance value)は0.6で、これより低くするとより厳密に顔の比較をすることができます。
```bash
$ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
もし許容誤差の設定を調整するために一致した顔の距離値(face distance)を確認したい場合は `--show-distance true` を使ってください。
```bash
$ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None
```
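参考までに、この距離値はPython APIの`face_recognition.face_distance`でも取得できます。以下は最小限のスケッチです(`biden.jpg`と`unknown.jpg`は例示用の仮のファイル名です):
```python
import face_recognition

known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# face_distance returns one distance per known face (smaller = more similar)
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print(distance <= 0.6)  # same decision as compare_faces with the default tolerance
```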
##### その他の例
ファイル名は出力せずに人物の名前だけを表示することもできます。
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2
Barack Obama
unknown_person
```
##### Face Recognition の高速化
マルチコア搭載コンピューターの場合は並列で実行することも可能です。例えば4CPUコアの場合、同じ時間で約4倍の画像を処理することができます。
Python 3.4 以上を使っている場合は`--cpus <number_of_cpu_cores_to_use>` パラメータを渡します。
```bash
$ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/
```
`--cpus -1` のパラメータを渡すことで、システムのすべてのCPUコアを使うことも可能です。
#### Pythonモジュール
`face_recognition` モジュールをインポートすると、数行のコードでとても簡単に操作を行うことができます。
API Docs: [https://face-recognition.readthedocs.io](https://face-recognition.readthedocs.io/en/latest/face_recognition.html).
##### 自動的に画像の中のすべての顔を見つける
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image)
# face_locations is now an array listing the co-ordinates of each face!
```
試す場合は[こちらのサンプルコード](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py)を参照してください。
さらに正確でディープラーニングをもとにした顔検出モデルを選択することも可能です。
注意:このモデルで良いパフォーマンスを出すにはGPUアクセラレーション(NVidiaのCUDAライブラリ経由)が必要です。また、`dlib` をコンパイルする際にCUDAサポートを有効にする必要があります。
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image, model="cnn")
# face_locations is now an array listing the co-ordinates of each face!
```
試す場合は[こちらのサンプルコード](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py)を参照してください。
大量の画像をGPUを使って処理する場合は、[こちらのサンプルコード](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py)のようにバッチ処理することも可能です。
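バッチ処理のAPIは`face_recognition.batch_face_locations`です。以下は最小限のスケッチです(ファイル名と`batch_size`は例示用の仮の値です):
```python
import face_recognition

# A list of same-sized images (numpy arrays) to process on the GPU in one batch
frames = [face_recognition.load_image_file("frame1.jpg"),
          face_recognition.load_image_file("frame2.jpg")]

# Uses the CNN model internally; tune batch_size to fit your GPU memory
batches = face_recognition.batch_face_locations(frames, number_of_times_to_upsample=0, batch_size=2)
for face_locations in batches:
    print(face_locations)  # list of (top, right, bottom, left) per image
```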
##### 自動的に画像の中の顔特徴を見つける
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
# face_landmarks_list is now an array with the locations of each facial feature in each face.
# face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye.
```
試す場合は[こちらのサンプルコード](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py)を参照してください。
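`face_landmarks`が返すのは、顔のパーツ名をキーとする辞書のリストです。次の小さなスケッチで中身を確認できます(`my_picture.jpg`は仮のファイル名です):
```python
import face_recognition

image = face_recognition.load_image_file("my_picture.jpg")
for face_landmarks in face_recognition.face_landmarks(image):
    # Keys include 'chin', 'left_eye', 'right_eye', 'nose_tip', 'top_lip', etc.
    for feature, points in face_landmarks.items():
        print(feature, len(points))  # each feature is a list of (x, y) points
```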
##### 画像の中の顔を認識し、その人物を特定する
```python
import face_recognition
picture_of_me = face_recognition.load_image_file("me.jpg")
my_face_encoding = face_recognition.face_encodings(picture_of_me)[0]
# my_face_encoding now contains a universal 'encoding' of my facial features that can be compared to any other picture of a face!
unknown_picture = face_recognition.load_image_file("unknown.jpg")
unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0]
# Now we can see the two face encodings are of the same person with `compare_faces`!
results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding)
if results[0] == True:
print("It's a picture of me!")
else:
print("It's not a picture of me!")
```
試す場合は[こちらのサンプルコード](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py)を参照してください。
## Pythonコードのサンプル
すべてのサンプルは[こちら](https://github.com/ageitgey/face_recognition/tree/master/examples)で見ることができます。
#### 顔検出
* [画像から顔を見つける](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py)
* [画像から顔を見つける(ディープラーニングを使用する)](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py)
* [大量の画像からGPUを用いて顔を見つけるディープラーニングを使用する](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py)
* [WEBカメラによるライブ動画のすべての顔をぼかす(OpenCVのインストールが必要)](https://github.com/ageitgey/face_recognition/blob/master/examples/blur_faces_on_webcam.py)
#### 顔の特徴
* [画像から顔の特徴を特定する](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py)
* [デジタルメイクアップを施す](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py)
#### 顔認識
* [知っている人の画像をもとに画像の中の知らない顔を発見する](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py)
* [画像の中の顔を四角で囲む](https://github.com/ageitgey/face_recognition/blob/master/examples/identify_and_draw_boxes_on_faces.py)
* [顔の距離値(face distance)によって比較する](https://github.com/ageitgey/face_recognition/blob/master/examples/face_distance.py)
* [WEBカメラによるライブ動画で顔認識する - シンプル/低速バージョン (OpenCVのインストールが必要)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam.py)
* [WEBカメラによるライブ動画で顔認識する - 高速バージョン (OpenCVのインストールが必要)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py)
* [動画ファイルを顔認識して新しいファイルに書き出す (OpenCVのインストールが必要)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_video_file.py)
* [カメラ付きのRaspberry Piによって顔認識する](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_on_raspberry_pi.py)
* [顔認識ウェブサービスをHTTP経由で実行する(Flaskのインストールが必要)](https://github.com/ageitgey/face_recognition/blob/master/examples/web_service_example.py)
* [k近傍法で顔認識する](https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_knn.py)
* [人物ごとに複数の画像をトレーニングし、SVM(サポートベクターマシン)を用いて顔認識する](https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_svm.py)
## スタンドアロンの実行ファイルの作成
`python`や`face_recognition`のインストールをせずに実行することができるスタンドアロンの実行ファイルを作る場合は、[PyInstaller](https://github.com/pyinstaller/pyinstaller)を使います。ただし、このライブラリを動かすにはカスタム設定が必要です。方法については[こちらのイシュー](https://github.com/ageitgey/face_recognition/issues/357)を参照してください。
## `face_recognition`をカバーする記事とガイド
- 顔認識の仕組みについての記事: [ディープラーニングによる最新の顔認識](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78)
- アルゴリズムとそれらがどのように動くかを取り上げています。
- Adrian Rosebrock氏の [OpenCV、Python、ディープラーニングによる顔認識](https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/)
- 実際に顔認識を使用する方法について取り上げています。
- Adrian Rosebrock氏の [Raspberry Pi 顔認識](https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/)
- Raspberry Piで使用する方法について取り上げています。
- Adrian Rosebrock氏の [Pythonによる顔のクラスタリング](https://www.pyimagesearch.com/2018/07/09/face-clustering-with-python/)
- それぞれの画像に出現する人物に基づき、教師なし学習を用いて自動的に画像をクラスター化する方法について取り上げています。
## 顔認識の仕組み
ブラックボックスライブラリに依存せず、顔の位置や認識の仕組みを知りたい方は[こちらの記事](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78)を読んでください。
## 注意事項
* この顔認識モデルは大人でトレーニングされており、子どもではあまり上手く機能しません。比較する閾値をデフォルト(0.6)のままで使用すると子どもを混同しやすくなります(Python APIでの調整方法は、このリストの後のスケッチを参照してください)。
* 精度は民族グループによって異なる可能性があります。詳しくは[こちらのwikiページ](https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems#question-face-recognition-works-well-with-european-individuals-but-overall-accuracy-is-lower-with-asian-individuals)を参照してください。
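Python APIで比較を厳しくする場合は、`compare_faces`の`tolerance`引数で同様に調整できます(以下はファイル名を仮定した最小限のスケッチです):
```python
import face_recognition

known = face_recognition.face_encodings(face_recognition.load_image_file("me.jpg"))[0]
unknown = face_recognition.face_encodings(face_recognition.load_image_file("unknown.jpg"))[0]

# A tolerance lower than the default 0.6 makes the comparison stricter
results = face_recognition.compare_faces([known], unknown, tolerance=0.5)
print(results[0])
```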
## <a name="deployment">クラウドにデプロイ (Heroku, AWSなど)</a>
`face_recognition`はC++で書かれた`dlib`に依存しているため、HerokuやAWSのようなクラウドサーバにこれらを使ったアプリをデプロイするのは難しい場合があります。
それを簡単にするために、このレポジトリには[Docker](https://www.docker.com/)コンテナ内で`face_recognition`のビルドされたアプリを実行する方法を示したサンプルDockerfileがあります。これによって、Dockerイメージをサポートしているすべてのサービスにデプロイできるようになるはずです。
次のコマンドを実行すると、ローカルでDockerイメージを試すことができます: `docker-compose up --build`
GPU (drivers >= 384.81) および [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker) がインストールされているLinuxユーザーはGPUでサンプルを実行することができます。[docker-compose.yml](docker-compose.yml) を開き、`dockerfile: Dockerfile.gpu`と`runtime: nvidia`の行のコメントアウトを解除してください。
## なにか問題が発生したら
もし問題が発生した場合は、GitHubにIssueをあげる前に、まずはwikiの[よくあるエラー](https://github.com/ageitgey/face_recognition/wiki/Common-Errors)をお読みください。
## 謝意
* dlibを作り、このライブラリで使っているトレーニングされた顔の特徴検出とフェイスエンコーディングモデルを提供してくれた[Davis King](https://github.com/davisking) ([@nulhom](https://twitter.com/nulhom))、本当にありがとうございます。
フェイスエンコーディングを動かしているResNetについての情報は彼の[ブログ](http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html)を見てください。
* この種のことをPythonで簡単かつ楽しくしてくれる、numpy、scipy、scikit-image、pillowなどのすべての素晴らしいPythonデータサイエンスライブラリに取り組んでいる人たちに感謝しています。
* Pythonプロジェクトのパッケージングをより易しくする[Cookiecutter](https://github.com/audreyr/cookiecutter)と[audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage)に感謝しています。

View File

@@ -0,0 +1,354 @@
# Face Recognition
본 문서는 _[중국어 简体中文版](https://github.com/ageitgey/face_recognition/blob/master/README_Simplified_Chinese.md) 로부터 번역되어 한국 사용자들의 기여를 통해 만들어진 문서입니다.
본 라이브러리는 세계에서 가장 간단한 얼굴 인식 라이브러리로, Python 또는 명령 줄(CLI)에서 얼굴을 인식하고 조작해 볼 수 있습니다.
본 라이브러리는 딥러닝 기반으로 제작된 [dlib](http://dlib.net/)의 최첨단 얼굴 인식 기능을 사용하여 구축되었습니다. 이 모델은 [Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/) 기준으로 99.38%의 정확도를 가집니다.
또한, 명령 줄(CLI)에서 이미지 폴더 안에 있는 얼굴 인식 기능을 위한 간단한 `face_recognition` 도구를 제공합니다!
[![PyPI](https://img.shields.io/pypi/v/face_recognition.svg)](https://pypi.python.org/pypi/face_recognition)
[![Build Status](https://travis-ci.org/ageitgey/face_recognition.svg?branch=master)](https://travis-ci.org/ageitgey/face_recognition)
[![Documentation Status](https://readthedocs.org/projects/face-recognition/badge/?version=latest)](http://face-recognition.readthedocs.io/en/latest/?badge=latest)
## 특징
#### 사진에서 얼굴 찾기
사진에 등장하는 모든 얼굴들을 찾습니다:
![](https://cloud.githubusercontent.com/assets/896692/23625227/42c65360-025d-11e7-94ea-b12f28cb34b4.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)
```
#### 사진에 있는 얼굴의 특징을 찾기&조작하기
각각의 사람의 눈, 코, 입, 턱의 위치와 윤곽을 잡아냅니다.
![](https://cloud.githubusercontent.com/assets/896692/23625282/7f2d79dc-025d-11e7-8728-d8924596f8fa.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
```
얼굴의 특징을 찾는 기능은 여러 중요한 일들에 유용하게 쓰입니다. 예를 들어 [디지털 메이크업](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py) (Meitu 같은 것)을 적용하는 것과 같은 정말 멍청한 것들에도 쓰일 수 있습니다:
![](https://cloud.githubusercontent.com/assets/896692/23625283/80638760-025d-11e7-80a2-1d2779f7ccab.png)
#### 사진 속 얼굴의 신원 확인하기
각각의 사진에서 누가 등장하였는지 인식합니다.
![](https://cloud.githubusercontent.com/assets/896692/23625229/45e049b6-025d-11e7-89cc-8a71cf89e713.png)
```python
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
```
이 라이브러리를 다른 Python 라이브러리와 함께 사용한다면 실시간 얼굴 인식도 가능합니다:
![](https://cloud.githubusercontent.com/assets/896692/24430398/36f0e3f0-13cb-11e7-8258-4d0c9ce1e419.gif)
코드에 대해서는 [이 예제](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py) 를 참조하십시오.
## 온라인 데모
실제 사용자가 공유한 Jupyter notebook demo (공식은 아닙니다): [![Deepnote](https://beta.deepnote.org/buttons/try-in-a-jupyter-notebook.svg)](https://beta.deepnote.org/launch?template=face_recognition)
## 설치
### 요구 사항
* Python 3.3+ 또는 Python 2.7
* macOS 또는 Linux (Windows는 공식적으로 지원하지 않으나, 작동할 수도 있음)
### 설치 옵션들:
#### Mac 또는 Linux에서의 설치
우선, Python 바인딩을 통해 dlib이 이미 설치가 되어있는지를 확인해야 합니다:
* [macOS 또는 Ubuntu에서 소스에서 dlib을 설치하는 방법](https://gist.github.com/ageitgey/629d75c1baac34dfa5ca2a1928a7aeaf)
다음으로, `pip3` (또는 Python2의 경우 `pip2`)을 사용하여 pypi에서의 모듈을 설치하십시오:
```bash
pip3 install face_recognition
```
또는, [이 부분](#deployment)을 참조하여, [Docker](https://www.docker.com/)로 이 라이브러리를 시도해보십시오.
설치에 대해 문제가 발생하였으면, [미리 구성된 VM](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b)을 사용해 볼 수도 있습니다.
#### Raspberry Pi 2+에서의 설치
* [Raspberry Pi 2+ 설치 설명서](https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65)
#### Windows에서 설치하기
Windows는 공식적으로 지원하지는 않지만, 친절한 유저들이 이 라이브러리를 어떻게 설치하는지 설명서를 작성했습니다:
* [@masoudr의 Windows 10 설치 가이드 (dlib + face_recognition)](https://github.com/ageitgey/face_recognition/issues/175#issue-257710508)
#### 미리 구성된 가상머신 이미지(VM)를 설치하기
* [미리 구성된 VM 이미지를 다운로드하기](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b) (VMware Player 또는 VirtualBox용).
## 사용법
### 명령 줄 인터페이스
`face_recognition`을 설치하면, 두 가지 간단한 명령 줄(CLI) 프로그램을 얻습니다:
* `face_recognition` - 사진 혹은 사진이 들어있는 폴더에서, 얼굴을 인식합니다.
* `face_detection` - 사진 혹은 사진이 들어있는 폴더에서, 얼굴을 찾습니다.
#### `face_recognition` 명령 줄 도구
`face_recognition` 명령을 사용하면 사진 혹은 사진이 들어있는 폴더에서, 얼굴을 인식할 수 있습니다.
그러기 위해서는 먼저, 이미 알고 있는(인식하고자 하는) 각 사람의 사진 한 장이 폴더에 있어야 합니다. 그리고 사진 속 그 사람의 이름을 딴 이미지 파일이 각각 하나씩 있어야 합니다:
![known](https://cloud.githubusercontent.com/assets/896692/23582466/8324810e-00df-11e7-82cf-41515eba704d.png)
다음으로, 식별하고 싶은 파일들이 있는 두 번째 폴더가 필요합니다:
![unknown](https://cloud.githubusercontent.com/assets/896692/23582465/81f422f8-00df-11e7-8b0d-75364f641f58.png)
그런 다음, 알고 있는 사람의 폴더와 모르는 사람의 폴더(또는 단일 이미지)를 전달하는 `face_recognition` 명령을 실행하면, 각 이미지에 있는 사람이 누군지 알 수 있습니다:
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
각각의 얼굴의 결과는 한 줄로 나타납니다. 각 줄은 파일 이름과 식별된 결과인 사람 이름이 쉼표로 구분되어 나타납니다.
`unknown_person`은 이미지 속에 알고 있는 사람의 폴더에 있는 그 누구와도 일치하지 않는 얼굴임을 의미합니다.
#### `face_detection` 명령 줄 도구
`face_detection` 명령을 사용하면 이미지에서 얼굴의 위치 (픽셀 좌표)를 찾을 수 있습니다.
`face_detection` 명령을 실행하여 검사 할 이미지 폴더 (또는 단일 이미지)를 전달하십시오:
```bash
$ face_detection ./folder_with_pictures/
examples/image1.jpg,65,215,169,112
examples/image2.jpg,62,394,211,244
examples/image2.jpg,95,941,244,792
```
감지된 각 얼굴에 대해 한 줄씩 인쇄합니다. 결과값의 좌표는 각각 얼굴의 위쪽, 오른쪽, 아래쪽 및 왼쪽 좌표 (픽셀 단위)입니다.
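출력되는 좌표는 Python API의 `face_locations`가 돌려주는 (위, 오른쪽, 아래, 왼쪽) 튜플과 같은 의미입니다. 다음은 이 좌표로 얼굴 영역을 잘라내는 최소한의 스케치입니다 (파일 이름은 예시용 가정입니다):
```python
from PIL import Image
import face_recognition

image = face_recognition.load_image_file("my_picture.jpg")
for top, right, bottom, left in face_recognition.face_locations(image):
    # Slice the numpy image array using the reported pixel coordinates
    face_image = image[top:bottom, left:right]
    Image.fromarray(face_image).show()
```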
##### 오차 조절 / 민감도
같은 사람에 대해 여러 개의 항목을 얻었다면, 사진에 있는 사람들이 매우 유사하게 보이기 때문이며 더욱 엄격한 얼굴 비교를 위해 낮은 허용치(tolerance value)가 필요합니다.
`--tolerance` 변수를 이용하여 이를 수행할 수 있습니다. 기본 허용치 값은 0.6이며 숫자가 낮으면 더욱 엄격한 얼굴 비교가 가능합니다:
```bash
$ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
허용치 설정을 조정하기 위해, 각 식별에서의 얼굴 거리를 알고 싶다면 `--show-distance true`를 통해 볼 수 있습니다:
```bash
$ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None
```
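허용치를 정하기 어려울 때는, Python API의 `face_distance`로 거리를 직접 받아서 가장 가까운 사람을 고르는 방법도 있습니다. 다음은 파일 이름을 가정한 최소한의 스케치입니다:
```python
import numpy as np
import face_recognition

# Hypothetical file names, one image per known person
known_files = ["biden.jpg", "obama.jpg"]
known_encodings = [face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
                   for f in known_files]
unknown = face_recognition.face_encodings(face_recognition.load_image_file("unknown.jpg"))[0]

distances = face_recognition.face_distance(known_encodings, unknown)
best = int(np.argmin(distances))
print(known_files[best] if distances[best] <= 0.54 else "unknown_person")
```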
##### 더 많은 예제들
파일 이름은 신경 쓰지 않고 각 사진에 있는 사람들의 이름만을 알고 싶다면 다음과 같이 할 수 있습니다:
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2
Barack Obama
unknown_person
```
##### 얼굴 인식 속도 향상
여러 개의 CPU 코어가 있는 컴퓨터를 사용한다면, 얼굴 인식을 동시에 수행할 수 있습니다. 예를 들면, 4개의 CPU 코어가 있는 환경에서는, 모든 CPU 코어를 병렬로 사용하여 같은 시간 동안 약 4배의 이미지들을 처리할 수 있습니다.
Python 3.4 이상을 사용하는 경우 `--cpus <number_of_cpu_cores_to_use>` 에 매개 변수(parameter)를 전달하십시오:
```bash
$ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/
```
또한 `--cpus -1`을 전달하여 시스템의 모든 CPU 코어를 사용할 수도 있습니다.
#### Python 모듈
`face_recognition` 모듈을 불러와(import) 단 몇 줄의 코드만으로 얼굴 조작을 쉽게 할 수 있습니다. 이는 매우 간단합니다!
API 문서: [https://face-recognition.readthedocs.io](https://face-recognition.readthedocs.io/en/latest/face_recognition.html).
##### 이미지의 모든 얼굴을 자동으로 찾기
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image)
# face_locations is now an array listing the co-ordinates of each face!
```
[이 예제](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py) 를 사용하여 테스트 해 보십시오.
좀 더 정확한 딥 러닝 기반의 얼굴 탐지 모델을 채택할 수도 있습니다.
참고: 이 모델의 성능을 높이려면 (NVidia의 CUDA 라이브러리를 통한) GPU 가속이 필요합니다. 또한 `dlib`을 컴파일할 때 CUDA 지원(support)을 활성화해야 합니다.
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image, model="cnn")
# face_locations is now an array listing the co-ordinates of each face!
```
[이 예제](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py) 를 사용하여 테스트 해 보십시오.
이미지와 GPU가 둘 다 많은 경우, [얼굴을 일괄적으로 찾을](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py) 수도 있습니다.
##### 이미지에서 자동으로 사람의 얼굴 특징 찾기
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
# face_landmarks_list is now an array with the locations of each facial feature in each face.
# face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye.
```
[이 예제](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py) 를 사용하여 테스트 해 보십시오.
##### 이미지에서 얼굴을 인식하고 누구인지 식별하기
```python
import face_recognition
picture_of_me = face_recognition.load_image_file("me.jpg")
my_face_encoding = face_recognition.face_encodings(picture_of_me)[0]
# my_face_encoding은 이제 어느 얼굴과도 비교할 수 있는 내가 가진 얼굴 특징의 보편적인 인코딩을 포함하게 되었습니다.
unknown_picture = face_recognition.load_image_file("unknown.jpg")
unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0]
# 이제 `compare_faces`를 통해 두 얼굴이 같은 얼굴인지 비교할 수 있습니다!
results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding)
if results[0] == True:
print("It's a picture of me!")
else:
print("It's not a picture of me!")
```
[이 예제](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py) 를 사용하여 테스트 해 보십시오.
## Python 코드 예제
모든 예제는 [여기](https://github.com/ageitgey/face_recognition/tree/master/examples) 에 있습니다.
#### 얼굴 탐지
* [사진에서 얼굴 찾기](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py)
* [사진에서 얼굴 찾기(딥 러닝 사용)](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py)
* [이미지 모음에서 얼굴 찾기 w/ GPU (딥 러닝 사용)](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py)
* [웹캠을 사용하여 라이브 비디오의 모든 얼굴을 흐리게 처리하기 (OpenCV가 설치되어 있어야 함)](https://github.com/ageitgey/face_recognition/blob/master/examples/blur_faces_on_webcam.py)
#### 얼굴의 특징
* [사진의 특정 얼굴 특징 확인하기](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py)
* [(아주 우스꽝스러운) 디지털 메이크업 적용하기](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py)
#### 얼굴 인식
* [알고있는 사람들의 사진을 기반으로 사진에서 알 수 없는 얼굴을 찾고 인식하기](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py)
* [사진 안의 각 사람들을 식별하고 주의에 상자를 그리기](https://github.com/ageitgey/face_recognition/blob/master/examples/identify_and_draw_boxes_on_faces.py)
* [얼굴 구분을 참/거짓 구분 대신 숫자로 비교하기](https://github.com/ageitgey/face_recognition/blob/master/examples/face_distance.py)
* [웹캠을 사용하여 라이브 비디오의 얼굴 인식하기 - 간단함 / 느린 버전 (OpenCV가 설치되어 있어야 함)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam.py)
* [웹캠을 사용하여 라이브 비디오의 얼굴 인식하기 - 빠른 버전 (OpenCV가 설치되어 있어야 함)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py)
* [비디오 파일에서 얼굴을 인식하고 새 비디오 파일을 작성하기 (OpenCV가 설치되어 있어야 함)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_video_file.py)
* [Raspberry Pi w/ camera에서의 얼굴 인식하기](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_on_raspberry_pi.py)
* [HTTP를 통해 얼굴을 인식하는 웹 서비스 실행하기 (Flask가 설치되어 있어야 함)](https://github.com/ageitgey/face_recognition/blob/master/examples/web_service_example.py)
* [K-nearest neighbors classifier를 통한 얼굴 인식하기](https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_knn.py)
## 독립적인 실행 파일 만들기
`Python`이나 `face_recognition`을 설치할 필요 없이 실행할 수 있는 독립적인 실행형 실행 파일을 만들려면 [PyInstaller](https://github.com/pyinstaller/pyinstaller) 를 사용하면 됩니다. 그러나, 이 라이브러리로 작업하려면 어느 정도의 설정 커스텀이 필요합니다. 방법에 대해서는 [이 이슈](https://github.com/ageitgey/face_recognition/issues/357) 를 참조하십시오.
## `face_recognition`을 다루는 글 및 가이드
- 얼굴 인식이 어떻게 작동하는지에 관한 글: [딥 러닝을 통한 현대적 얼굴 인식](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78)
- 알고리즘과 알고리즘이 일반적으로 어떻게 작동하는지
- Adrian Rosebrock의 [OpenCV, Python 및 딥 러닝을 통한 얼굴 인식](https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/)
- 실제로 얼굴 인식을 사용하는 법
- Adrian Rosebrock의 [Raspberry Pi 얼굴 인식](https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/)
- Raspberry Pi에서 어떻게 사용하는지
- Adrian Rosebrock의 [Python 얼굴 클러스터링](https://www.pyimagesearch.com/2018/07/09/face-clustering-with-python/)
- 비지도 학습을 사용하여 각 사진에 나타나는 사람을 기반으로 사진을 자동 클러스터하는 방법
## 얼굴 인식이 어떻게 작동하는지
black box 라이브러리에 의존하는 대신 얼굴 위치와 인식이 어떻게 작동하는지 알고 싶으시다면 [이 글](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78) 을 읽으십시오.
## 주의 사항
* 얼굴 인식의 모델은 성인에 대한 데이터를 통해 훈련이 되었으며 따라서 어린이의 경우에는 잘 적용이 되지 않습니다. 0.6의 기본 임계 값을 사용한다면 어린이들의 얼굴을 구분하지 못하는 경향이 있습니다.
* 소수 민족마다 정확성이 다를 수 있습니다. 자세한 내용은 [이 위키 페이지](https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems#question-face-recognition-works-well-with-european-individuals-but-overall-accuracy-is-lower-with-asian-individuals) 를 참조하십시오.
## <a name="deployment">클라우드 호스트에 배포 (Heroku, AWS, 기타 등)</a>
`face_recognition`은 C++로 작성된 `dlib`에 의존하기 때문에, Heroku 또는 AWS와 같은 클라우드 호스팅 제공 업체에 이를 사용하는 앱을 배포하는 것은 까다로울 수 있습니다.
더 쉬운 작업을 위해, [Docker](https://www.docker.com/) container에서 `face_recognition`으로 빌드 된 앱을 실행하는 방법을 보여주는 이 repo의 Dockerfile 예제가 있습니다. 이를 통해 Docker 이미지를 지원하는 모든 서비스에 배포할 수 있어야합니다.
다음을 실행하여 Docker 이미지를 로컬로 시도 할 수 있습니다: `docker-compose up --build`
GPU (드라이버 >= 384.81) 및 [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker) 가 설치된 Linux 사용자는 GPU에서 예제를 실행할 수 있습니다: [docker-compose.yml](docker-compose.yml) 파일을 열고 `dockerfile: Dockerfile.gpu`와 `runtime: nvidia` 행의 주석 처리를 제거합니다.
## 문제가 있으십니까?
문제가 발생하면 github 문제를 제기하기 전에 위키의 [일반적인 오류](https://github.com/ageitgey/face_recognition/wiki/Common-Errors) 섹션을 읽어보십시오.
## 감사의 말
* `dlib`를 만들고 이 라이브러리에 사용된 얼굴 인식 기능과 얼굴 인코딩 모델을 제공한 [Davis King](https://github.com/davisking) ([@nulhom](https://twitter.com/nulhom)) 에게 많은 감사를 드립니다. 얼굴 인코딩을 지원하는 ResNet에 대한 자세한 내용은 [블로그 게시물](http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html) 을 확인하십시오.
* numpy, scipy, scikit-image, pillow 등의 모든 멋진 파이썬 데이터 과학 라이브러리에서 일하는 모든 사람들에게 감사합니다. 이런 종류의 것들을 파이썬에서 쉽고 재미있게 만듭니다.
* [Cookiecutter](https://github.com/audreyr/cookiecutter) 와 [audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) 프로젝트 템플릿 덕분에 파이썬 프로젝트 패키징 방식이 더 괜찮아졌습니다.

View File

@@ -0,0 +1,377 @@
# Face Recognition 人脸识别
> 译者注:
>
> 本项目[face_recognition](https://github.com/ageitgey/face_recognition)是一个强大、简单、易上手的人脸识别开源项目,并且配备了完整的开发文档和应用案例,特别是兼容树莓派系统。
>
> 为了便于中国开发者研究学习人脸识别、贡献代码我将本项目README文件翻译成中文。
>
> 向本项目的所有贡献者致敬。
>
> 英译汉:同济大学开源软件协会 [子豪兄Tommy](https://github.com/TommyZihao)
>
> Translator's note:
>
> [face_recognition](https://github.com/ageitgey/face_recognition) is a powerful, simple and easy-to-use face recognition open source project with complete development documents and application cases, especially it is compatible with Raspberry Pi.
>
> In order to facilitate Chinese software developers to learn, make progress in face recognition development and source code contributions, I translated README file into simplified Chinese.
>
> Salute to all contributors to this project.
>
> Translator: Tommy in Tongji Univerisity Opensource Association [子豪兄Tommy](https://github.com/TommyZihao)
本项目是世界上最简洁的人脸识别库你可以使用Python和命令行工具提取、识别、操作人脸。
本项目的人脸识别是基于业内领先的C++开源库 [dlib](http://dlib.net/)中的深度学习模型,用[Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/)人脸数据集进行测试有高达99.38%的准确率。但对小孩和亚洲人脸的识别准确率尚待提升。
> [Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/)是美国麻省大学安姆斯特分校University of Massachusetts Amherst)制作的人脸数据集该数据集包含了从网络收集的13,000多张面部图像。
本项目提供了简易的`face_recognition`命令行工具,你可以用它处理整个文件夹里的图片。
[![PyPI](https://img.shields.io/pypi/v/face_recognition.svg)](https://pypi.python.org/pypi/face_recognition)
[![Build Status](https://travis-ci.org/ageitgey/face_recognition.svg?branch=master)](https://travis-ci.org/ageitgey/face_recognition)
[![Documentation Status](https://readthedocs.org/projects/face-recognition/badge/?version=latest)](http://face-recognition.readthedocs.io/en/latest/?badge=latest)
## 特性
#### 从图片里找到人脸
定位图片中的所有人脸:
![](https://cloud.githubusercontent.com/assets/896692/23625227/42c65360-025d-11e7-94ea-b12f28cb34b4.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)
```
#### 识别人脸关键点
识别人脸关键点,包括眼睛、鼻子、嘴和下巴。
![](https://cloud.githubusercontent.com/assets/896692/23625282/7f2d79dc-025d-11e7-8728-d8924596f8fa.png)
```python
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
```
识别人脸关键点在很多领域都有用处,但同样你也可以把这个功能玩坏,比如本项目的 [digital make-up](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py)自动化妆案例(就像美图秀秀一样)。
![](https://cloud.githubusercontent.com/assets/896692/23625283/80638760-025d-11e7-80a2-1d2779f7ccab.png)
#### 识别图片中的人是谁
![](https://cloud.githubusercontent.com/assets/896692/23625229/45e049b6-025d-11e7-89cc-8a71cf89e713.png)
```python
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
```
你也可以配合其它的Python库(比如opencv)实现实时人脸检测:
![](https://cloud.githubusercontent.com/assets/896692/24430398/36f0e3f0-13cb-11e7-8258-4d0c9ce1e419.gif)
看这个案例 [实时人脸检测](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py) 。
## 安装
### 环境配置
- Python 3.3+ or Python 2.7
- macOS or Linux
- Windows(并不是我们官方支持的,但也许也能用)
### 不同操作系统的安装方法
#### 在 Mac 或者 Linux上安装本项目
第一步安装dlib和相关Python依赖
- [如何在macOS或者Ubuntu上安装dlib](https://gist.github.com/ageitgey/629d75c1baac34dfa5ca2a1928a7aeaf)
然后,使用`pip3`(Python 2用户使用`pip2`)从pypi安装本模块:
```bash
pip3 install face_recognition
```
如果你遇到了幺蛾子可以用Ubuntu虚拟机安装本项目看下面这个教程。
[如何使用Adam Geitgey大神提供的Ubuntu虚拟机镜像文件安装配置虚拟机(本项目已经包含在镜像中)](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b)。
#### 在 Mac 或者 Linux上安装本项目(方法二)
修改你的pip镜像源为清华镜像,然后使用`pip install face_recognition`,可以自动帮你安装各种依赖,包括dlib。只是在安装dlib的时候可能会出问题,因为dlib需要编译,出现的问题一般是`gcc`或者`g++`版本的问题。所以在`pip install face_recognition`之前,可以在命令行键入:
```bash
export CC=/usr/local/bin/gcc
export CXX=/usr/local/bin/g++
```
来指定你的gcc和g++对应的位置(这两句话会临时修改当前终端的环境变量,/usr/local/bin/gcc对应你自己gcc或者g++所在目录)。
#### 在树莓派上安装
- [树莓派安装指南](https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65)
#### 在Windows上安装
虽然本项目官方并不支持Windows但一些大神们摸索出了在Windows上运行本项目的方法
- [@masoudr写的教程如何在Win10系统上安装 dlib库和 face_recognition项目](https://github.com/ageitgey/face_recognition/issues/175#issue-257710508)
#### 使用Ubuntu虚拟机镜像文件安装配置虚拟机(本项目已经包含在这个镜像中)
- [如何使用Adam Geitgey大神提供的Ubuntu虚拟机镜像文件安装配置虚拟机(本项目已经包含在镜像中)](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b)(需要电脑中安装VMWare Player 或者 VirtualBox)
## 使用方法
### 命令行界面
当你安装好了本项目,你可以使用两种命令行工具:
- `face_recognition` - 在单张图片或一个图片文件夹中认出是谁的脸。
- `face_detection` - 在单张图片或一个图片文件夹中定位人脸位置。
#### `face_recognition` 命令行工具
`face_recognition`命令行工具可以在单张图片或一个图片文件夹中认出是谁的脸。
首先,你得有一个你已经知道名字的人脸图片文件夹,一个人一张图,图片的文件名即为对应的人的名字:
![known](https://cloud.githubusercontent.com/assets/896692/23582466/8324810e-00df-11e7-82cf-41515eba704d.png)
然后,你需要第二个图片文件夹,文件夹里面是你希望识别的图片:
![unknown](https://cloud.githubusercontent.com/assets/896692/23582465/81f422f8-00df-11e7-8b0d-75364f641f58.png)
然后,你在命令行中切换到这两个文件夹所在路径,然后使用`face_recognition`命令行,传入这两个图片文件夹,然后就会输出未知图片中人的名字:
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
输出结果的每一行对应着图片中的一张脸,图片名字和对应人脸识别结果用逗号分开。
如果结果输出了`unknown_person`,那么代表这张脸没有对应上已知人脸图片文件夹中的任何一个人。
#### `face_detection` 命令行工具
`face_detection`命令行工具可以在单张图片或一个图片文件夹中定位人脸位置(输出像素点坐标)。
在命令行中使用`face_detection`,传入一个图片文件夹或单张图片文件来进行人脸位置检测:
```bash
$ face_detection ./folder_with_pictures/
examples/image1.jpg,65,215,169,112
examples/image2.jpg,62,394,211,244
examples/image2.jpg,95,941,244,792
```
输出结果的每一行都对应图片中的一张脸,输出坐标代表着这张脸的上、右、下、左像素点坐标。
##### 调整人脸识别的容错率和敏感度
如果一张脸识别出不止一个结果,那么这意味着他和其他人长得太像了(本项目对于小孩和亚洲人的人脸识别准确率有待提升)。你可以把容错率调低一些,使识别结果更加严格。
通过传入参数 `--tolerance` 来实现这个功能默认的容错率是0.6,容错率越低,识别越严格准确。
```bash
$ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
```
如果你想看人脸匹配的具体数值,可以传入参数 `--show-distance true`
```bash
$ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None
```
##### 更多的例子
如果你并不在乎图片的文件名,只想知道文件夹中的图片里有谁,可以用这个管道命令:
```bash
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2
Barack Obama
unknown_person
```
##### 加速人脸识别运算
如果你的CPU是多核的,你可以通过并行运算加速人脸识别。例如,如果你的CPU有四个核心,那么你可以通过并行运算提升大概四倍的运算速度。
如果你使用Python3.4或更新的版本,可以传入 `--cpus <number_of_cpu_cores_to_use>` 参数:
```bash
$ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/
```
你可以传入 `--cpus -1`参数来调用cpu的所有核心。
> 子豪兄批注:树莓派3B有4个CPU核心,传入多核参数可以显著提升图片识别的速度(亲测)!
#### Python 模块:`face_recognition`
在Python中,你可以导入`face_recognition`模块,调用我们提供的丰富的API接口,用几行代码就可以轻松玩转各种人脸识别功能!
API 接口文档: [https://face-recognition.readthedocs.io](https://face-recognition.readthedocs.io/en/latest/face_recognition.html)
##### 在图片中定位人脸的位置
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image)
# face_locations is now an array listing the co-ordinates of each face!
```
看 [案例:定位拜登的脸](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py)
![案例:定位拜登的脸](https://upload-images.jianshu.io/upload_images/13714448-b4ce08c6ba699c5e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
你也可以使用深度学习模型达到更加精准的人脸定位。
注意:这种方法需要GPU加速(通过英伟达显卡的CUDA库驱动),你在编译安装`dlib`的时候也需要开启CUDA支持。
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image, model="cnn")
# face_locations is now an array listing the co-ordinates of each face!
```
看 [案例:使用卷积神经网络深度学习模型定位拜登的脸](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py)
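可以用下面这个小脚本检查你安装的`dlib`是否启用了CUDA(假设你已经按上文编译安装了dlib):
```python
import dlib

print(dlib.DLIB_USE_CUDA)           # True means dlib was compiled with CUDA support
print(dlib.cuda.get_num_devices())  # number of visible GPUs
```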
如果你有很多图片需要识别,同时又有GPU,那么你可以参考这个例子:[案例:使用卷积神经网络深度学习模型批量识别图片中的人脸](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py)。
##### 识别单张图片中人脸的关键点
```python
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
# face_landmarks_list is now an array with the locations of each facial feature in each face.
# face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye.
```
看这个案例 [案例:提取奥巴马和拜登的面部关键点](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py)
![案例:提取奥巴马和拜登的面部关键点](https://upload-images.jianshu.io/upload_images/13714448-734e8b4f5592ed4a.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
##### 识别图片中的人是谁
```python
import face_recognition
picture_of_me = face_recognition.load_image_file("me.jpg")
my_face_encoding = face_recognition.face_encodings(picture_of_me)[0]
# my_face_encoding now contains a universal 'encoding' of my facial features that can be compared to any other picture of a face!
unknown_picture = face_recognition.load_image_file("unknown.jpg")
unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0]
# Now we can see the two face encodings are of the same person with `compare_faces`!
results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding)
if results[0] == True:
print("It's a picture of me!")
else:
print("It's not a picture of me!")
```
看这个案例 [案例:是奥巴马还是拜登?](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py)
## Python 案例
所有案例都在这个链接中 [也就是examples文件夹](https://github.com/ageitgey/face_recognition/tree/master/examples).
#### 人脸定位
- [案例:定位拜登的脸](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py)
- [案例:使用卷积神经网络深度学习模型定位拜登的脸](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py)
- [案例:使用卷积神经网络深度学习模型批量识别图片中的人脸](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py)
- [案例把来自网络摄像头视频里的人脸高斯模糊需要安装OpenCV](https://github.com/ageitgey/face_recognition/blob/master/examples/blur_faces_on_webcam.py)
#### 人脸关键点识别
- [案例:提取奥巴马和拜登的面部关键点](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py)
- [案例:给美国副总统拜登涂美妆](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py)
#### 人脸识别
- [案例:是奥巴马还是拜登?](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py)
- [案例:人脸识别之后在原图上画框框并标注姓名](https://github.com/ageitgey/face_recognition/blob/master/examples/identify_and_draw_boxes_on_faces.py)
- [案例:在不同精度上比较两个人脸是否属于一个人](https://github.com/ageitgey/face_recognition/blob/master/examples/face_distance.py)
- [案例:从摄像头获取视频进行人脸识别-较慢版需要安装OpenCV](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam.py)
- [案例:从摄像头获取视频进行人脸识别-较快版需要安装OpenCV](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py)
- [案例从视频文件中识别人脸并把识别结果输出为新的视频文件需要安装OpenCV](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_video_file.py)
- [案例:通过树莓派摄像头进行人脸个数统计及人脸身份识别](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_on_raspberry_pi.py)
- [案例通过浏览器HTTP访问网络服务器进行人脸识别需要安装Flask后端开发框架)](https://github.com/ageitgey/face_recognition/blob/master/examples/web_service_example.py)
- [案例基于K最近邻KNN分类算法进行人脸识别](https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_knn.py)
## 关于 `face_recognition`的文章和教程
- 本项目作者写的一篇文章 [Modern Face Recognition with Deep Learning](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78)
- 主要内容:基本算法和原理
- [Face recognition with OpenCV, Python, and deep learning](https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/) by Adrian Rosebrock
- 主要内容:如何实际使用本项目
- [Raspberry Pi Face Recognition](https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/) by Adrian Rosebrock
- 主要内容:如何在树莓派上使用本项目
- [Face clustering with Python](https://www.pyimagesearch.com/2018/07/09/face-clustering-with-python/) by Adrian Rosebrock
- 主要内容:使用无监督学习算法,根据每张照片中出现的人物自动对图片进行聚类
## 人脸识别的原理
如果你想更深入了解人脸识别这个黑箱的原理,请[读这篇文章](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78)。
> 子豪兄批注:一定要看这篇文章,讲的既有趣又有料。
## 警告说明
- 本项目的人脸识别模型是基于成年人数据训练的,在孩子身上效果可能一般。如果图片中有孩子,使用默认的比较阈值(0.6)时容易把不同的孩子误判成同一个人。
- 不同人种的识别结果可能不同, [看wiki百科页面](https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems#question-face-recognition-works-well-with-european-individuals-but-overall-accuracy-is-lower-with-asian-individuals) 查看更多细节。
## 把本项目部署在云服务器上 (Heroku, AWS等)
本项目依赖于C++库`dlib`,所以把使用本项目开发的应用部署到Heroku或者AWS等云端服务器上可能会比较麻烦。
为了简化这个过程,有一个Dockerfile案例,教你怎么把`face_recognition`开发的app封装成[Docker](https://www.docker.com/)容器文件,你可以把它部署在所有支持Docker镜像文件的云服务上。
## 出了幺蛾子?
如果出了问题,请在GitHub提交Issue之前查看[常见错误](https://github.com/ageitgey/face_recognition/wiki/Common-Errors)。
## 鸣谢
- 非常感谢 [Davis King](https://github.com/davisking) ([@nulhom](https://twitter.com/nulhom))创建了`dlib`库,提供了相应的人脸关键点检测和人脸编码相关的模型,你可以查看 [blog post](http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html) 这个网页获取更多有关ResNet的信息。
- 感谢每一个相关Python模块包括numpy,scipy,scikit-image,pillow等的贡献者。
- 感谢 [Cookiecutter](https://github.com/audreyr/cookiecutter) 和[audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) 项目模板使得Python的打包方式更容易接受。

View File

@@ -0,0 +1,16 @@
version: '2.3'
services:
face_recognition:
image: face_recognition
container_name: face_recognition
working_dir: /face_recognition/examples
build:
context: .
#Uncomment this line to run the example on the GPU (requires Nvidia-Docker)
# dockerfile: Dockerfile.gpu
command: python3 -u find_faces_in_picture_cnn.py
volumes:
- ./:/face_recognition
#Uncomment this line to run the example on the GPU (requires Nvidia-Docker)
# runtime: nvidia

View File

@@ -0,0 +1,17 @@
FROM animcogn/face_recognition:cpu
# The rest of this file just runs an example script.
# If you wanted to use this Dockerfile to run your own app instead, maybe you would do this:
# COPY . /root/your_app_or_whatever
# RUN cd /root/your_app_or_whatever && \
# pip3 install -r requirements.txt
# RUN whatever_command_you_run_to_start_your_app
COPY . /root/face_recognition
RUN cd /root/face_recognition && \
pip3 install -r requirements.txt && \
python3 setup.py install
CMD cd /root/face_recognition/examples && \
python3 recognize_faces_in_pictures.py

View File

@@ -0,0 +1,56 @@
# Docker Builds
If you've never used Docker before, check out the [getting started guide.](https://docs.docker.com/get-started/)
Up-to-date prebuilt images can be found [on Docker hub.](https://hub.docker.com/repository/docker/animcogn/face_recognition)
## CPU Images
- [`cpu-latest`, `cpu`, `cpu-0.1`, `latest`](cpu/Dockerfile)
- [`cpu-jupyter-kubeflow-latest`, `cpu-jupyter-kubeflow`, `cpu-jupyter-kubeflow-0.1`](cpu-jupyter-kubeflow/Dockerfile)
The CPU images should run out of the box without any driver prerequisites.
### GPU Images
- [`gpu-latest`, `gpu`, `gpu-0.1`](gpu/Dockerfile)
- [`gpu-jupyter-kubeflow-latest`, `gpu-jupyter-kubeflow`, `gpu-jupyter-kubeflow-0.1`](gpu-jupyter-kubeflow/Dockerfile)
## GPU Images
### Prerequisites
To use the GPU images, you need to have:
- [The Nvidia drivers](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#nvidia-drivers)
- [The Nvidia-docker container runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#setting-up-nvidia-container-toolkit)
- [Docker configured to use the Nvidia container runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html#daemon-configuration-file)
Once you have those installed, you should be ready to start running the GPU instances.
### Testing GPUs
To make sure your GPU instance is setup correctly, run the following in a container:
```python
import dlib
print(dlib.cuda.get_num_devices())
```
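As a further smoke test (with a hypothetical image name), you can confirm the CNN face detector runs end-to-end in the GPU container:
```python
import face_recognition

# "test.jpg" is a placeholder; any image with a face will do
image = face_recognition.load_image_file("test.jpg")
print(face_recognition.face_locations(image, model="cnn"))
```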
## Jupyter Images
The Jupyter images are built to be deployed on [Kubeflow](https://www.kubeflow.org/). However, if you just want to run a normal Jupyter instance, they're a great template to build your own.
## Example Dockerfile
Here's an example Dockerfile using the prebuilt images:
```Dockerfile
FROM animcogn/face_recognition:gpu
COPY requirements.txt requirements.txt
RUN pip3 install -r ./requirements.txt
COPY my_app /my_app
CMD [ "python3", "/my_app/my_app.py" ]
```

View File

@@ -0,0 +1,15 @@
FROM animcogn/face_recognition:cpu
RUN useradd -ms /bin/bash jovyan && \
chown -R jovyan:jovyan /opt/venv && \
echo 'PATH="/opt/venv/bin:$PATH"' >> /home/jovyan/.bashrc
USER jovyan
ENV PATH="/opt/venv/bin:$PATH"
RUN pip3 install jupyterlab
ENV NB_PREFIX /
CMD ["sh", "-c", "jupyter lab --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]

View File

@@ -0,0 +1,74 @@
# Builder Image
FROM python:3.8-slim-buster AS compile
# Install Dependencies
RUN apt-get -y update && apt-get install -y --fix-missing \
build-essential \
cmake \
gfortran \
git \
wget \
curl \
graphicsmagick \
libgraphicsmagick1-dev \
libatlas-base-dev \
libavcodec-dev \
libavformat-dev \
libgtk2.0-dev \
libjpeg-dev \
liblapack-dev \
libswscale-dev \
pkg-config \
python3-dev \
python3-numpy \
software-properties-common \
zip \
&& apt-get clean && rm -rf /tmp/* /var/tmp/*
# Virtual Environment
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Install Dlib
ENV CFLAGS=-static
RUN pip3 install --upgrade pip && \
git clone -b 'v19.21' --single-branch https://github.com/davisking/dlib.git && \
cd dlib/ && \
python3 setup.py install --set BUILD_SHARED_LIBS=OFF
RUN pip3 install face_recognition
# Runtime Image
FROM python:3.8-slim-buster
COPY --from=compile /opt/venv /opt/venv
COPY --from=compile \
# Sources
/lib/x86_64-linux-gnu/libpthread.so.0 \
/lib/x86_64-linux-gnu/libz.so.1 \
/lib/x86_64-linux-gnu/libm.so.6 \
/lib/x86_64-linux-gnu/libgcc_s.so.1 \
/lib/x86_64-linux-gnu/libc.so.6 \
/lib/x86_64-linux-gnu/libdl.so.2 \
/lib/x86_64-linux-gnu/librt.so.1 \
# Destination
/lib/x86_64-linux-gnu/
COPY --from=compile \
# Sources
/usr/lib/x86_64-linux-gnu/libX11.so.6 \
/usr/lib/x86_64-linux-gnu/libXext.so.6 \
/usr/lib/x86_64-linux-gnu/libpng16.so.16 \
/usr/lib/x86_64-linux-gnu/libjpeg.so.62 \
/usr/lib/x86_64-linux-gnu/libstdc++.so.6 \
/usr/lib/x86_64-linux-gnu/libxcb.so.1 \
/usr/lib/x86_64-linux-gnu/libXau.so.6 \
/usr/lib/x86_64-linux-gnu/libXdmcp.so.6 \
/usr/lib/x86_64-linux-gnu/libbsd.so.0 \
# Destination
/usr/lib/x86_64-linux-gnu/
# Add our packages
ENV PATH="/opt/venv/bin:$PATH"

View File

@@ -0,0 +1,15 @@
FROM animcogn/face_recognition:gpu
RUN useradd -ms /bin/bash jovyan && \
chown -R jovyan:jovyan /opt/venv && \
echo 'PATH="/opt/venv/bin:$PATH"' >> /home/jovyan/.bashrc
USER jovyan
ENV PATH="/opt/venv/bin:$PATH"
RUN pip3 install jupyterlab
ENV NB_PREFIX /
CMD ["sh", "-c", "jupyter lab --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]

View File

@@ -0,0 +1,97 @@
FROM nvidia/cuda:11.2.0-cudnn8-devel AS compile
# Install dependencies
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y && apt-get install -y \
git \
cmake \
libsm6 \
libxext6 \
libxrender-dev \
python3 \
python3-pip \
python3-venv \
python3-dev \
python3-numpy \
gcc \
build-essential \
gfortran \
wget \
curl \
graphicsmagick \
libgraphicsmagick1-dev \
libatlas-base-dev \
libavcodec-dev \
libavformat-dev \
libgtk2.0-dev \
libjpeg-dev \
liblapack-dev \
libswscale-dev \
pkg-config \
software-properties-common \
zip \
&& apt-get clean && rm -rf /tmp/* /var/tmp/*
# Virtual Environment
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Scikit learn
RUN pip3 install --upgrade pip && \
pip3 install scikit-build
# Install dlib
ENV CFLAGS=-static
RUN git clone -b 'v19.21' --single-branch https://github.com/davisking/dlib.git dlib/ && \
mkdir -p /dlib/build && \
cmake -H/dlib -B/dlib/build -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1 && \
cmake --build /dlib/build && \
cd /dlib && \
python3 /dlib/setup.py install --set BUILD_SHARED_LIBS=OFF
# Install face recognition
RUN pip3 install face_recognition
# Runtime Image
FROM nvidia/cuda:11.2.0-cudnn8-runtime
# Install requirements
RUN apt-get update && apt-get install -y \
python3 \
python3-distutils
# Copy in libs
COPY --from=compile /opt/venv /opt/venv
COPY --from=compile \
# Sources
/lib/x86_64-linux-gnu/libpthread.so.0 \
/lib/x86_64-linux-gnu/libdl.so.2 \
/lib/x86_64-linux-gnu/librt.so.1 \
/lib/x86_64-linux-gnu/libX11.so.6 \
/lib/x86_64-linux-gnu/libpng16.so.16 \
/lib/x86_64-linux-gnu/libjpeg.so.8 \
/lib/x86_64-linux-gnu/libcudnn.so.8 \
/lib/x86_64-linux-gnu/libstdc++.so.6 \
/lib/x86_64-linux-gnu/libm.so.6 \
/lib/x86_64-linux-gnu/libgcc_s.so.1 \
/lib/x86_64-linux-gnu/libc.so.6 \
/lib/x86_64-linux-gnu/libxcb.so.1 \
/lib/x86_64-linux-gnu/libz.so.1 \
/lib/x86_64-linux-gnu/libXau.so.6 \
/lib/x86_64-linux-gnu/libXdmcp.so.6 \
/lib/x86_64-linux-gnu/libbsd.so.0 \
# Destination
/lib/x86_64-linux-gnu/
COPY --from=compile \
# Sources
/usr/local/cuda/lib64/libcublas.so.11 \
/usr/local/cuda/lib64/libcurand.so.10 \
/usr/local/cuda/lib64/libcusolver.so.11 \
/usr/local/cuda/lib64/libcublasLt.so.11 \
# Destination
/usr/local/cuda/lib64/
# Add our packages
ENV PATH="/opt/venv/bin:$PATH"

View File

@@ -0,0 +1,177 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/face_recognition.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/face_recognition.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/face_recognition"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/face_recognition"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

View File

@@ -0,0 +1 @@
.. include:: ../AUTHORS.rst

View File

@@ -0,0 +1,284 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# face_recognition documentation build configuration file, created by
# sphinx-quickstart on Tue Jul 9 22:26:36 2013.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
from unittest.mock import MagicMock
class Mock(MagicMock):
@classmethod
def __getattr__(cls, name):
return MagicMock()
MOCK_MODULES = ['face_recognition_models', 'Click', 'dlib', 'numpy', 'PIL']
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# Get the project root dir, which is the parent dir of this
cwd = os.getcwd()
project_root = os.path.dirname(cwd)
# Insert the project root dir as the first element in the PYTHONPATH.
# This lets us ensure that the source package is imported, and that its
# version is used.
sys.path.insert(0, project_root)
import face_recognition
# -- General configuration ---------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Face Recognition'
copyright = u"2017, Adam Geitgey"
# The version info for the project you're documenting, acts as replacement
# for |version| and |release|, also used in various other places throughout
# the built documents.
#
# The short X.Y version.
version = face_recognition.__version__
# The full version, including alpha/beta/rc tags.
release = face_recognition.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to
# some non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built
# documents.
#keep_warnings = False
# -- Options for HTML output -------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as
# html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the
# top of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon
# of the docs. This file should be a Windows icon file (.ico) being
# 16x16 or 32x32 pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory. They are copied after the builtin
# static files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names
# to template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer.
# Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer.
# Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages
# will contain a <link> tag referring to it. The value of this option
# must be the base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'face_recognitiondoc'
# -- Options for LaTeX output ------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'face_recognition.tex',
u'Face Recognition Documentation',
u'Adam Geitgey', 'manual'),
]
# The name of an image file (relative to this directory) to place at
# the top of the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings
# are parts, not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'face_recognition',
u'Face Recognition Documentation',
[u'Adam Geitgey'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ----------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'face_recognition',
u'Face Recognition Documentation',
u'Adam Geitgey',
'face_recognition',
     'Recognize faces from Python or from the command line.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False

View File

@ -0,0 +1 @@
.. include:: ../CONTRIBUTING.rst

View File

@ -0,0 +1,10 @@
face_recognition package
========================
Module contents
---------------

.. automodule:: face_recognition.api
    :members:
    :undoc-members:
    :show-inheritance:

View File

@ -0,0 +1 @@
.. include:: ../HISTORY.rst

View File

@ -0,0 +1,22 @@
Welcome to Face Recognition's documentation!
============================================
Contents:

.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   modules
   contributing
   authors
   history
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -0,0 +1,51 @@
.. highlight:: shell
============
Installation
============
Stable release
--------------
To install Face Recognition, run this command in your terminal:

.. code-block:: console

    $ pip3 install face_recognition
This is the preferred method to install Face Recognition, as it will always install the most recent stable release.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
From sources
------------
The sources for Face Recognition can be downloaded from the `Github repo`_.
You can either clone the public repository:

.. code-block:: console

    $ git clone git://github.com/ageitgey/face_recognition
Or download the `tarball`_:

.. code-block:: console

    $ curl -OL https://github.com/ageitgey/face_recognition/tarball/master
Once you have a copy of the source, you can install it with:

.. code-block:: console

    $ python setup.py install
.. _Github repo: https://github.com/ageitgey/face_recognition
.. _tarball: https://github.com/ageitgey/face_recognition/tarball/master
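
Either way, you can quickly sanity-check the install afterwards (just a smoke test, not an official step)::

    $ python3 -c "import face_recognition; print(face_recognition.__version__)"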

View File

@ -0,0 +1,242 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\face_recognition.qhcp
echo.To view the help file:
	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\face_recognition.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end

View File

@ -0,0 +1,7 @@
face_recognition
================

.. toctree::
   :maxdepth: 4

   face_recognition

View File

@ -0,0 +1 @@
.. include:: ../README.rst

View File

@ -0,0 +1,41 @@
=====
Usage
=====

To use Face Recognition in a project::

    import face_recognition
See the examples in the /examples folder on github for how to use each function.
You can also check the API docs for the 'face_recognition' module to see the possible parameters for each function.
The basic idea is that first you load an image::

    import face_recognition

    image = face_recognition.load_image_file("your_file.jpg")
That loads the image into a numpy array. If you already have an image in a numpy array, you can skip this step.
Then you can perform operations on the image, like finding faces, identifying facial features or finding face encodings::

    # Find all the faces in the image
    face_locations = face_recognition.face_locations(image)

    # Or maybe find the facial features in the image
    face_landmarks_list = face_recognition.face_landmarks(image)

    # Or you could get face encodings for each face in the image:
    list_of_face_encodings = face_recognition.face_encodings(image)
Face encodings can be compared against each other to see if the faces are a match. Note: Finding the encoding for a face
is a bit slow, so you might want to save the results for each image in a database or cache if you need to refer back to
it later.
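For example, here is a minimal caching sketch (the ``encodings_cache.pkl`` file name is just an illustration, not part of the library)::

    import pickle

    import face_recognition

    image = face_recognition.load_image_file("your_file.jpg")

    # Computing encodings is the slow part, so do it once and save the result...
    with open("encodings_cache.pkl", "wb") as f:
        pickle.dump(face_recognition.face_encodings(image), f)

    # ...then load the encodings back later without re-running the model
    with open("encodings_cache.pkl", "rb") as f:
        list_of_face_encodings = pickle.load(f)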
But once you have the encodings for faces, you can compare them like this::

    # results is an array of True/False telling if the unknown face matched anyone in the known_face_encodings array
    results = face_recognition.compare_faces(known_face_encodings, a_single_unknown_face_encoding)
It's that simple! Check out the examples for more details.

Binary file not shown.

After

Width:  |  Height:  |  Size: 178 KiB

View File

@ -0,0 +1,77 @@
import timeit
# Note: This example is only tested with Python 3 (not Python 2)
# This is a very simple benchmark to give you an idea of how fast each step of face recognition will run on your system.
# Notice that face detection gets very slow at large image sizes. So you might consider running face detection on a
# scaled down version of your image and then running face encodings on the full size image.
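# A minimal sketch of that idea (illustrative only -- not used by the benchmark below;
# the helper name and the 0.5 scale factor are assumptions, and it needs OpenCV installed):
def encode_with_scaled_detection(image, scale=0.5):
    import cv2
    import face_recognition
    # Detect faces on a scaled-down copy of the image...
    small = cv2.resize(image, (0, 0), fx=scale, fy=scale)
    # ...map each (top, right, bottom, left) box back to full-resolution coordinates...
    boxes = [tuple(int(coord / scale) for coord in box) for box in face_recognition.face_locations(small)]
    # ...and compute the encodings on the full-size image
    return face_recognition.face_encodings(image, known_face_locations=boxes)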
TEST_IMAGES = [
"obama-240p.jpg",
"obama-480p.jpg",
"obama-720p.jpg",
"obama-1080p.jpg"
]
def run_test(setup, test, iterations_per_test=5, tests_to_run=10):
fastest_execution = min(timeit.Timer(test, setup=setup).repeat(tests_to_run, iterations_per_test))
execution_time = fastest_execution / iterations_per_test
fps = 1.0 / execution_time
return execution_time, fps
setup_locate_faces = """
import face_recognition
image = face_recognition.load_image_file("{}")
"""
test_locate_faces = """
face_locations = face_recognition.face_locations(image)
"""
setup_face_landmarks = """
import face_recognition
image = face_recognition.load_image_file("{}")
face_locations = face_recognition.face_locations(image)
"""
test_face_landmarks = """
landmarks = face_recognition.face_landmarks(image, face_locations=face_locations)[0]
"""
setup_encode_face = """
import face_recognition
image = face_recognition.load_image_file("{}")
face_locations = face_recognition.face_locations(image)
"""
test_encode_face = """
encoding = face_recognition.face_encodings(image, known_face_locations=face_locations)[0]
"""
setup_end_to_end = """
import face_recognition
image = face_recognition.load_image_file("{}")
"""
test_end_to_end = """
encoding = face_recognition.face_encodings(image)[0]
"""
print("Benchmarks (Note: All benchmarks are only using a single CPU core)")
print()
for image in TEST_IMAGES:
size = image.split("-")[1].split(".")[0]
print("Timings at {}:".format(size))
print(" - Face locations: {:.4f}s ({:.2f} fps)".format(*run_test(setup_locate_faces.format(image), test_locate_faces)))
print(" - Face landmarks: {:.4f}s ({:.2f} fps)".format(*run_test(setup_face_landmarks.format(image), test_face_landmarks)))
print(" - Encode face (inc. landmarks): {:.4f}s ({:.2f} fps)".format(*run_test(setup_encode_face.format(image), test_encode_face)))
print(" - End-to-end: {:.4f}s ({:.2f} fps)".format(*run_test(setup_end_to_end.format(image), test_end_to_end)))
print()

Binary file not shown.

After

Width:  |  Height:  |  Size: 345 KiB

View File

@ -0,0 +1,105 @@
#!/usr/bin/env python3
# This is a demo of detecting eye status from the user's camera. If the user's eyes are closed for EYES_CLOSED_SECONDS seconds,
# the system will start printing out "EYES CLOSED" to the terminal until the user presses the spacebar to acknowledge.
# The spacebar is read through cv2.waitKey, so the video window must have focus for the key press to register.
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
# imports
import face_recognition
import cv2
import time
from scipy.spatial import distance as dist
EYES_CLOSED_SECONDS = 5
def main():
closed_count = 0
video_capture = cv2.VideoCapture(0)
ret, frame = video_capture.read(0)
# cv2.VideoCapture.release()
small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
rgb_small_frame = small_frame[:, :, ::-1]
face_landmarks_list = face_recognition.face_landmarks(rgb_small_frame)
process = True
while True:
ret, frame = video_capture.read(0)
# get it into the correct format
small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
rgb_small_frame = small_frame[:, :, ::-1]
# get the correct face landmarks
if process:
face_landmarks_list = face_recognition.face_landmarks(rgb_small_frame)
# get eyes
for face_landmark in face_landmarks_list:
left_eye = face_landmark['left_eye']
right_eye = face_landmark['right_eye']
color = (255,0,0)
thickness = 2
cv2.rectangle(small_frame, left_eye[0], right_eye[-1], color, thickness)
cv2.imshow('Video', small_frame)
ear_left = get_ear(left_eye)
ear_right = get_ear(right_eye)
closed = ear_left < 0.2 and ear_right < 0.2
            if closed:
                closed_count += 1
            else:
                closed_count = 0
            if closed_count >= EYES_CLOSED_SECONDS:
                asleep = True
                while asleep:  # continue this loop until they wake up and acknowledge
print("EYES CLOSED")
if cv2.waitKey(1) == 32: #Wait for space key
asleep = False
print("EYES OPENED")
closed_count = 0
process = not process
key = cv2.waitKey(1) & 0xFF
if key == ord("q"):
break
def get_ear(eye):
# compute the euclidean distances between the two sets of
# vertical eye landmarks (x, y)-coordinates
A = dist.euclidean(eye[1], eye[5])
B = dist.euclidean(eye[2], eye[4])
# compute the euclidean distance between the horizontal
# eye landmark (x, y)-coordinates
C = dist.euclidean(eye[0], eye[3])
# compute the eye aspect ratio
ear = (A + B) / (2.0 * C)
# return the eye aspect ratio
return ear
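# A quick worked example with made-up landmark distances: for an open eye with
# A = 8, B = 8 and C = 30, EAR = (8 + 8) / (2 * 30) ~= 0.27, which is above the 0.2
# threshold used in main(); as the eyelid closes, A and B shrink and EAR falls toward 0.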
if __name__ == "__main__":
main()

View File

@ -0,0 +1,52 @@
import face_recognition
import cv2
# This is a demo of blurring faces in video.
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)
# Initialize some variables
face_locations = []
while True:
# Grab a single frame of video
ret, frame = video_capture.read()
# Resize frame of video to 1/4 size for faster face detection processing
small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
# Find all the faces and face encodings in the current frame of video
face_locations = face_recognition.face_locations(small_frame, model="cnn")
# Display the results
for top, right, bottom, left in face_locations:
# Scale back up face locations since the frame we detected in was scaled to 1/4 size
top *= 4
right *= 4
bottom *= 4
left *= 4
# Extract the region of the image that contains the face
face_image = frame[top:bottom, left:right]
# Blur the face image
face_image = cv2.GaussianBlur(face_image, (99, 99), 30)
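        # (99, 99) is the Gaussian kernel size and 30 is sigmaX; larger values give a heavier blur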
# Put the blurred face region back into the frame image
frame[top:bottom, left:right] = face_image
# Display the resulting image
cv2.imshow('Video', frame)
# Hit 'q' on the keyboard to quit!
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

View File

@ -0,0 +1,34 @@
from PIL import Image, ImageDraw
import face_recognition
# Load the jpg file into a numpy array
image = face_recognition.load_image_file("biden.jpg")
# Find all facial features in all the faces in the image
face_landmarks_list = face_recognition.face_landmarks(image)
pil_image = Image.fromarray(image)
for face_landmarks in face_landmarks_list:
d = ImageDraw.Draw(pil_image, 'RGBA')
# Make the eyebrows into a nightmare
d.polygon(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 128))
d.polygon(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 128))
d.line(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 150), width=5)
d.line(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 150), width=5)
# Gloss the lips
d.polygon(face_landmarks['top_lip'], fill=(150, 0, 0, 128))
d.polygon(face_landmarks['bottom_lip'], fill=(150, 0, 0, 128))
d.line(face_landmarks['top_lip'], fill=(150, 0, 0, 64), width=8)
d.line(face_landmarks['bottom_lip'], fill=(150, 0, 0, 64), width=8)
# Sparkle the eyes
d.polygon(face_landmarks['left_eye'], fill=(255, 255, 255, 30))
d.polygon(face_landmarks['right_eye'], fill=(255, 255, 255, 30))
# Apply some eyeliner
d.line(face_landmarks['left_eye'] + [face_landmarks['left_eye'][0]], fill=(0, 0, 0, 110), width=6)
d.line(face_landmarks['right_eye'] + [face_landmarks['right_eye'][0]], fill=(0, 0, 0, 110), width=6)
pil_image.show()

View File

@ -0,0 +1,37 @@
import face_recognition
# Often instead of just checking if two faces match or not (True or False), it's helpful to see how similar they are.
# You can do that by using the face_distance function.
# The model was trained in a way that faces with a distance of 0.6 or less should be a match. But if you want to
# be more strict, you can look for a smaller face distance. For example, using a 0.55 cutoff would reduce false
# positive matches at the risk of more false negatives.
# Note: This isn't exactly the same as a "percent match". The scale isn't linear. But you can assume that images with a
# smaller distance are more similar to each other than ones with a larger distance.
# Load some images to compare against
known_obama_image = face_recognition.load_image_file("obama.jpg")
known_biden_image = face_recognition.load_image_file("biden.jpg")
# Get the face encodings for the known images
obama_face_encoding = face_recognition.face_encodings(known_obama_image)[0]
biden_face_encoding = face_recognition.face_encodings(known_biden_image)[0]
known_encodings = [
obama_face_encoding,
biden_face_encoding
]
# Load a test image and get encodings for it
image_to_test = face_recognition.load_image_file("obama2.jpg")
image_to_test_encoding = face_recognition.face_encodings(image_to_test)[0]
# See how far apart the test image is from the known faces
face_distances = face_recognition.face_distance(known_encodings, image_to_test_encoding)
for i, face_distance in enumerate(face_distances):
print("The test image has a distance of {:.2} from known image #{}".format(face_distance, i))
print("- With a normal cutoff of 0.6, would the test image match the known image? {}".format(face_distance < 0.6))
print("- With a very strict cutoff of 0.5, would the test image match the known image? {}".format(face_distance < 0.5))
print()

View File

@ -0,0 +1,206 @@
"""
This is an example of using the k-nearest-neighbors (KNN) algorithm for face recognition.
When should I use this example?
This example is useful when you wish to recognize a large set of known people,
and make a prediction for an unknown person in a feasible computation time.
Algorithm Description:
The knn classifier is first trained on a set of labeled (known) faces and can then predict the person
in an unknown image by finding the k most similar faces (images with the closest face features under euclidean distance)
in its training set, and performing a majority vote (possibly weighted) on their label.
For example, if k=3, and the three closest face images to the given image in the training set are one image of Biden
and two images of Obama, the result would be 'Obama'.
* This implementation uses a weighted vote, such that the votes of closer-neighbors are weighted more heavily.
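(A worked illustration of the weighting, with made-up distances: if the Biden neighbor is at
distance 0.30 and the two Obama neighbors are at 0.55 and 0.60, inverse-distance weighting
scores Biden at 1/0.30 ~ 3.3 and Obama at 1/0.55 + 1/0.60 ~ 3.5, so Obama still wins,
but only narrowly.)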
Usage:
1. Prepare a set of images of the known people you want to recognize. Organize the images in a single directory
with a sub-directory for each known person.
2. Then, call the 'train' function with the appropriate parameters. Make sure to pass in the 'model_save_path' if you
want to save the model to disk so you can re-use the model without having to re-train it.
3. Call 'predict' and pass in your trained model to recognize the people in an unknown image.
NOTE: This example requires scikit-learn to be installed! You can install it with pip:
$ pip3 install scikit-learn
"""
import math
from sklearn import neighbors
import os
import os.path
import pickle
from PIL import Image, ImageDraw
import face_recognition
from face_recognition.face_recognition_cli import image_files_in_folder
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}
def train(train_dir, model_save_path=None, n_neighbors=None, knn_algo='ball_tree', verbose=False):
"""
Trains a k-nearest neighbors classifier for face recognition.
:param train_dir: directory that contains a sub-directory for each known person, with its name.
(View in source code to see train_dir example tree structure)
Structure:
<train_dir>/
<person1>/
<somename1>.jpeg
<somename2>.jpeg
...
<person2>/
<somename1>.jpeg
<somename2>.jpeg
...
:param model_save_path: (optional) path to save model on disk
:param n_neighbors: (optional) number of neighbors to weigh in classification. Chosen automatically if not specified
    :param knn_algo: (optional) underlying data structure to support knn. Default is ball_tree
:param verbose: verbosity of training
:return: returns knn classifier that was trained on the given data.
"""
X = []
y = []
# Loop through each person in the training set
for class_dir in os.listdir(train_dir):
if not os.path.isdir(os.path.join(train_dir, class_dir)):
continue
# Loop through each training image for the current person
for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)):
image = face_recognition.load_image_file(img_path)
face_bounding_boxes = face_recognition.face_locations(image)
if len(face_bounding_boxes) != 1:
# If there are no people (or too many people) in a training image, skip the image.
if verbose:
print("Image {} not suitable for training: {}".format(img_path, "Didn't find a face" if len(face_bounding_boxes) < 1 else "Found more than one face"))
else:
# Add face encoding for current image to the training set
X.append(face_recognition.face_encodings(image, known_face_locations=face_bounding_boxes)[0])
y.append(class_dir)
# Determine how many neighbors to use for weighting in the KNN classifier
if n_neighbors is None:
n_neighbors = int(round(math.sqrt(len(X))))
if verbose:
print("Chose n_neighbors automatically:", n_neighbors)
# Create and train the KNN classifier
knn_clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=knn_algo, weights='distance')
knn_clf.fit(X, y)
# Save the trained KNN classifier
if model_save_path is not None:
with open(model_save_path, 'wb') as f:
pickle.dump(knn_clf, f)
return knn_clf
def predict(X_img_path, knn_clf=None, model_path=None, distance_threshold=0.6):
"""
Recognizes faces in given image using a trained KNN classifier
:param X_img_path: path to image to be recognized
    :param knn_clf: (optional) a knn classifier object. if not specified, model_path must be specified.
    :param model_path: (optional) path to a pickled knn classifier. if not specified, knn_clf must be specified.
:param distance_threshold: (optional) distance threshold for face classification. the larger it is, the more chance
of mis-classifying an unknown person as a known one.
:return: a list of names and face locations for the recognized faces in the image: [(name, bounding box), ...].
For faces of unrecognized persons, the name 'unknown' will be returned.
"""
if not os.path.isfile(X_img_path) or os.path.splitext(X_img_path)[1][1:] not in ALLOWED_EXTENSIONS:
raise Exception("Invalid image path: {}".format(X_img_path))
if knn_clf is None and model_path is None:
raise Exception("Must supply knn classifier either thourgh knn_clf or model_path")
# Load a trained KNN model (if one was passed in)
if knn_clf is None:
with open(model_path, 'rb') as f:
knn_clf = pickle.load(f)
# Load image file and find face locations
X_img = face_recognition.load_image_file(X_img_path)
X_face_locations = face_recognition.face_locations(X_img)
# If no faces are found in the image, return an empty result.
if len(X_face_locations) == 0:
return []
    # Find encodings for faces in the test image
faces_encodings = face_recognition.face_encodings(X_img, known_face_locations=X_face_locations)
# Use the KNN model to find the best matches for the test face
closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=1)
are_matches = [closest_distances[0][i][0] <= distance_threshold for i in range(len(X_face_locations))]
# Predict classes and remove classifications that aren't within the threshold
return [(pred, loc) if rec else ("unknown", loc) for pred, loc, rec in zip(knn_clf.predict(faces_encodings), X_face_locations, are_matches)]
def show_prediction_labels_on_image(img_path, predictions):
"""
Shows the face recognition results visually.
:param img_path: path to image to be recognized
:param predictions: results of the predict function
:return:
"""
pil_image = Image.open(img_path).convert("RGB")
draw = ImageDraw.Draw(pil_image)
for name, (top, right, bottom, left) in predictions:
# Draw a box around the face using the Pillow module
draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))
# There's a bug in Pillow where it blows up with non-UTF-8 text
# when using the default bitmap font
name = name.encode("UTF-8")
# Draw a label with a name below the face
text_width, text_height = draw.textsize(name)
draw.rectangle(((left, bottom - text_height - 10), (right, bottom)), fill=(0, 0, 255), outline=(0, 0, 255))
draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255))
# Remove the drawing library from memory as per the Pillow docs
del draw
# Display the resulting image
pil_image.show()
if __name__ == "__main__":
# STEP 1: Train the KNN classifier and save it to disk
# Once the model is trained and saved, you can skip this step next time.
print("Training KNN classifier...")
classifier = train("knn_examples/train", model_save_path="trained_knn_model.clf", n_neighbors=2)
print("Training complete!")
# STEP 2: Using the trained classifier, make predictions for unknown images
for image_file in os.listdir("knn_examples/test"):
full_file_path = os.path.join("knn_examples/test", image_file)
print("Looking for faces in {}".format(image_file))
# Find all people in the image using a trained classifier model
# Note: You can pass in either a classifier file name or a classifier model instance
predictions = predict(full_file_path, model_path="trained_knn_model.clf")
# Print results on the console
for name, (top, right, bottom, left) in predictions:
print("- Found {} at ({}, {})".format(name, left, top))
# Display results overlaid on an image
show_prediction_labels_on_image(os.path.join("knn_examples/test", image_file), predictions)

View File

@ -0,0 +1,79 @@
# Train multiple images per person
# Find and recognize faces in an image using a SVC with scikit-learn
"""
Structure:
<test_image>.jpg
<train_dir>/
<person_1>/
<person_1_face-1>.jpg
<person_1_face-2>.jpg
.
.
<person_1_face-n>.jpg
<person_2>/
<person_2_face-1>.jpg
<person_2_face-2>.jpg
.
.
<person_2_face-n>.jpg
.
.
<person_n>/
<person_n_face-1>.jpg
<person_n_face-2>.jpg
.
.
<person_n_face-n>.jpg
"""
import face_recognition
from sklearn import svm
import os
# Training the SVC classifier
# The training data would be all the face encodings from all the known images and the labels are their names
encodings = []
names = []
# Training directory
train_dir = os.listdir('/train_dir/')
# Loop through each person in the training directory
for person in train_dir:
pix = os.listdir("/train_dir/" + person)
# Loop through each training image for the current person
for person_img in pix:
# Get the face encodings for the face in each image file
face = face_recognition.load_image_file("/train_dir/" + person + "/" + person_img)
face_bounding_boxes = face_recognition.face_locations(face)
#If training image contains exactly one face
if len(face_bounding_boxes) == 1:
face_enc = face_recognition.face_encodings(face)[0]
# Add face encoding for current image with corresponding label (name) to the training data
encodings.append(face_enc)
names.append(person)
else:
print(person + "/" + person_img + " was skipped and can't be used for training")
# Create and train the SVC classifier
clf = svm.SVC(gamma='scale')
clf.fit(encodings, names)
# Load the test image with unknown faces into a numpy array
test_image = face_recognition.load_image_file('test_image.jpg')
# Find all the faces in the test image using the default HOG-based model
face_locations = face_recognition.face_locations(test_image)
no = len(face_locations)
print("Number of faces detected: ", no)
# Predict all the faces in the test image using the trained classifier
print("Found:")
for i in range(no):
test_image_enc = face_recognition.face_encodings(test_image)[i]
name = clf.predict([test_image_enc])
print(*name)

View File

@ -0,0 +1,86 @@
import face_recognition
import cv2
# This is a demo of running face recognition on a video file and saving the results to a new video file.
#
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
# Open the input movie file
input_movie = cv2.VideoCapture("hamilton_clip.mp4")
length = int(input_movie.get(cv2.CAP_PROP_FRAME_COUNT))
# Create an output movie file (make sure resolution/frame rate matches input video!)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
output_movie = cv2.VideoWriter('output.avi', fourcc, 29.97, (640, 360))
# Load some sample pictures and learn how to recognize them.
lmm_image = face_recognition.load_image_file("lin-manuel-miranda.png")
lmm_face_encoding = face_recognition.face_encodings(lmm_image)[0]
al_image = face_recognition.load_image_file("alex-lacamoire.png")
al_face_encoding = face_recognition.face_encodings(al_image)[0]
known_faces = [
lmm_face_encoding,
al_face_encoding
]
# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
frame_number = 0
while True:
# Grab a single frame of video
ret, frame = input_movie.read()
frame_number += 1
# Quit when the input video file ends
if not ret:
break
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
rgb_frame = frame[:, :, ::-1]
# Find all the faces and face encodings in the current frame of video
face_locations = face_recognition.face_locations(rgb_frame)
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
face_names = []
for face_encoding in face_encodings:
# See if the face is a match for the known face(s)
match = face_recognition.compare_faces(known_faces, face_encoding, tolerance=0.50)
# If you had more than 2 faces, you could make this logic a lot prettier
# but I kept it simple for the demo
name = None
if match[0]:
name = "Lin-Manuel Miranda"
elif match[1]:
name = "Alex Lacamoire"
face_names.append(name)
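        # A tidier variant when there are many known faces (a sketch; the
        # known_face_names list here is hypothetical, not defined in this demo):
        #
        #     distances = face_recognition.face_distance(known_faces, face_encoding)
        #     best = distances.argmin()
        #     name = known_face_names[best] if distances[best] <= 0.50 else None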
# Label the results
for (top, right, bottom, left), name in zip(face_locations, face_names):
if not name:
continue
# Draw a box around the face
cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
# Draw a label with a name below the face
cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.5, (255, 255, 255), 1)
# Write the resulting image to the output video file
print("Writing frame {} / {}".format(frame_number, length))
output_movie.write(frame)
# All done!
input_movie.release()
cv2.destroyAllWindows()

View File

@ -0,0 +1,79 @@
import face_recognition
import cv2
import numpy as np
# This is a super simple (but slow) example of running face recognition on live video from your webcam.
# There's a second example that's a little more complicated but runs faster.
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)
# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
# Create arrays of known face encodings and their names
known_face_encodings = [
obama_face_encoding,
biden_face_encoding
]
known_face_names = [
"Barack Obama",
"Joe Biden"
]
while True:
# Grab a single frame of video
ret, frame = video_capture.read()
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
rgb_frame = frame[:, :, ::-1]
    # Find all the faces and face encodings in the frame of video
face_locations = face_recognition.face_locations(rgb_frame)
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
# Loop through each face in this frame of video
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
# See if the face is a match for the known face(s)
matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
name = "Unknown"
# If a match was found in known_face_encodings, just use the first one.
# if True in matches:
# first_match_index = matches.index(True)
# name = known_face_names[first_match_index]
# Or instead, use the known face with the smallest distance to the new face
face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
best_match_index = np.argmin(face_distances)
if matches[best_match_index]:
name = known_face_names[best_match_index]
# Draw a box around the face
cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
# Draw a label with a name below the face
cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
# Display the resulting image
cv2.imshow('Video', frame)
# Hit 'q' on the keyboard to quit!
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

View File

@ -0,0 +1,104 @@
import face_recognition
import cv2
import numpy as np
# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
# other example, but it includes some basic performance tweaks to make things run a lot faster:
# 1. Process each video frame at 1/4 resolution (though still display it at full resolution)
# 2. Only detect faces in every other frame of video.
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)
# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
# Create arrays of known face encodings and their names
known_face_encodings = [
obama_face_encoding,
biden_face_encoding
]
known_face_names = [
"Barack Obama",
"Joe Biden"
]
# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True
while True:
# Grab a single frame of video
ret, frame = video_capture.read()
# Only process every other frame of video to save time
if process_this_frame:
# Resize frame of video to 1/4 size for faster face recognition processing
small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
rgb_small_frame = small_frame[:, :, ::-1]
# Find all the faces and face encodings in the current frame of video
face_locations = face_recognition.face_locations(rgb_small_frame)
face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
face_names = []
for face_encoding in face_encodings:
# See if the face is a match for the known face(s)
matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
name = "Unknown"
# # If a match was found in known_face_encodings, just use the first one.
# if True in matches:
# first_match_index = matches.index(True)
# name = known_face_names[first_match_index]
# Or instead, use the known face with the smallest distance to the new face
face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
best_match_index = np.argmin(face_distances)
if matches[best_match_index]:
name = known_face_names[best_match_index]
face_names.append(name)
process_this_frame = not process_this_frame
# Display the results
for (top, right, bottom, left), name in zip(face_locations, face_names):
# Scale back up face locations since the frame we detected in was scaled to 1/4 size
top *= 4
right *= 4
bottom *= 4
left *= 4
# Draw a box around the face
cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
# Draw a label with a name below the face
cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
# Display the resulting image
cv2.imshow('Video', frame)
# Hit 'q' on the keyboard to quit!
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

View File

@ -0,0 +1,212 @@
import face_recognition
import cv2
from multiprocessing import Process, Manager, cpu_count, set_start_method
import time
import numpy
import threading
import platform
# This is a little bit complicated (but fast) example of running face recognition on live video from your webcam.
# This example is using multiprocess.
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
# Get next worker's id
def next_id(current_id, worker_num):
if current_id == worker_num:
return 1
else:
return current_id + 1
# Get previous worker's id
def prev_id(current_id, worker_num):
if current_id == 1:
return worker_num
else:
return current_id - 1
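# Together these implement a simple ring of worker ids 1..worker_num: the capture
# thread fills read_frame_list slots in round-robin order, while Global.read_num and
# Global.write_num force each worker to take and publish frames in that same order,
# so the displayed frames stay in sequence.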
# A subprocess used to capture frames.
def capture(read_frame_list, Global, worker_num):
# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)
# video_capture.set(3, 640) # Width of the frames in the video stream.
# video_capture.set(4, 480) # Height of the frames in the video stream.
# video_capture.set(5, 30) # Frame rate.
print("Width: %d, Height: %d, FPS: %d" % (video_capture.get(3), video_capture.get(4), video_capture.get(5)))
while not Global.is_exit:
# If it's time to read a frame
if Global.buff_num != next_id(Global.read_num, worker_num):
# Grab a single frame of video
ret, frame = video_capture.read()
read_frame_list[Global.buff_num] = frame
Global.buff_num = next_id(Global.buff_num, worker_num)
else:
time.sleep(0.01)
# Release webcam
video_capture.release()
# Worker subprocesses used to process frames.
def process(worker_id, read_frame_list, write_frame_list, Global, worker_num):
known_face_encodings = Global.known_face_encodings
known_face_names = Global.known_face_names
while not Global.is_exit:
# Wait to read
while Global.read_num != worker_id or Global.read_num != prev_id(Global.buff_num, worker_num):
# If the user has requested to end the app, then stop waiting for webcam frames
if Global.is_exit:
break
time.sleep(0.01)
# Delay to make the video look smoother
time.sleep(Global.frame_delay)
# Read a single frame from frame list
frame_process = read_frame_list[worker_id]
# Expect next worker to read frame
Global.read_num = next_id(Global.read_num, worker_num)
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
rgb_frame = frame_process[:, :, ::-1]
        # Find all the faces and face encodings in the frame of video (this is the most time-consuming step)
face_locations = face_recognition.face_locations(rgb_frame)
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
# Loop through each face in this frame of video
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
# See if the face is a match for the known face(s)
matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
name = "Unknown"
# If a match was found in known_face_encodings, just use the first one.
if True in matches:
first_match_index = matches.index(True)
name = known_face_names[first_match_index]
# Draw a box around the face
cv2.rectangle(frame_process, (left, top), (right, bottom), (0, 0, 255), 2)
# Draw a label with a name below the face
cv2.rectangle(frame_process, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame_process, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
# Wait to write
while Global.write_num != worker_id:
time.sleep(0.01)
# Send frame to global
write_frame_list[worker_id] = frame_process
# Expect next worker to write frame
Global.write_num = next_id(Global.write_num, worker_num)
if __name__ == '__main__':
# Fix Bug on MacOS
if platform.system() == 'Darwin':
set_start_method('forkserver')
# Global variables
Global = Manager().Namespace()
Global.buff_num = 1
Global.read_num = 1
Global.write_num = 1
Global.frame_delay = 0
Global.is_exit = False
read_frame_list = Manager().dict()
write_frame_list = Manager().dict()
    # Number of workers (subprocesses used to process frames)
if cpu_count() > 2:
worker_num = cpu_count() - 1 # 1 for capturing frames
else:
worker_num = 2
# Subprocess list
p = []
    # Create a thread to capture frames (using a subprocess here would crash on Mac)
p.append(threading.Thread(target=capture, args=(read_frame_list, Global, worker_num,)))
p[0].start()
# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
# Create arrays of known face encodings and their names
Global.known_face_encodings = [
obama_face_encoding,
biden_face_encoding
]
Global.known_face_names = [
"Barack Obama",
"Joe Biden"
]
# Create workers
for worker_id in range(1, worker_num + 1):
p.append(Process(target=process, args=(worker_id, read_frame_list, write_frame_list, Global, worker_num,)))
p[worker_id].start()
# Start to show video
last_num = 1
fps_list = []
tmp_time = time.time()
while not Global.is_exit:
while Global.write_num != last_num:
last_num = int(Global.write_num)
# Calculate fps
delay = time.time() - tmp_time
tmp_time = time.time()
fps_list.append(delay)
if len(fps_list) > 5 * worker_num:
fps_list.pop(0)
fps = len(fps_list) / numpy.sum(fps_list)
print("fps: %.2f" % fps)
            # Calculate the frame delay, in order to make the video look smoother.
            # When fps is high, use a smaller ratio, or fps will be capped at a lower value.
            # A larger ratio makes the video look smoother, but makes it harder for fps to climb.
            # A smaller ratio allows higher fps, but the video looks less smooth.
            # The ratios below were tuned through repeated testing.
if fps < 6:
Global.frame_delay = (1 / fps) * 0.75
elif fps < 20:
Global.frame_delay = (1 / fps) * 0.5
elif fps < 30:
Global.frame_delay = (1 / fps) * 0.25
else:
Global.frame_delay = 0
# Display the resulting image
cv2.imshow('Video', write_frame_list[prev_id(Global.write_num, worker_num)])
# Hit 'q' on the keyboard to quit!
if cv2.waitKey(1) & 0xFF == ord('q'):
Global.is_exit = True
break
time.sleep(0.01)
# Quit
cv2.destroyAllWindows()

View File

@ -0,0 +1,214 @@
"""
This is an example of using the k-nearest-neighbors (KNN) algorithm for face recognition.
When should I use this example?
This example is useful when you wish to recognize a large set of known people,
and make a prediction for an unknown person in a feasible computation time.
Algorithm Description:
The knn classifier is first trained on a set of labeled (known) faces and can then predict the person
in a live stream by finding the k most similar faces (images with the closest face features under euclidean distance)
in its training set, and performing a majority vote (possibly weighted) on their label.
For example, if k=3, and the three closest face images to the given image in the training set are one image of Biden
and two images of Obama, the result would be 'Obama'.
* This implementation uses a weighted vote, such that the votes of closer-neighbors are weighted more heavily.
Usage:
1. Prepare a set of images of the known people you want to recognize. Organize the images in a single directory
with a sub-directory for each known person.
2. Then, call the 'train' function with the appropriate parameters. Make sure to pass in the 'model_save_path' if you
want to save the model to disk so you can re-use the model without having to re-train it.
3. Call 'predict' and pass in your trained model to recognize the people in a live video stream.
NOTE: This example requires scikit-learn, opencv and numpy to be installed! You can install it with pip:
$ pip3 install scikit-learn
$ pip3 install numpy
$ pip3 install opencv-contrib-python
"""
import cv2
import math
from sklearn import neighbors
import os
import os.path
import pickle
from PIL import Image, ImageDraw
import face_recognition
from face_recognition.face_recognition_cli import image_files_in_folder
import numpy as np
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'JPG'}
def train(train_dir, model_save_path=None, n_neighbors=None, knn_algo='ball_tree', verbose=False):
"""
Trains a k-nearest neighbors classifier for face recognition.
:param train_dir: directory that contains a sub-directory for each known person, with its name.
(View in source code to see train_dir example tree structure)
Structure:
<train_dir>/
<person1>/
<somename1>.jpeg
<somename2>.jpeg
...
<person2>/
<somename1>.jpeg
<somename2>.jpeg
...
:param model_save_path: (optional) path to save model on disk
:param n_neighbors: (optional) number of neighbors to weigh in classification. Chosen automatically if not specified
    :param knn_algo: (optional) underlying data structure to support knn. Default is ball_tree
:param verbose: verbosity of training
:return: returns knn classifier that was trained on the given data.
"""
X = []
y = []
# Loop through each person in the training set
for class_dir in os.listdir(train_dir):
if not os.path.isdir(os.path.join(train_dir, class_dir)):
continue
# Loop through each training image for the current person
for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)):
image = face_recognition.load_image_file(img_path)
face_bounding_boxes = face_recognition.face_locations(image)
if len(face_bounding_boxes) != 1:
# If there are no people (or too many people) in a training image, skip the image.
if verbose:
print("Image {} not suitable for training: {}".format(img_path, "Didn't find a face" if len(face_bounding_boxes) < 1 else "Found more than one face"))
else:
# Add face encoding for current image to the training set
X.append(face_recognition.face_encodings(image, known_face_locations=face_bounding_boxes)[0])
y.append(class_dir)
# Determine how many neighbors to use for weighting in the KNN classifier
if n_neighbors is None:
n_neighbors = int(round(math.sqrt(len(X))))
if verbose:
print("Chose n_neighbors automatically:", n_neighbors)
# Create and train the KNN classifier
knn_clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=knn_algo, weights='distance')
knn_clf.fit(X, y)
# Save the trained KNN classifier
if model_save_path is not None:
with open(model_save_path, 'wb') as f:
pickle.dump(knn_clf, f)
return knn_clf
def predict(X_frame, knn_clf=None, model_path=None, distance_threshold=0.5):
"""
Recognizes faces in given image using a trained KNN classifier
:param X_frame: frame to do the prediction on.
    :param knn_clf: (optional) a knn classifier object. if not specified, model_path must be specified.
    :param model_path: (optional) path to a pickled knn classifier. if not specified, knn_clf must be specified.
:param distance_threshold: (optional) distance threshold for face classification. the larger it is, the more chance
of mis-classifying an unknown person as a known one.
:return: a list of names and face locations for the recognized faces in the image: [(name, bounding box), ...].
For faces of unrecognized persons, the name 'unknown' will be returned.
"""
if knn_clf is None and model_path is None:
raise Exception("Must supply knn classifier either thourgh knn_clf or model_path")
# Load a trained KNN model (if one was passed in)
if knn_clf is None:
with open(model_path, 'rb') as f:
knn_clf = pickle.load(f)
X_face_locations = face_recognition.face_locations(X_frame)
# If no faces are found in the image, return an empty result.
if len(X_face_locations) == 0:
return []
# Find encodings for faces in the test image
faces_encodings = face_recognition.face_encodings(X_frame, known_face_locations=X_face_locations)
# Use the KNN model to find the best matches for the test face
closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=1)
are_matches = [closest_distances[0][i][0] <= distance_threshold for i in range(len(X_face_locations))]
# Predict classes and remove classifications that aren't within the threshold
return [(pred, loc) if rec else ("unknown", loc) for pred, loc, rec in zip(knn_clf.predict(faces_encodings), X_face_locations, are_matches)]
def show_prediction_labels_on_image(frame, predictions):
"""
Shows the face recognition results visually.
:param frame: frame to show the predictions on
:param predictions: results of the predict function
    :return: an image in OpenCV-compatible format, ready to be shown with cv2.imshow
"""
pil_image = Image.fromarray(frame)
draw = ImageDraw.Draw(pil_image)
for name, (top, right, bottom, left) in predictions:
# enlarge the predictions for the full sized image.
top *= 2
right *= 2
bottom *= 2
left *= 2
# Draw a box around the face using the Pillow module
draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))
# There's a bug in Pillow where it blows up with non-UTF-8 text
# when using the default bitmap font
name = name.encode("UTF-8")
# Draw a label with a name below the face
text_width, text_height = draw.textsize(name)
draw.rectangle(((left, bottom - text_height - 10), (right, bottom)), fill=(0, 0, 255), outline=(0, 0, 255))
draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255))
# Remove the drawing library from memory as per the Pillow docs.
del draw
# Save image in open-cv format to be able to show it.
opencvimage = np.array(pil_image)
return opencvimage
if __name__ == "__main__":
print("Training KNN classifier...")
classifier = train("knn_examples/train", model_save_path="trained_knn_model.clf", n_neighbors=2)
print("Training complete!")
# process one frame in every 30 frames for speed
process_this_frame = 29
print('Setting cameras up...')
# multiple cameras can be used with the format url = 'http://username:password@camera_ip:port'
url = 'http://admin:admin@192.168.0.106:8081/'
cap = cv2.VideoCapture(url)
while True:
ret, frame = cap.read()
if ret:
# Different resizing options can be chosen based on desired program runtime.
# Image resizing for more stable streaming
img = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)
process_this_frame = process_this_frame + 1
if process_this_frame % 30 == 0:
predictions = predict(img, model_path="trained_knn_model.clf")
frame = show_prediction_labels_on_image(frame, predictions)
cv2.imshow('camera', frame)
if ord('q') == cv2.waitKey(10):
cap.release()
cv2.destroyAllWindows()
exit(0)
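A minimal sketch of running the same trained classifier on a single still image instead of a camera stream; the file name test.jpg is a placeholder and everything else reuses the functions defined above. predict is given a half-size copy because show_prediction_labels_on_image scales every box back up by a factor of 2.

# Sketch (not part of the original demo): classify faces in one still image.
frame = cv2.imread("test.jpg")  # placeholder file name
small = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)  # match the fx=0.5 resize used above
predictions = predict(small, model_path="trained_knn_model.clf")
for name, (top, right, bottom, left) in predictions:
    print("Found {} at ({}, {})".format(name, left, top))
cv2.imwrite("labeled_test.jpg", show_prediction_labels_on_image(frame, predictions))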

View File

@ -0,0 +1,48 @@
# This is a demo of running face recognition on a Raspberry Pi.
# This program will print out the names of anyone it recognizes to the console.
# To run this, you need a Raspberry Pi 2 (or greater) with face_recognition and
# the picamera[array] module installed.
# You can follow these installation instructions to get your RPi set up:
# https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65
import face_recognition
import picamera
import numpy as np
# Get a reference to the Raspberry Pi camera.
# If this fails, make sure you have a camera connected to the RPi and that you
# enabled your camera in raspi-config and rebooted first.
camera = picamera.PiCamera()
camera.resolution = (320, 240)
output = np.empty((240, 320, 3), dtype=np.uint8)
# Load a sample picture and learn how to recognize it.
print("Loading known face image(s)")
obama_image = face_recognition.load_image_file("obama_small.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# Initialize some variables
face_locations = []
face_encodings = []
while True:
print("Capturing image.")
# Grab a single frame of video from the RPi camera as a numpy array
camera.capture(output, format="rgb")
# Find all the faces and face encodings in the current frame of video
face_locations = face_recognition.face_locations(output)
print("Found {} faces in image.".format(len(face_locations)))
face_encodings = face_recognition.face_encodings(output, face_locations)
# Loop over each face found in the frame to see if it's someone we know.
for face_encoding in face_encodings:
# See if the face is a match for the known face(s)
match = face_recognition.compare_faces([obama_face_encoding], face_encoding)
name = "<Unknown Person>"
if match[0]:
name = "Barack Obama"
print("I see someone named {}!".format(name))

View File

@ -0,0 +1,46 @@
# This is a demo of running face recognition on a Raspberry Pi.
# It prints the number of detected faces and their identities to the console.
# You need a Raspberry Pi 2 (or greater) with face_recognition installed and a picamera camera attached;
# make sure the picamera module is installed (it usually comes preinstalled on the Pi).
# You can follow these instructions to set up your Raspberry Pi:
# https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65
import face_recognition
import picamera
import numpy as np
# You need to enable the camera in sudo raspi-config first
camera = picamera.PiCamera()
camera.resolution = (320, 240)
output = np.empty((240, 320, 3), dtype=np.uint8)
# Load a sample picture and learn how to recognize it
print("Loading known face image(s)")
obama_image = face_recognition.load_image_file("obama_small.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# Initialize some variables
face_locations = []
face_encodings = []
while True:
print("Capturing image.")
# Grab a single frame of video from the picamera as a numpy array
camera.capture(output, format="rgb")
# Find all the face locations and face encodings in the current frame
face_locations = face_recognition.face_locations(output)
print("Found {} faces in image.".format(len(face_locations)))
face_encodings = face_recognition.face_encodings(output, face_locations)
# Compare each face found in the frame against the known sample image(s)
for face_encoding in face_encodings:
# See if the face is a match for the known face (Obama)
match = face_recognition.compare_faces([obama_face_encoding], face_encoding)
name = "<Unknown Person>"
if match[0]:
name = "Barack Obama"
print("I see someone named {}!".format(name))

View File

@ -0,0 +1,55 @@
import face_recognition
import cv2
# This code finds all faces in a list of images using the CNN model.
#
# This demo is for the _special case_ when you need to find faces in LOTS of images very quickly and all the images
# are the exact same size. This is common in video processing applications where you have lots of video frames
# to process.
#
# If you are processing a lot of images and using a GPU with CUDA, batch processing can be ~3x faster than processing
# single images at a time. But if you aren't using a GPU, then batch processing isn't going to be very helpful.
#
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read the video file.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
# Open video file
video_capture = cv2.VideoCapture("short_hamilton_clip.mp4")
frames = []
frame_count = 0
while video_capture.isOpened():
# Grab a single frame of video
ret, frame = video_capture.read()
# Bail out when the video file ends
if not ret:
break
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
frame = frame[:, :, ::-1]
# Save each frame of the video to a list
frame_count += 1
frames.append(frame)
# Every 128 frames (the default batch size), batch process the list of frames to find faces
if len(frames) == 128:
batch_of_face_locations = face_recognition.batch_face_locations(frames, number_of_times_to_upsample=0)
# Now let's list all the faces we found in all 128 frames
for frame_number_in_batch, face_locations in enumerate(batch_of_face_locations):
number_of_faces_in_frame = len(face_locations)
frame_number = frame_count - 128 + frame_number_in_batch
print("I found {} face(s) in frame #{}.".format(number_of_faces_in_frame, frame_number))
for face_location in face_locations:
# Print the location of each face in this frame
top, right, bottom, left = face_location
print(" - A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
# Clear the frames array to start the next batch
frames = []
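One caveat: the loop above only processes frames in complete batches of 128, so any frames left over when the video ends are silently dropped. A minimal sketch of flushing that tail after the loop (an addition for illustration, not part of the original example):

# Process any leftover frames (fewer than 128) once the video has ended.
if frames:
    batch_of_face_locations = face_recognition.batch_face_locations(frames, number_of_times_to_upsample=0)
    for frame_number_in_batch, face_locations in enumerate(batch_of_face_locations):
        frame_number = frame_count - len(frames) + frame_number_in_batch
        print("I found {} face(s) in frame #{}.".format(len(face_locations), frame_number))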

View File

@ -0,0 +1,23 @@
from PIL import Image
import face_recognition
# Load the jpg file into a numpy array
image = face_recognition.load_image_file("biden.jpg")
# Find all the faces in the image using the default HOG-based model.
# This method is fairly accurate, but not as accurate as the CNN model and not GPU accelerated.
# See also: find_faces_in_picture_cnn.py
face_locations = face_recognition.face_locations(image)
print("I found {} face(s) in this photograph.".format(len(face_locations)))
for face_location in face_locations:
# Print the location of each face in this image
top, right, bottom, left = face_location
print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
# You can access the actual face itself like this:
face_image = image[top:bottom, left:right]
pil_image = Image.fromarray(face_image)
pil_image.show()

View File

@ -0,0 +1,25 @@
from PIL import Image
import face_recognition
# Load the jpg file into a numpy array
image = face_recognition.load_image_file("biden.jpg")
# Find all the faces in the image using a pre-trained convolutional neural network.
# This method is more accurate than the default HOG model, but it's slower
# unless you have an nvidia GPU and dlib compiled with CUDA extensions. But if you do,
# this will use GPU acceleration and perform well.
# See also: find_faces_in_picture.py
face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=0, model="cnn")
print("I found {} face(s) in this photograph.".format(len(face_locations)))
for face_location in face_locations:
# Print the location of each face in this image
top, right, bottom, left = face_location
print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
# You can access the actual face itself like this:
face_image = image[top:bottom, left:right]
pil_image = Image.fromarray(face_image)
pil_image.show()

View File

@ -0,0 +1,27 @@
from PIL import Image, ImageDraw
import face_recognition
# Load the jpg file into a numpy array
image = face_recognition.load_image_file("two_people.jpg")
# Find all facial features in all the faces in the image
face_landmarks_list = face_recognition.face_landmarks(image)
print("I found {} face(s) in this photograph.".format(len(face_landmarks_list)))
# Create a PIL imagedraw object so we can draw on the picture
pil_image = Image.fromarray(image)
d = ImageDraw.Draw(pil_image)
for face_landmarks in face_landmarks_list:
# Print the location of each facial feature in this image
for facial_feature in face_landmarks.keys():
print("The {} in this face has the following points: {}".format(facial_feature, face_landmarks[facial_feature]))
# Let's trace out each facial feature in the image with a line!
for facial_feature in face_landmarks.keys():
d.line(face_landmarks[facial_feature], width=5)
# Show the picture
pil_image.show()

View File

@ -0,0 +1,73 @@
import face_recognition
from PIL import Image, ImageDraw
import numpy as np
# This is an example of running face recognition on a single image
# and drawing a box around each person that was identified.
# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
# Create arrays of known face encodings and their names
known_face_encodings = [
obama_face_encoding,
biden_face_encoding
]
known_face_names = [
"Barack Obama",
"Joe Biden"
]
# Load an image with an unknown face
unknown_image = face_recognition.load_image_file("two_people.jpg")
# Find all the faces and face encodings in the unknown image
face_locations = face_recognition.face_locations(unknown_image)
face_encodings = face_recognition.face_encodings(unknown_image, face_locations)
# Convert the image to a PIL-format image so that we can draw on top of it with the Pillow library
# See http://pillow.readthedocs.io/ for more about PIL/Pillow
pil_image = Image.fromarray(unknown_image)
# Create a Pillow ImageDraw Draw instance to draw with
draw = ImageDraw.Draw(pil_image)
# Loop through each face found in the unknown image
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
# See if the face is a match for the known face(s)
matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
name = "Unknown"
# If a match was found in known_face_encodings, just use the first one.
# if True in matches:
# first_match_index = matches.index(True)
# name = known_face_names[first_match_index]
# Or instead, use the known face with the smallest distance to the new face
face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
best_match_index = np.argmin(face_distances)
if matches[best_match_index]:
name = known_face_names[best_match_index]
# Draw a box around the face using the Pillow module
draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))
# Draw a label with a name below the face
text_width, text_height = draw.textsize(name)
draw.rectangle(((left, bottom - text_height - 10), (right, bottom)), fill=(0, 0, 255), outline=(0, 0, 255))
draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255))
# Remove the drawing library from memory as per the Pillow docs
del draw
# Display the resulting image
pil_image.show()
# You can also save a copy of the new image to disk if you want by uncommenting this line
# pil_image.save("image_with_boxes.jpg")

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 756 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 224 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 130 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 77 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 546 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 187 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 345 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 264 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 111 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 68 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 273 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 180 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 51 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 155 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 492 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 378 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 36 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 100 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 197 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 273 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 180 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 33 KiB

View File

@ -0,0 +1,29 @@
import face_recognition
# Load the jpg files into numpy arrays
biden_image = face_recognition.load_image_file("biden.jpg")
obama_image = face_recognition.load_image_file("obama.jpg")
unknown_image = face_recognition.load_image_file("obama2.jpg")
# Get the face encodings for each face in each image file
# Since there could be more than one face in each image, it returns a list of encodings.
# But since I know each image only has one face, I only care about the first encoding in each image, so I grab index 0.
try:
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
unknown_face_encoding = face_recognition.face_encodings(unknown_image)[0]
except IndexError:
print("I wasn't able to locate any faces in at least one of the images. Check the image files. Aborting...")
quit()
known_faces = [
biden_face_encoding,
obama_face_encoding
]
# results is an array of True/False telling if the unknown face matched anyone in the known_faces array
results = face_recognition.compare_faces(known_faces, unknown_face_encoding)
print("Is the unknown face a picture of Biden? {}".format(results[0]))
print("Is the unknown face a picture of Obama? {}".format(results[1]))
print("Is the unknown face a new person that we've never seen before? {}".format(not True in results))

Binary file not shown.

After

Width:  |  Height:  |  Size: 476 KiB

View File

@ -0,0 +1,113 @@
# This is a _very simple_ example of a web service that recognizes faces in uploaded images.
# Upload an image file and it will check if the image contains a picture of Barack Obama.
# The result is returned as json. For example:
#
# $ curl -XPOST -F "file=@obama2.jpg" http://127.0.0.1:5001
#
# Returns:
#
# {
# "face_found_in_image": true,
# "is_picture_of_obama": true
# }
#
# This example is based on the Flask file upload example: http://flask.pocoo.org/docs/0.12/patterns/fileuploads/
# NOTE: This example requires flask to be installed! You can install it with pip:
# $ pip3 install flask
import face_recognition
from flask import Flask, jsonify, request, redirect
# Only allow uploads with these common image file extensions
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}
app = Flask(__name__)
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
@app.route('/', methods=['GET', 'POST'])
def upload_image():
# Check if a valid image file was uploaded
if request.method == 'POST':
if 'file' not in request.files:
return redirect(request.url)
file = request.files['file']
if file.filename == '':
return redirect(request.url)
if file and allowed_file(file.filename):
# The image file seems valid! Detect faces and return the result.
return detect_faces_in_image(file)
# If no valid image file was uploaded, show the file upload form:
return '''
<!doctype html>
<title>Is this a picture of Obama?</title>
<h1>Upload a picture and see if it's a picture of Obama!</h1>
<form method="POST" enctype="multipart/form-data">
<input type="file" name="file">
<input type="submit" value="Upload">
</form>
'''
def detect_faces_in_image(file_stream):
# Pre-calculated face encoding of Obama generated with face_recognition.face_encodings(img)
known_face_encoding = [-0.09634063, 0.12095481, -0.00436332, -0.07643753, 0.0080383,
0.01902981, -0.07184699, -0.09383309, 0.18518871, -0.09588896,
0.23951106, 0.0986533 , -0.22114635, -0.1363683 , 0.04405268,
0.11574756, -0.19899382, -0.09597053, -0.11969153, -0.12277931,
0.03416885, -0.00267565, 0.09203379, 0.04713435, -0.12731361,
-0.35371891, -0.0503444 , -0.17841317, -0.00310897, -0.09844551,
-0.06910533, -0.00503746, -0.18466514, -0.09851682, 0.02903969,
-0.02174894, 0.02261871, 0.0032102 , 0.20312519, 0.02999607,
-0.11646006, 0.09432904, 0.02774341, 0.22102901, 0.26725179,
0.06896867, -0.00490024, -0.09441824, 0.11115381, -0.22592428,
0.06230862, 0.16559327, 0.06232892, 0.03458837, 0.09459756,
-0.18777156, 0.00654241, 0.08582542, -0.13578284, 0.0150229 ,
0.00670836, -0.08195844, -0.04346499, 0.03347827, 0.20310158,
0.09987706, -0.12370517, -0.06683611, 0.12704916, -0.02160804,
0.00984683, 0.00766284, -0.18980607, -0.19641446, -0.22800779,
0.09010898, 0.39178532, 0.18818057, -0.20875394, 0.03097027,
-0.21300618, 0.02532415, 0.07938635, 0.01000703, -0.07719778,
-0.12651891, -0.04318593, 0.06219772, 0.09163868, 0.05039065,
-0.04922386, 0.21839413, -0.02394437, 0.06173781, 0.0292527 ,
0.06160797, -0.15553983, -0.02440624, -0.17509389, -0.0630486 ,
0.01428208, -0.03637431, 0.03971229, 0.13983178, -0.23006812,
0.04999552, 0.0108454 , -0.03970895, 0.02501768, 0.08157793,
-0.03224047, -0.04502571, 0.0556995 , -0.24374914, 0.25514284,
0.24795187, 0.04060191, 0.17597422, 0.07966681, 0.01920104,
-0.01194376, -0.02300822, -0.17204897, -0.0596558 , 0.05307484,
0.07417042, 0.07126575, 0.00209804]
# Load the uploaded image file
img = face_recognition.load_image_file(file_stream)
# Get face encodings for any faces in the uploaded image
unknown_face_encodings = face_recognition.face_encodings(img)
face_found = False
is_obama = False
if len(unknown_face_encodings) > 0:
face_found = True
# See if the first face in the uploaded image matches the known face of Obama
match_results = face_recognition.compare_faces([known_face_encoding], unknown_face_encodings[0])
if match_results[0]:
is_obama = True
# Return the result as json
result = {
"face_found_in_image": face_found,
"is_picture_of_obama": is_obama
}
return jsonify(result)
if __name__ == "__main__":
app.run(host='0.0.0.0', port=5001, debug=True)
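For a quick test without curl, Flask's built-in test client can post an image to the endpoint. A minimal sketch, assuming obama2.jpg is present as in the curl example above:

# Sketch: exercise the endpoint with Flask's test client instead of curl.
with app.test_client() as client:
    with open("obama2.jpg", "rb") as f:
        rv = client.post("/", data={"file": (f, "obama2.jpg")},
                         content_type="multipart/form-data")
    print(rv.get_json())  # e.g. {"face_found_in_image": true, "is_picture_of_obama": true}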

View File

@ -0,0 +1,110 @@
# This is a _very simple_ example of a web service that runs face recognition on an uploaded image.
# The backend checks whether the picture is of Barack Obama and returns the result as json key/value pairs.
# For example, running:
# $ curl -XPOST -F "file=@obama2.jpg" http://127.0.0.1:5001
# returns:
# {
# "face_found_in_image": true,
# "is_picture_of_obama": true
# }
#
# This example is based on the Flask file upload example: http://flask.pocoo.org/docs/0.12/patterns/fileuploads/
# NOTE: This example requires Flask to be installed. You can install it with:
# $ pip3 install flask
import face_recognition
from flask import Flask, jsonify, request, redirect
# Only allow uploads with these common image file extensions
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}
app = Flask(__name__)
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
@app.route('/', methods=['GET', 'POST'])
def upload_image():
# Check if a valid image file was uploaded
if request.method == 'POST':
if 'file' not in request.files:
return redirect(request.url)
file = request.files['file']
if file.filename == '':
return redirect(request.url)
if file and allowed_file(file.filename):
# The image was uploaded successfully; detect the faces in it and return the result
return detect_faces_in_image(file)
# If no valid image file was uploaded, show the file upload form:
return '''
<!doctype html>
<title>Is this a picture of Obama?</title>
<h1>Upload a picture and see if it's a picture of Obama!</h1>
<form method="POST" enctype="multipart/form-data">
<input type="file" name="file">
<input type="submit" value="Upload">
</form>
'''
def detect_faces_in_image(file_stream):
# Pre-calculated face encoding of Obama, generated in advance with face_recognition.face_encodings(img)
known_face_encoding = [-0.09634063, 0.12095481, -0.00436332, -0.07643753, 0.0080383,
0.01902981, -0.07184699, -0.09383309, 0.18518871, -0.09588896,
0.23951106, 0.0986533 , -0.22114635, -0.1363683 , 0.04405268,
0.11574756, -0.19899382, -0.09597053, -0.11969153, -0.12277931,
0.03416885, -0.00267565, 0.09203379, 0.04713435, -0.12731361,
-0.35371891, -0.0503444 , -0.17841317, -0.00310897, -0.09844551,
-0.06910533, -0.00503746, -0.18466514, -0.09851682, 0.02903969,
-0.02174894, 0.02261871, 0.0032102 , 0.20312519, 0.02999607,
-0.11646006, 0.09432904, 0.02774341, 0.22102901, 0.26725179,
0.06896867, -0.00490024, -0.09441824, 0.11115381, -0.22592428,
0.06230862, 0.16559327, 0.06232892, 0.03458837, 0.09459756,
-0.18777156, 0.00654241, 0.08582542, -0.13578284, 0.0150229 ,
0.00670836, -0.08195844, -0.04346499, 0.03347827, 0.20310158,
0.09987706, -0.12370517, -0.06683611, 0.12704916, -0.02160804,
0.00984683, 0.00766284, -0.18980607, -0.19641446, -0.22800779,
0.09010898, 0.39178532, 0.18818057, -0.20875394, 0.03097027,
-0.21300618, 0.02532415, 0.07938635, 0.01000703, -0.07719778,
-0.12651891, -0.04318593, 0.06219772, 0.09163868, 0.05039065,
-0.04922386, 0.21839413, -0.02394437, 0.06173781, 0.0292527 ,
0.06160797, -0.15553983, -0.02440624, -0.17509389, -0.0630486 ,
0.01428208, -0.03637431, 0.03971229, 0.13983178, -0.23006812,
0.04999552, 0.0108454 , -0.03970895, 0.02501768, 0.08157793,
-0.03224047, -0.04502571, 0.0556995 , -0.24374914, 0.25514284,
0.24795187, 0.04060191, 0.17597422, 0.07966681, 0.01920104,
-0.01194376, -0.02300822, -0.17204897, -0.0596558 , 0.05307484,
0.07417042, 0.07126575, 0.00209804]
# Load the image uploaded by the user
img = face_recognition.load_image_file(file_stream)
# Get face encodings for any faces in the uploaded image
unknown_face_encodings = face_recognition.face_encodings(img)
face_found = False
is_obama = False
if len(unknown_face_encodings) > 0:
face_found = True
# See if the first face in the uploaded image matches the known face of Obama
match_results = face_recognition.compare_faces([known_face_encoding], unknown_face_encodings[0])
if match_results[0]:
is_obama = True
# Return the recognition result as json key/value pairs
result = {
"face_found_in_image": face_found,
"is_picture_of_obama": is_obama
}
return jsonify(result)
if __name__ == "__main__":
app.run(host='0.0.0.0', port=5001, debug=True)

View File

@ -0,0 +1,7 @@
# -*- coding: utf-8 -*-
__author__ = """Adam Geitgey"""
__email__ = 'ageitgey@gmail.com'
__version__ = '1.4.0'
from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance

View File

@ -0,0 +1,226 @@
# -*- coding: utf-8 -*-
import PIL.Image
import dlib
import numpy as np
from PIL import ImageFile
try:
import face_recognition_models
except Exception:
print("Please install `face_recognition_models` with this command before using `face_recognition`:\n")
print("pip install git+https://github.com/ageitgey/face_recognition_models")
quit()
ImageFile.LOAD_TRUNCATED_IMAGES = True
face_detector = dlib.get_frontal_face_detector()
predictor_68_point_model = face_recognition_models.pose_predictor_model_location()
pose_predictor_68_point = dlib.shape_predictor(predictor_68_point_model)
predictor_5_point_model = face_recognition_models.pose_predictor_five_point_model_location()
pose_predictor_5_point = dlib.shape_predictor(predictor_5_point_model)
cnn_face_detection_model = face_recognition_models.cnn_face_detector_model_location()
cnn_face_detector = dlib.cnn_face_detection_model_v1(cnn_face_detection_model)
face_recognition_model = face_recognition_models.face_recognition_model_location()
face_encoder = dlib.face_recognition_model_v1(face_recognition_model)
def _rect_to_css(rect):
"""
Convert a dlib 'rect' object to a plain tuple in (top, right, bottom, left) order
:param rect: a dlib 'rect' object
:return: a plain tuple representation of the rect in (top, right, bottom, left) order
"""
return rect.top(), rect.right(), rect.bottom(), rect.left()
def _css_to_rect(css):
"""
Convert a tuple in (top, right, bottom, left) order to a dlib `rect` object
:param css: plain tuple representation of the rect in (top, right, bottom, left) order
:return: a dlib `rect` object
"""
return dlib.rectangle(css[3], css[0], css[1], css[2])
def _trim_css_to_bounds(css, image_shape):
"""
Make sure a tuple in (top, right, bottom, left) order is within the bounds of the image.
:param css: plain tuple representation of the rect in (top, right, bottom, left) order
:param image_shape: numpy shape of the image array
:return: a trimmed plain tuple representation of the rect in (top, right, bottom, left) order
"""
return max(css[0], 0), min(css[1], image_shape[1]), min(css[2], image_shape[0]), max(css[3], 0)
def face_distance(face_encodings, face_to_compare):
"""
Given a list of face encodings, compare them to a known face encoding and get a euclidean distance
for each comparison face. The distance tells you how similar the faces are.
:param face_encodings: List of face encodings to compare
:param face_to_compare: A face encoding to compare against
:return: A numpy ndarray with the distance for each face in the same order as the 'faces' array
"""
if len(face_encodings) == 0:
return np.empty((0))
return np.linalg.norm(face_encodings - face_to_compare, axis=1)
def load_image_file(file, mode='RGB'):
"""
Loads an image file (.jpg, .png, etc) into a numpy array
:param file: image file name or file object to load
:param mode: format to convert the image to. Only 'RGB' (8-bit RGB, 3 channels) and 'L' (black and white) are supported.
:return: image contents as numpy array
"""
im = PIL.Image.open(file)
if mode:
im = im.convert(mode)
return np.array(im)
def _raw_face_locations(img, number_of_times_to_upsample=1, model="hog"):
"""
Returns an array of bounding boxes of human faces in an image
:param img: An image (as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:param model: Which face detection model to use. "hog" is less accurate but faster on CPUs. "cnn" is a more accurate
deep-learning model which is GPU/CUDA accelerated (if available). The default is "hog".
:return: A list of dlib 'rect' objects of found face locations
"""
if model == "cnn":
return cnn_face_detector(img, number_of_times_to_upsample)
else:
return face_detector(img, number_of_times_to_upsample)
def face_locations(img, number_of_times_to_upsample=1, model="hog"):
"""
Returns an array of bounding boxes of human faces in an image
:param img: An image (as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:param model: Which face detection model to use. "hog" is less accurate but faster on CPUs. "cnn" is a more accurate
deep-learning model which is GPU/CUDA accelerated (if available). The default is "hog".
:return: A list of tuples of found face locations in css (top, right, bottom, left) order
"""
if model == "cnn":
return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
else:
return [_trim_css_to_bounds(_rect_to_css(face), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, model)]
def _raw_face_locations_batched(images, number_of_times_to_upsample=1, batch_size=128):
"""
Returns a 2d array of dlib rects of human faces in an image using the cnn face detector
:param images: A list of images (each as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:return: A list of dlib 'rect' objects of found face locations
"""
return cnn_face_detector(images, number_of_times_to_upsample, batch_size=batch_size)
def batch_face_locations(images, number_of_times_to_upsample=1, batch_size=128):
"""
Returns a 2d array of bounding boxes of human faces in an image using the cnn face detector
If you are using a GPU, this can give you much faster results since the GPU
can process batches of images at once. If you aren't using a GPU, you don't need this function.
:param images: A list of images (each as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:param batch_size: How many images to include in each GPU processing batch.
:return: A list of tuples of found face locations in css (top, right, bottom, left) order
"""
def convert_cnn_detections_to_css(detections):
return [_trim_css_to_bounds(_rect_to_css(face.rect), images[0].shape) for face in detections]
raw_detections_batched = _raw_face_locations_batched(images, number_of_times_to_upsample, batch_size)
return list(map(convert_cnn_detections_to_css, raw_detections_batched))
def _raw_face_landmarks(face_image, face_locations=None, model="large"):
if face_locations is None:
face_locations = _raw_face_locations(face_image)
else:
face_locations = [_css_to_rect(face_location) for face_location in face_locations]
pose_predictor = pose_predictor_68_point
if model == "small":
pose_predictor = pose_predictor_5_point
return [pose_predictor(face_image, face_location) for face_location in face_locations]
def face_landmarks(face_image, face_locations=None, model="large"):
"""
Given an image, returns a dict of face feature locations (eyes, nose, etc) for each face in the image
:param face_image: image to search
:param face_locations: Optionally provide a list of face locations to check.
:param model: Optional - which model to use. "large" (default) or "small" which only returns 5 points but is faster.
:return: A list of dicts of face feature locations (eyes, nose, etc)
"""
landmarks = _raw_face_landmarks(face_image, face_locations, model)
landmarks_as_tuples = [[(p.x, p.y) for p in landmark.parts()] for landmark in landmarks]
# For a definition of each point index, see https://cdn-images-1.medium.com/max/1600/1*AbEg31EgkbXSQehuNJBlWg.png
if model == 'large':
return [{
"chin": points[0:17],
"left_eyebrow": points[17:22],
"right_eyebrow": points[22:27],
"nose_bridge": points[27:31],
"nose_tip": points[31:36],
"left_eye": points[36:42],
"right_eye": points[42:48],
"top_lip": points[48:55] + [points[64]] + [points[63]] + [points[62]] + [points[61]] + [points[60]],
"bottom_lip": points[54:60] + [points[48]] + [points[60]] + [points[67]] + [points[66]] + [points[65]] + [points[64]]
} for points in landmarks_as_tuples]
elif model == 'small':
return [{
"nose_tip": [points[4]],
"left_eye": points[2:4],
"right_eye": points[0:2],
} for points in landmarks_as_tuples]
else:
raise ValueError("Invalid landmarks model type. Supported models are ['small', 'large'].")
def face_encodings(face_image, known_face_locations=None, num_jitters=1, model="small"):
"""
Given an image, return the 128-dimension face encoding for each face in the image.
:param face_image: The image that contains one or more faces
:param known_face_locations: Optional - the bounding boxes of each face if you already know them.
:param num_jitters: How many times to re-sample the face when calculating encoding. Higher is more accurate, but slower (i.e. 100 is 100x slower)
:param model: Optional - which model to use. "large" or "small" (default) which only returns 5 points but is faster.
:return: A list of 128-dimensional face encodings (one for each face in the image)
"""
raw_landmarks = _raw_face_landmarks(face_image, known_face_locations, model)
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
def compare_faces(known_face_encodings, face_encoding_to_check, tolerance=0.6):
"""
Compare a list of face encodings against a candidate encoding to see if they match.
:param known_face_encodings: A list of known face encodings
:param face_encoding_to_check: A single face encoding to compare against the list
:param tolerance: How much distance between faces to consider it a match. Lower is more strict. 0.6 is typical best performance.
:return: A list of True/False values indicating which known_face_encodings match the face encoding to check
"""
return list(face_distance(known_face_encodings, face_encoding_to_check) <= tolerance)
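Because compare_faces is simply face_distance thresholded at a tolerance, a caller who wants a single best guess can work with the distances directly. A minimal sketch, reusing the image files from the earlier examples:

# Sketch: pick the single closest known face instead of a list of booleans.
import numpy as np
import face_recognition

known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file("obama.jpg"))[0],
    face_recognition.face_encodings(face_recognition.load_image_file("biden.jpg"))[0],
]
known_names = ["Barack Obama", "Joe Biden"]
unknown = face_recognition.face_encodings(face_recognition.load_image_file("obama2.jpg"))[0]

distances = face_recognition.face_distance(known_encodings, unknown)
best = np.argmin(distances)
print(known_names[best] if distances[best] <= 0.6 else "unknown")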

View File

@ -0,0 +1,72 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
import click
import os
import re
import face_recognition.api as face_recognition
import multiprocessing
import sys
import itertools
def print_result(filename, location):
top, right, bottom, left = location
print("{},{},{},{},{}".format(filename, top, right, bottom, left))
def test_image(image_to_check, model, upsample):
unknown_image = face_recognition.load_image_file(image_to_check)
face_locations = face_recognition.face_locations(unknown_image, number_of_times_to_upsample=upsample, model=model)
for face_location in face_locations:
print_result(image_to_check, face_location)
def image_files_in_folder(folder):
return [os.path.join(folder, f) for f in os.listdir(folder) if re.match(r'.*\.(jpg|jpeg|png)', f, flags=re.I)]
def process_images_in_process_pool(images_to_check, number_of_cpus, model, upsample):
if number_of_cpus == -1:
processes = None
else:
processes = number_of_cpus
# macOS will crash due to a bug in libdispatch if you don't use 'forkserver'
context = multiprocessing
if "forkserver" in multiprocessing.get_all_start_methods():
context = multiprocessing.get_context("forkserver")
pool = context.Pool(processes=processes)
function_parameters = zip(
images_to_check,
itertools.repeat(model),
itertools.repeat(upsample),
)
pool.starmap(test_image, function_parameters)
@click.command()
@click.argument('image_to_check')
@click.option('--cpus', default=1, help='number of CPU cores to use in parallel. -1 means "use all in system"')
@click.option('--model', default="hog", help='Which face detection model to use. Options are "hog" or "cnn".')
@click.option('--upsample', default=0, help='How many times to upsample the image looking for faces. Higher numbers find smaller faces.')
def main(image_to_check, cpus, model, upsample):
# Multi-core processing only supported on Python 3.4 or greater
if (sys.version_info < (3, 4)) and cpus != 1:
click.echo("WARNING: Multi-processing support requires Python 3.4 or greater. Falling back to single-threaded processing!")
cpus = 1
if os.path.isdir(image_to_check):
if cpus == 1:
[test_image(image_file, model, upsample) for image_file in image_files_in_folder(image_to_check)]
else:
process_images_in_process_pool(image_files_in_folder(image_to_check), cpus, model, upsample)
else:
test_image(image_to_check, model, upsample)
if __name__ == "__main__":
main()
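The CLI can also be exercised in-process with click's CliRunner, as the test suite later in this diff does; the folder name ./images is a placeholder:

# Sketch: invoke the face_detection CLI programmatically.
from click.testing import CliRunner
from face_recognition import face_detection_cli

runner = CliRunner()
# Equivalent to running: face_detection ./images --model hog --upsample 0
result = runner.invoke(face_detection_cli.main, ["./images", "--model", "hog", "--upsample", "0"])
print(result.output)  # one "filename,top,right,bottom,left" line per detected face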

View File

@ -0,0 +1,119 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
import click
import os
import re
import face_recognition.api as face_recognition
import multiprocessing
import itertools
import sys
import PIL.Image
import numpy as np
def scan_known_people(known_people_folder):
known_names = []
known_face_encodings = []
for file in image_files_in_folder(known_people_folder):
basename = os.path.splitext(os.path.basename(file))[0]
img = face_recognition.load_image_file(file)
encodings = face_recognition.face_encodings(img)
if len(encodings) > 1:
click.echo("WARNING: More than one face found in {}. Only considering the first face.".format(file))
if len(encodings) == 0:
click.echo("WARNING: No faces found in {}. Ignoring file.".format(file))
else:
known_names.append(basename)
known_face_encodings.append(encodings[0])
return known_names, known_face_encodings
def print_result(filename, name, distance, show_distance=False):
if show_distance:
print("{},{},{}".format(filename, name, distance))
else:
print("{},{}".format(filename, name))
def test_image(image_to_check, known_names, known_face_encodings, tolerance=0.6, show_distance=False):
unknown_image = face_recognition.load_image_file(image_to_check)
# Scale down image if it's giant so things run a little faster
if max(unknown_image.shape) > 1600:
pil_img = PIL.Image.fromarray(unknown_image)
pil_img.thumbnail((1600, 1600), PIL.Image.LANCZOS)
unknown_image = np.array(pil_img)
unknown_encodings = face_recognition.face_encodings(unknown_image)
for unknown_encoding in unknown_encodings:
distances = face_recognition.face_distance(known_face_encodings, unknown_encoding)
result = list(distances <= tolerance)
if True in result:
[print_result(image_to_check, name, distance, show_distance) for is_match, name, distance in zip(result, known_names, distances) if is_match]
else:
print_result(image_to_check, "unknown_person", None, show_distance)
if not unknown_encodings:
# Print that no faces were found in the image
print_result(image_to_check, "no_persons_found", None, show_distance)
def image_files_in_folder(folder):
return [os.path.join(folder, f) for f in os.listdir(folder) if re.match(r'.*\.(jpg|jpeg|png)', f, flags=re.I)]
def process_images_in_process_pool(images_to_check, known_names, known_face_encodings, number_of_cpus, tolerance, show_distance):
if number_of_cpus == -1:
processes = None
else:
processes = number_of_cpus
# macOS will crash due to a bug in libdispatch if you don't use 'forkserver'
context = multiprocessing
if "forkserver" in multiprocessing.get_all_start_methods():
context = multiprocessing.get_context("forkserver")
pool = context.Pool(processes=processes)
function_parameters = zip(
images_to_check,
itertools.repeat(known_names),
itertools.repeat(known_face_encodings),
itertools.repeat(tolerance),
itertools.repeat(show_distance)
)
pool.starmap(test_image, function_parameters)
@click.command()
@click.argument('known_people_folder')
@click.argument('image_to_check')
@click.option('--cpus', default=1, help='number of CPU cores to use in parallel (can speed up processing lots of images). -1 means "use all in system"')
@click.option('--tolerance', default=0.6, help='Tolerance for face comparisons. Default is 0.6. Lower this if you get multiple matches for the same person.')
@click.option('--show-distance', default=False, type=bool, help='Output face distance. Useful for tweaking tolerance setting.')
def main(known_people_folder, image_to_check, cpus, tolerance, show_distance):
known_names, known_face_encodings = scan_known_people(known_people_folder)
# Multi-core processing only supported on Python 3.4 or greater
if (sys.version_info < (3, 4)) and cpus != 1:
click.echo("WARNING: Multi-processing support requires Python 3.4 or greater. Falling back to single-threaded processing!")
cpus = 1
if os.path.isdir(image_to_check):
if cpus == 1:
[test_image(image_file, known_names, known_face_encodings, tolerance, show_distance) for image_file in image_files_in_folder(image_to_check)]
else:
process_images_in_process_pool(image_files_in_folder(image_to_check), known_names, known_face_encodings, cpus, tolerance, show_distance)
else:
test_image(image_to_check, known_names, known_face_encodings, tolerance, show_distance)
if __name__ == "__main__":
main()
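These helpers can also be driven directly from Python, bypassing click, which is handy when experimenting with the tolerance value. A sketch with placeholder paths:

# Sketch: call the helpers directly to tune the matching tolerance.
known_names, known_encodings = scan_known_people("./known_people")  # placeholder folder
# A lower tolerance means stricter matching; show_distance=True prints the
# distances, which helps when picking a value.
test_image("./unknown/party.jpg", known_names, known_encodings,
           tolerance=0.55, show_distance=True)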

View File

@ -0,0 +1,2 @@
[build-system]
requires = ["setuptools", "wheel"]

View File

@ -0,0 +1,6 @@
face_recognition_models
Click>=6.0
dlib>=19.3.0
numpy
Pillow
scipy>=0.17.0

View File

@ -0,0 +1,15 @@
pip==21.1
bumpversion==0.5.3
wheel==0.29.0
watchdog==0.8.3
flake8
tox==2.3.1
coverage==4.1
Sphinx==1.4.8
cryptography==3.3.2
pyyaml>=4.2b1
face_recognition_models
Click>=6.0
dlib>=19.3.0
numpy
scipy

View File

@ -0,0 +1,31 @@
[bumpversion]
current_version = 1.4.0
commit = True
tag = True
[bumpversion:file:setup.py]
search = version='{current_version}'
replace = version='{new_version}'
[bumpversion:file:face_recognition/__init__.py]
search = __version__ = '{current_version}'
replace = __version__ = '{new_version}'
[bdist_wheel]
universal = 1
[flake8]
exclude =
.github,
.idea,
.eggs,
examples,
docs,
.tox,
bin,
dist,
tools,
*.egg-info,
__init__.py,
*.yml
max-line-length = 160

View File

@ -0,0 +1,64 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from setuptools import setup
with open('README.rst') as readme_file:
readme = readme_file.read()
with open('HISTORY.rst') as history_file:
history = history_file.read()
requirements = [
'face_recognition_models>=0.3.0',
'Click>=6.0',
'dlib>=19.7',
'numpy',
'Pillow'
]
test_requirements = [
'tox',
'flake8'
]
setup(
name='face_recognition',
version='1.4.0',
description="Recognize faces from Python or from the command line",
long_description=readme + '\n\n' + history,
author="Adam Geitgey",
author_email='ageitgey@gmail.com',
url='https://github.com/ageitgey/face_recognition',
packages=[
'face_recognition',
],
package_dir={'face_recognition': 'face_recognition'},
package_data={
'face_recognition': ['models/*.dat']
},
entry_points={
'console_scripts': [
'face_recognition=face_recognition.face_recognition_cli:main',
'face_detection=face_recognition.face_detection_cli:main'
]
},
install_requires=requirements,
license="MIT license",
zip_safe=False,
keywords='face_recognition',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Natural Language :: English',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
test_suite='tests',
tests_require=test_requirements
)

View File

@ -0,0 +1 @@
# -*- coding: utf-8 -*-

View File

@ -0,0 +1,344 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
test_face_recognition
----------------------------------
Tests for `face_recognition` module.
"""
import unittest
import os
import numpy as np
from click.testing import CliRunner
from face_recognition import api
from face_recognition import face_recognition_cli
from face_recognition import face_detection_cli
class Test_face_recognition(unittest.TestCase):
def test_load_image_file(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
self.assertEqual(img.shape, (1137, 910, 3))
def test_load_image_file_32bit(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', '32bit.png'))
self.assertEqual(img.shape, (1200, 626, 3))
def test_raw_face_locations(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
detected_faces = api._raw_face_locations(img)
self.assertEqual(len(detected_faces), 1)
self.assertEqual(detected_faces[0].top(), 142)
self.assertEqual(detected_faces[0].bottom(), 409)
def test_cnn_raw_face_locations(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
detected_faces = api._raw_face_locations(img, model="cnn")
self.assertEqual(len(detected_faces), 1)
self.assertAlmostEqual(detected_faces[0].rect.top(), 144, delta=25)
self.assertAlmostEqual(detected_faces[0].rect.bottom(), 389, delta=25)
def test_raw_face_locations_32bit_image(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', '32bit.png'))
detected_faces = api._raw_face_locations(img)
self.assertEqual(len(detected_faces), 1)
self.assertEqual(detected_faces[0].top(), 290)
self.assertEqual(detected_faces[0].bottom(), 558)
def test_cnn_raw_face_locations_32bit_image(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', '32bit.png'))
detected_faces = api._raw_face_locations(img, model="cnn")
self.assertEqual(len(detected_faces), 1)
self.assertAlmostEqual(detected_faces[0].rect.top(), 259, delta=25)
self.assertAlmostEqual(detected_faces[0].rect.bottom(), 552, delta=25)
def test_face_locations(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
detected_faces = api.face_locations(img)
self.assertEqual(len(detected_faces), 1)
self.assertEqual(detected_faces[0], (142, 617, 409, 349))
def test_cnn_face_locations(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
detected_faces = api.face_locations(img, model="cnn")
self.assertEqual(len(detected_faces), 1)
self.assertAlmostEqual(detected_faces[0][0], 144, delta=25)
self.assertAlmostEqual(detected_faces[0][1], 608, delta=25)
self.assertAlmostEqual(detected_faces[0][2], 389, delta=25)
self.assertAlmostEqual(detected_faces[0][3], 363, delta=25)
def test_partial_face_locations(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama_partial_face.jpg'))
detected_faces = api.face_locations(img)
self.assertEqual(len(detected_faces), 1)
self.assertEqual(detected_faces[0], (142, 191, 365, 0))
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama_partial_face2.jpg'))
detected_faces = api.face_locations(img)
self.assertEqual(len(detected_faces), 1)
self.assertEqual(detected_faces[0], (142, 551, 409, 349))
def test_raw_face_locations_batched(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
images = [img, img, img]
batched_detected_faces = api._raw_face_locations_batched(images, number_of_times_to_upsample=0)
for detected_faces in batched_detected_faces:
self.assertEqual(len(detected_faces), 1)
self.assertEqual(detected_faces[0].rect.top(), 154)
self.assertEqual(detected_faces[0].rect.bottom(), 390)
def test_batched_face_locations(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
images = [img, img, img]
batched_detected_faces = api.batch_face_locations(images, number_of_times_to_upsample=0)
for detected_faces in batched_detected_faces:
self.assertEqual(len(detected_faces), 1)
self.assertEqual(detected_faces[0], (154, 611, 390, 375))
def test_raw_face_landmarks(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
face_landmarks = api._raw_face_landmarks(img)
example_landmark = face_landmarks[0].parts()[10]
self.assertEqual(len(face_landmarks), 1)
self.assertEqual(face_landmarks[0].num_parts, 68)
self.assertEqual((example_landmark.x, example_landmark.y), (552, 399))
def test_face_landmarks(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
face_landmarks = api.face_landmarks(img)
self.assertEqual(
set(face_landmarks[0].keys()),
set(['chin', 'left_eyebrow', 'right_eyebrow', 'nose_bridge',
'nose_tip', 'left_eye', 'right_eye', 'top_lip',
'bottom_lip']))
self.assertEqual(
face_landmarks[0]['chin'],
[(369, 220), (372, 254), (378, 289), (384, 322), (395, 353),
(414, 382), (437, 407), (464, 424), (495, 428), (527, 420),
(552, 399), (576, 372), (594, 344), (604, 314), (610, 282),
(613, 250), (615, 219)])
def test_face_landmarks_small_model(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
face_landmarks = api.face_landmarks(img, model="small")
self.assertEqual(
set(face_landmarks[0].keys()),
set(['nose_tip', 'left_eye', 'right_eye']))
self.assertEqual(face_landmarks[0]['nose_tip'], [(496, 295)])
def test_face_encodings(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
encodings = api.face_encodings(img)
self.assertEqual(len(encodings), 1)
self.assertEqual(len(encodings[0]), 128)
def test_face_encodings_large_model(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
encodings = api.face_encodings(img, model='large')
self.assertEqual(len(encodings), 1)
self.assertEqual(len(encodings[0]), 128)
def test_face_distance(self):
img_a1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
img_a2 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama2.jpg'))
img_a3 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama3.jpg'))
img_b1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg'))
face_encoding_a1 = api.face_encodings(img_a1)[0]
face_encoding_a2 = api.face_encodings(img_a2)[0]
face_encoding_a3 = api.face_encodings(img_a3)[0]
face_encoding_b1 = api.face_encodings(img_b1)[0]
faces_to_compare = [
face_encoding_a2,
face_encoding_a3,
face_encoding_b1]
distance_results = api.face_distance(faces_to_compare, face_encoding_a1)
# 0.6 is the default face distance match threshold. So we'll spot-check that the numbers returned
# are above or below that based on if they should match (since the exact numbers could vary).
self.assertEqual(type(distance_results), np.ndarray)
self.assertLessEqual(distance_results[0], 0.6)
self.assertLessEqual(distance_results[1], 0.6)
self.assertGreater(distance_results[2], 0.6)
def test_face_distance_empty_lists(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg'))
face_encoding = api.face_encodings(img)[0]
# empty python list
faces_to_compare = []
distance_results = api.face_distance(faces_to_compare, face_encoding)
self.assertEqual(type(distance_results), np.ndarray)
self.assertEqual(len(distance_results), 0)
# empty numpy list
faces_to_compare = np.array([])
distance_results = api.face_distance(faces_to_compare, face_encoding)
self.assertEqual(type(distance_results), np.ndarray)
self.assertEqual(len(distance_results), 0)
def test_compare_faces(self):
img_a1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg'))
img_a2 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama2.jpg'))
img_a3 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama3.jpg'))
img_b1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg'))
face_encoding_a1 = api.face_encodings(img_a1)[0]
face_encoding_a2 = api.face_encodings(img_a2)[0]
face_encoding_a3 = api.face_encodings(img_a3)[0]
face_encoding_b1 = api.face_encodings(img_b1)[0]
faces_to_compare = [
face_encoding_a2,
face_encoding_a3,
face_encoding_b1]
match_results = api.compare_faces(faces_to_compare, face_encoding_a1)
self.assertEqual(type(match_results), list)
self.assertTrue(match_results[0])
self.assertTrue(match_results[1])
self.assertFalse(match_results[2])
def test_compare_faces_empty_lists(self):
img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg'))
face_encoding = api.face_encodings(img)[0]
# empty python list
faces_to_compare = []
match_results = api.compare_faces(faces_to_compare, face_encoding)
self.assertEqual(type(match_results), list)
self.assertListEqual(match_results, [])
# empty numpy list
faces_to_compare = np.array([])
match_results = api.compare_faces(faces_to_compare, face_encoding)
self.assertEqual(type(match_results), list)
self.assertListEqual(match_results, [])
def test_command_line_interface_options(self):
target_string = 'Show this message and exit.'
runner = CliRunner()
help_result = runner.invoke(face_recognition_cli.main, ['--help'])
self.assertEqual(help_result.exit_code, 0)
self.assertTrue(target_string in help_result.output)
def test_command_line_interface(self):
target_string = 'obama.jpg,obama'
runner = CliRunner()
image_folder = os.path.join(os.path.dirname(__file__), 'test_images')
image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')
result = runner.invoke(face_recognition_cli.main, args=[image_folder, image_file])
self.assertEqual(result.exit_code, 0)
self.assertTrue(target_string in result.output)
def test_command_line_interface_big_image(self):
target_string = 'obama3.jpg,obama'
runner = CliRunner()
image_folder = os.path.join(os.path.dirname(__file__), 'test_images')
image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama3.jpg')
result = runner.invoke(face_recognition_cli.main, args=[image_folder, image_file])
self.assertEqual(result.exit_code, 0)
self.assertTrue(target_string in result.output)
def test_command_line_interface_tolerance(self):
target_string = 'obama.jpg,obama'
runner = CliRunner()
image_folder = os.path.join(os.path.dirname(__file__), 'test_images')
image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')
result = runner.invoke(face_recognition_cli.main, args=[image_folder, image_file, "--tolerance", "0.55"])
self.assertEqual(result.exit_code, 0)
self.assertTrue(target_string in result.output)
def test_command_line_interface_show_distance(self):
target_string = 'obama.jpg,obama,0.0'
runner = CliRunner()
image_folder = os.path.join(os.path.dirname(__file__), 'test_images')
image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')
result = runner.invoke(face_recognition_cli.main, args=[image_folder, image_file, "--show-distance", "1"])
self.assertEqual(result.exit_code, 0)
self.assertTrue(target_string in result.output)
def test_fd_command_line_interface_options(self):
target_string = 'Show this message and exit.'
runner = CliRunner()
help_result = runner.invoke(face_detection_cli.main, ['--help'])
self.assertEqual(help_result.exit_code, 0)
self.assertTrue(target_string in help_result.output)
def test_fd_command_line_interface(self):
runner = CliRunner()
image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')
result = runner.invoke(face_detection_cli.main, args=[image_file])
self.assertEqual(result.exit_code, 0)
parts = result.output.split(",")
self.assertTrue("obama.jpg" in parts[0])
self.assertEqual(len(parts), 5)
def test_fd_command_line_interface_folder(self):
runner = CliRunner()
image_file = os.path.join(os.path.dirname(__file__), 'test_images')
result = runner.invoke(face_detection_cli.main, args=[image_file])
self.assertEqual(result.exit_code, 0)
self.assertTrue("obama_partial_face2.jpg" in result.output)
self.assertTrue("obama.jpg" in result.output)
self.assertTrue("obama2.jpg" in result.output)
self.assertTrue("obama3.jpg" in result.output)
self.assertTrue("biden.jpg" in result.output)
def test_fd_command_line_interface_hog_model(self):
target_string = 'obama.jpg'
runner = CliRunner()
image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')
result = runner.invoke(face_detection_cli.main, args=[image_file, "--model", "hog"])
self.assertEqual(result.exit_code, 0)
self.assertTrue(target_string in result.output)
def test_fd_command_line_interface_cnn_model(self):
target_string = 'obama.jpg'
runner = CliRunner()
image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')
result = runner.invoke(face_detection_cli.main, args=[image_file, "--model", "cnn"])
self.assertEqual(result.exit_code, 0)
self.assertTrue(target_string in result.output)

Some files were not shown because too many files have changed in this diff.