Deepface is a lightweight facial analysis framework for Python, covering face recognition and demography (age, gender, emotion and race) analysis. You can apply facial analysis with a few lines of code. It aims to bridge the gap between software engineering and machine learning studies.
The easiest way to install deepface is to download it from PyPI.
pip install deepface
A modern face recognition pipeline consists of 4 common stages: detect, align, represent and verify. DeepFace handles all these common stages in the background.
Face Verification - Demo
The verification function under the DeepFace interface verifies whether a face pair belongs to the same person or to different persons.
from deepface import DeepFace
result = DeepFace.verify("img1.jpg", "img2.jpg")
print("Is verified: ", result["verified"])
Each call of the verification function builds a face recognition model from scratch, and this is a costly operation. If you are going to verify multiple face pairs sequentially, you should pass an array of pairs to the verification function instead to speed the operation up. In this way, the complex face recognition models are built only once.
dataset = [
    ['dataset/img1.jpg', 'dataset/img2.jpg'],
    ['dataset/img1.jpg', 'dataset/img3.jpg']
]
resp_obj = DeepFace.verify(dataset)
Items of resp_obj might be unsorted when you pass multiple pairs to the verify function, so check the item indexes in the response object rather than relying on the input order.
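A minimal sketch of reading the bulk response by key; the key format is an assumption (e.g. "pair_1"), so print resp_obj once to confirm the structure your version returns.
# Sketch: read results by key instead of relying on the input order.
# The key format (e.g. "pair_1") is an assumption; print resp_obj to confirm.
for key, item in resp_obj.items():
    print(key, "->", item["verified"])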
Large scale face recognition - Demo
You can apply face recognition on a large scale data set as well. Face recognition requires applying face verification many times. To handle this, deepface offers an out-of-the-box find function. Representations of the face photos in your database folder are stored in a pickle file the first time the find function is called. On later calls, deepface only computes the representation of the target image, so finding an identity in a large scale data set takes just seconds.
from deepface import DeepFace
import pandas as pd
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")
#dfs = DeepFace.find(img_path = ["img1.jpg", "img2.jpg"], db_path = "C:/workspace/my_db")
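The find function returns a pandas DataFrame of candidate identities. Here is a sketch of inspecting it; the column names are assumptions (an "identity" column plus a model/metric-specific distance column), so print df.columns to confirm them for your version.
# Sketch: inspect the candidates returned by find.
# Column names are assumptions; print df.columns to confirm them.
if df.shape[0] > 0:
    print(df.head())                       # closest identities come first
    print("Best match: ", df.iloc[0]["identity"])
else:
    print("No match found in the database")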
Supported face recognition models
Face recognition can be handled by different models. Currently, VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace and DeepID models are supported in deepface. The default configuration verifies faces with the VGG-Face model. You can set the base model for verification as illustrated below. Accuracy and speed differ based on the model used.
models = ["VGG-Face", "Facenet", "OpenFace", "DeepFace", "DeepID"]
for model in models:
    result = DeepFace.verify("img1.jpg", "img2.jpg", model_name = model)
The complexity and response time of each face recognition model differ, and so do the accuracy scores. The mean ± std. dev. of 7 runs on CPU for each model in my experiments is shown in the following table.
Model | VGG-Face | OpenFace | Google FaceNet | Facebook DeepFace |
---|---|---|---|---|
Building | 2.35 s ± 46.9 ms | 6.37 s ± 1.28 s | 25.7 s ± 7.93 s | 23.9 s ± 2.52 s |
Verification | 897 ms ± 38.3 ms | 616 ms ± 12.1 ms | 684 ms ± 7.69 ms | 605 ms ± 13.2 ms |
Passing pre-built face recognition models
You can build a face recognition model once and pass it to the verify function as well. This makes sense if you need to call the verify function several times; see the timing sketch after the snippet below.
from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
model = VGGFace.loadModel() #all face recognition models have loadModel() function in their interfaces
DeepFace.verify("img1.jpg", "img2.jpg", model_name = "VGG-Face", model = model)
Similarity
Face recognition models are regular convolutional neural networks, and they are responsible for representing face photos as vectors. The verification decision is based on the distance between those vectors: a pair is classified as the same person if the distance is less than a threshold.
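To make the decision rule concrete, here is a minimal sketch of computing a cosine distance between two embedding vectors and comparing it to a threshold. The random vectors and the 0.40 threshold are illustrative assumptions only; deepface tunes its thresholds per model and metric.
import numpy as np

def cosine_distance(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# embedding_1 and embedding_2 stand for the vectors a recognition model would
# produce for two face photos (random values here, just to show the mechanics).
embedding_1 = np.random.rand(128)
embedding_2 = np.random.rand(128)

threshold = 0.40  # illustrative value only, not deepface's tuned threshold
print("Verified: ", cosine_distance(embedding_1, embedding_2) < threshold)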
The distance can be computed with different metrics such as cosine similarity, Euclidean distance and L2-normalized Euclidean distance. The default configuration uses cosine similarity. You can alternatively set the similarity metric for verification as demonstrated below.
metrics = ["cosine", "euclidean", "euclidean_l2"]
for metric in metrics:
    result = DeepFace.verify("img1.jpg", "img2.jpg", model_name = "VGG-Face", distance_metric = metric)
Ensemble learning for face recognition - Demo
A face recognition task can be handled by several models and similarity metrics. We can combine the predictions of all of those models and metrics to improve the accuracy of a face recognition task. This offers a huge improvement in accuracy, precision and recall, but it runs much slower than single models.
resp_obj = DeepFace.verify("img1.jpg", "img2.jpg", model_name = "Ensemble")
df = DeepFace.find(img_path = "img1.jpg", db_path = "my_db", model_name = "Ensemble")
Facial Attribute Analysis - Demo
Deepface also offers facial attribute analysis, including age, gender, facial expression (angry, fear, neutral, sad, disgust, happy and surprise) and race (asian, white, middle eastern, indian, latino and black) predictions. The analyze function under the DeepFace interface is used to find the demography of a face.
from deepface import DeepFace
demography = DeepFace.analyze("img4.jpg", actions = ['age', 'gender', 'race', 'emotion'])
#demographies = DeepFace.analyze(["img1.jpg", "img2.jpg", "img3.jpg"]) #analyzing multiple faces at the same time
print("Age: ", demography["age"])
print("Gender: ", demography["gender"])
print("Emotion: ", demography["dominant_emotion"])
print("Race: ", demography["dominant_race"])
Model building and prediction times differ across these facial analysis models. The mean ± std. dev. of 7 runs on CPU for each model in my experiments is shown in the following table.
Model | Emotion | Age | Gender | Race |
---|---|---|---|---|
Building | 243 ms ± 15.2 ms | 2.25 s ± 34.9 ms | 2.25 s ± 90.9 ms | 2.23 s ± 68.6 ms |
Prediction | 389 ms ± 11.4 ms | 524 ms ± 16.1 ms | 516 ms ± 10.8 ms | 493 ms ± 20.3 ms |
Passing pre-built facial analysis models
You can build the facial attribute analysis models once and pass them to the analyze function as well. This makes sense if you need to call the analyze function several times.
import json
from deepface.extendedmodels import Age, Gender, Race, Emotion
models = {}
models["emotion"] = Emotion.loadModel()
models["age"] = Age.loadModel()
models["gender"] = Gender.loadModel()
models["race"] = Race.loadModel()
DeepFace.analyze("img1.jpg", models=models)
Streaming and Real Time Analysis - Demo
You can run deepface for real time videos as well.
Calling the stream function under the DeepFace interface will access your webcam and apply both face recognition and facial attribute analysis. The stream function expects a database folder containing face images. VGG-Face is the default face recognition model and cosine similarity is the default distance metric, just as in the verify function. The function starts to analyze once it can focus on a face for 5 consecutive frames, and then it shows the results for 5 seconds.
from deepface import DeepFace
DeepFace.stream("/user/database")
Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.
user
├── database
│ ├── Alice
│ │ ├── Alice1.jpg
│ │ ├── Alice2.jpg
│ ├── Bob
│ │ ├── Bob.jpg
By the way, you should use a regular slash ( / ) instead of a backslash ( \ ) on Windows while passing the path to the stream function, e.g. DeepFace.stream("C:/User/Sefik/Desktop/database").
API - Demo
Deepface serves an API as well.
You can clone /api/api.py and pass it to the python command as an argument. This will get a REST service up. In this way, you can call deepface from an external system such as a mobile app or a web application.
python api.py
Both face recognition and facial attribute analysis are covered by the API. You are expected to call these functions as HTTP POST methods. Service endpoints will be http://127.0.0.1:5000/verify for face recognition and http://127.0.0.1:5000/analyze for facial attribute analysis. You should pass input images as base64 encoded strings in this case. Here, you can find a postman project.
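A rough sketch of calling the verify endpoint from Python with the requests library. The payload structure below (base64 images under an "img" list with "img1"/"img2" fields) is an assumption; check api.py and the postman project for the exact schema your version of the service expects.
import base64
import requests

def to_base64(path):
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode("utf-8")

# Payload structure is an assumption; see api.py / the postman project for the real schema.
payload = {
    "model_name": "VGG-Face",
    "img": [
        {"img1": to_base64("img1.jpg"), "img2": to_base64("img2.jpg")}
    ]
}

resp = requests.post("http://127.0.0.1:5000/verify", json = payload)
print(resp.json())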
Deepface is covered in this playlist as video lectures. Subscribe to the channel to stay up-to-date and be informed when a new lecture is added.
The reference face recognition models have different types of licenses. This framework is just a wrapper for those models, so their license types are inherited as well. You should check the licenses of the face recognition models before use.
Here, OpenFace is licensed under the Apache License 2.0, and FB DeepFace and FaceNet are licensed under the MIT License. Both the Apache License 2.0 and the MIT License allow commercial use.
On the other hand, VGG-Face is licensed under the Creative Commons Attribution License, so adopting VGG-Face for commercial use is restricted.
There are many ways to support a project - starring⭐️ the GitHub repos is just one.
You can also support this project through Patreon.
Please cite deepface in your publications if it helps your research. Here is an example BibTeX entry:
@misc{serengil2020deepface,
  abstract = {A Lightweight Face Recognition and Facial Attribute Analysis Framework for Python},
  author = {Serengil, Sefik Ilkin},
  title = {deepface},
  url = {https://github.com/serengil/deepface},
  year = {2020}
}
Deepface is licensed under the MIT License - see LICENSE for more details.
Logo is created by Adrien Coquet. Licensed under Creative Commons: By Attribution 3.0 License.