
Deep-structured-facial-landmark-detection

This is the official implementation of the paper "Deep Structured Prediction for Facial Landmark Detection" (NeurIPS 2019).

Requirements

  • Python 3.7
  • TensorFlow 1.15
  • NumPy
  • SciPy
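
A quick way to confirm your environment matches these versions (a minimal sketch; this check is only illustrative and is not part of the repository):

    # Minimal environment check (illustrative only; not part of the repository).
    import sys
    import numpy as np
    import scipy
    import tensorflow as tf

    print("python    :", sys.version.split()[0])  # expect 3.7.x
    print("tensorflow:", tf.__version__)          # expect 1.15.x
    print("numpy     :", np.__version__)
    print("scipy     :", scipy.__version__)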

Pretrained models

Data for evaluation

Link to data folder. Download this folder and use it to replace the data folder in the repository (some of the files are too large to be included in the repository). Note that the images used are the original images provided on the official websites listed below, without any preprocessing (preprocessing such as cropping and resizing is done in the evaluation code).

Note: the image paths and ground-truth labels are stored in the .mat and .tfrecords files. You do not need the .mat files to run the code; they are provided only as a guide to how the images are stored.
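
To see how the annotations are organized, the files can be inspected directly. The sketch below assumes the .mat/.tfrecords layout described above; the file paths are placeholders, and the actual feature schema is defined by the evaluation code, not shown here.

    # Minimal sketch for inspecting the annotation files (paths are hypothetical).
    import scipy.io
    import tensorflow as tf

    # .mat files: only a guide to how images/labels are stored; not needed to run the code.
    mat = scipy.io.loadmat("data/annotations_example.mat")  # placeholder path
    print([k for k in mat.keys() if not k.startswith("__")])

    # .tfrecords files: list the feature keys of the first serialized example.
    for raw_record in tf.python_io.tf_record_iterator("data/eval_example.tfrecords"):  # placeholder path
        example = tf.train.Example()
        example.ParseFromString(raw_record)
        print(sorted(example.features.feature.keys()))
        break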

Links to datasets

300W, Menpo, COFW, 300VW

Note: please follow the instructions on the official websites of the datasets for copyright and license information, etc.

License

This code is for research purposes only. Please follow the GPL-3.0 license if you use the code.

Citation

@incollection{NIPS2019_8515,
    title =     {Deep Structured Prediction for Facial Landmark Detection},
    author =    {Chen, Lisha and Su, Hui and Ji, Qiang},
    booktitle = {Advances in Neural Information Processing Systems 32},
    pages =     {2450--2460},
    year =      {2019},
    publisher = {Curran Associates, Inc.},
    url = {http://papers.nips.cc/paper/8515-deep-structured-prediction-for-facial-landmark-detection.pdf}
  }

Acknowledgement

  • The CNN backbone uses FAN; ours is a direct TensorFlow reimplementation of the provided PyTorch code.
  • The 3D model construction uses non-rigid structure from motion and CE-CLM.
  • The 300wlptrain protocol uses 300W-LP for pre-training.

We thank the authors for providing the code and data. Please cite their works and ours if you use the code or data.
