Main code of the CVPR 2020 paper "Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing".
Requirements:
- Python 3.6 (numpy, skimage, scipy)
- TensorFlow >= 1.4
- opencv2
- Pillow (PIL)
- easydict
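The dependencies can typically be installed with pip (a sketch only; the exact TensorFlow 1.x package and version must match your CUDA setup):

pip install numpy scikit-image scipy opencv-python Pillow easydict "tensorflow-gpu>=1.4,<2.0"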
You can use PRNet to generate the virtual depth maps.
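The snippet below is a minimal sketch of how such a depth map could be generated with PRNet's Python API (the PRN class in api.py); it is not the authors' exact pipeline, the file paths are placeholders, and spoof frames are conventionally labeled with all-zero depth maps.

```python
# A rough sketch (not the authors' exact preprocessing) of generating a
# pseudo depth map for a real face with PRNet. It assumes the PRNet repo,
# its trained weights, and dlib are available; file names are placeholders.
import numpy as np
from skimage.io import imread, imsave
from api import PRN  # PRNet's Python API (api.py in the PRNet repo)

prn = PRN(is_dlib=True)            # use dlib to detect the face

image = imread('live_frame.jpg')   # RGB uint8 frame (placeholder path)
pos = prn.process(image)           # regress the 3D position map
if pos is None:
    raise RuntimeError('No face detected in the frame.')
vertices = prn.get_vertices(pos)   # (N, 3) array of x, y, z per vertex

# Crude rasterization: scatter normalized vertex depths onto the image plane.
h, w = image.shape[:2]
depth = np.zeros((h, w), dtype=np.float32)
x = np.clip(vertices[:, 0].astype(np.int32), 0, w - 1)
y = np.clip(vertices[:, 1].astype(np.int32), 0, h - 1)
z = vertices[:, 2]
z = (z - z.min()) / (z.max() - z.min() + 1e-8)   # normalize depth to [0, 1]
np.maximum.at(depth, (y, x), z)    # keep the largest depth where vertices overlap

imsave('depth_map.png', (depth * 255).astype(np.uint8))
```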
For the single-frame stage:
cd fas_sgtd_single_frame && bash train.sh
For the multi-frame stage:
cd fas_sgtd_multi_frame && bash train.sh
We provide an example for Protocol 1 of OULU-NPU. You can download the models from BaiduDrive (pwd: luik) or GoogleDrive and put them into fas_sgtd_multi_frame/model_save/.
cd fas_sgtd_multi_frame && bash test.sh
The code is released under the MIT license. It is intended for research purposes only; commercial use is not allowed.
If you use this code, please consider citing:
@inproceedings{wang2020deep,
  title = {Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing},
  author = {Wang, Zezheng and Yu, Zitong and Zhao, Chenxu and Zhu, Xiangyu and Qin, Yunxiao and Zhou, Qiusheng and Zhou, Feng and Lei, Zhen},
  booktitle = {CVPR},
  year = {2020}
}
@inproceedings{yu2020searching,
  title = {Searching Central Difference Convolutional Networks for Face Anti-Spoofing},
  author = {Yu, Zitong and Zhao, Chenxu and Wang, Zezheng and Qin, Yunxiao and Su, Zhuo and Li, Xiaobai and Zhou, Feng and Zhao, Guoying},
  booktitle = {CVPR},
  year = {2020}
}
@inproceedings{qin2019learning,
  title = {Learning Meta Model for Zero- and Few-shot Face Anti-spoofing},
  author = {Qin, Yunxiao and Zhao, Chenxu and Zhu, Xiangyu and Wang, Zezheng and Yu, Zitong and Fu, Tianyu and Zhou, Feng and Shi, Jingping and Lei, Zhen},
  booktitle = {AAAI},
  year = {2020}
}