- video: YouTube link
- pdf: presentation.pdf
- final report: report link
This repository is forked from sicxu/Deep3dPortrait.
git clone https://github.com/lylajeon/Deep3dPortrait Deep3dPortrait
cd Deep3dPortrait
git clone https://github.com/kingsj0405/Face-Landmark-Parsing face-parsing.PyTorch
cd ..
Copy all files from outputs/step4 into step5_ui_expression&pose_change/result
python integrated_process.py --input_img [IMG_NAME]
cd step5_ui_expression&pose_change
chmod +x run.sh
./run.sh
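The copy step above can be scripted instead of done by hand; a minimal Python sketch (the source and destination paths come from the instruction above, while the helper name is ours):

```python
import shutil
from pathlib import Path

def copy_step4_outputs(src="outputs/step4",
                       dst="step5_ui_expression&pose_change/result"):
    """Copy every file produced by step 4 into the UI tool's result folder."""
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.iterdir():
        if f.is_file():
            # copy2 preserves timestamps along with the file contents
            shutil.copy2(f, dst / f.name)
```

Call `copy_step4_outputs()` from the repository root before running `integrated_process.py`.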
This is a TensorFlow implementation of the following paper: Deep 3D Portrait from a Single Image. We propose a two-step geometry learning scheme that first learns 3DMM face reconstruction from single images and then learns to estimate hair and ear depth in a stereo setup.
- Software: Ubuntu 16.04, CUDA 9.0
- Python >= 3.5
- Clone the repository
git clone https://github.com/sicxu/Deep3dPortrait.git
cd Deep3dPortrait
pip install -r requirements.txt
- Follow the instructions in Deep3DFaceReconstruction to prepare the BFM folder
- Download the pretrained face reconstruction model and depth estimation model, then put the .pb files into the model folder.
- Run the following steps.
python step1_recon_3d_face.py
python step2_face_segmentation.py
python step3_get_head_geometry.py
python step4_save_obj.py
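The four steps above can be chained in a small driver script; a sketch using subprocess (the script names come from the list above; `run_pipeline` is a hypothetical helper, not part of the repository):

```python
import subprocess
import sys

# The four pipeline scripts, in the order listed above.
STEPS = [
    "step1_recon_3d_face.py",
    "step2_face_segmentation.py",
    "step3_get_head_geometry.py",
    "step4_save_obj.py",
]

def run_pipeline(steps=STEPS):
    for script in steps:
        print(f"Running {script} ...")
        # check=True stops at the first failing step, so partial
        # outputs under ./output are easy to inspect.
        subprocess.run([sys.executable, script], check=True)
```

Run it from the repository root so the scripts and the BFM/model folders resolve correctly.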
- To check the results, see the ./output subfolders, which contain the results of the corresponding steps.
- Image pre-alignment is necessary for face reconstruction. We recommend using Bulat et al.'s method to get facial landmarks (3D definition). The depth estimation network also takes masks of the face, hair, and ears as input; we recommend using Lin et al.'s method for semantic segmentation.
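To illustrate what landmark-driven pre-alignment involves, the sketch below derives a square crop box from detected landmarks. This is a simplified stand-in, not the alignment used by the paper; the helper name and the `scale` default are assumptions:

```python
import numpy as np

def crop_box_from_landmarks(landmarks, scale=2.0):
    """Given an (N, 2) array of facial landmark coordinates, return a square
    crop box (x0, y0, x1, y1) centered on the landmarks and enlarged by `scale`."""
    lm = np.asarray(landmarks, dtype=np.float64)
    center = lm.mean(axis=0)
    # Half the larger landmark extent, scaled, gives the half-width of the box.
    half = 0.5 * scale * (lm.max(axis=0) - lm.min(axis=0)).max()
    x0, y0 = center - half
    x1, y1 = center + half
    return int(round(x0)), int(round(y0)), int(round(x1)), int(round(y1))
```

The box can then be clipped to the image bounds and used to crop and resize the input before landmark-based warping.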
- The face reconstruction code is heavily borrowed from Deep3DFaceReconstruction.
- The render code is modified from tf_mesh_render. Note that the renderer we compiled does not support other TensorFlow versions and can only be used on Linux.
- The manipulation code will not be released. If you want to make a comparison with our method, please use the results in our paper, or contact me (sicheng_xu@yeah.net) for more comparisons.
If you find this code helpful for your research, please cite our paper.