Deep 3d Portrait from a Single Image (CVPR2020)

Presentation

Summary

This repository is forked from sicxu/Deep3dPortrait.

Use a custom image to generate 3D portrait objects with various expressions:

git clone https://github.com/lylajeon/Deep3dPortrait Deep3dPortrait
cd Deep3dPortrait
git clone https://github.com/kingsj0405/Face-Landmark-Parsing face-parsing.PyTorch
cd ..

Copy all files from outputs/step4 into step5_ui_expression&pose_change/result.
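A minimal sketch of this copy step in Python; the source and destination directories are the ones named above, and the script itself is not part of the repository:

import shutil
from pathlib import Path

src = Path("outputs/step4")
dst = Path("step5_ui_expression&pose_change/result")
dst.mkdir(parents=True, exist_ok=True)

# Copy every file produced by step 4 into the UI tool's result folder
for f in src.iterdir():
    if f.is_file():
        shutil.copy2(f, dst / f.name)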

python integrated_process.py --input_img [IMG_NAME]
cd "step5_ui_expression&pose_change"
chmod +x run.sh
./run.sh

Original README.md of Deep 3d Portrait from a Single Image (CVPR2020)

This is a TensorFlow implementation of the following paper: Deep 3d Portrait from a Single Image. We propose a two-step geometry learning scheme that first learns 3DMM face reconstruction from single images and then learns to estimate hair and ear depth in a stereo setup.

Getting Started

System Requirements

  • Software: Ubuntu 16.04, CUDA 9.0
  • Python >= 3.5

Usage

  1. Clone the repository
git clone https://github.com/sicxu/Deep3dPortrait.git
cd Deep3dPortrait
pip install -r requirements.txt
  2. Follow the instructions in Deep3DFaceReconstruction to prepare the BFM folder
  3. Download the pretrained face reconstruction model and depth estimation model, then put the pb files into the model folder (a loading sketch follows this list).
  4. Run the following steps.
python step1_recon_3d_face.py
python step2_face_segmentation.py
python step3_get_head_geometry.py
python step4_save_obj.py
  5. To check the results, see the ./output subfolders, which contain the results of the corresponding steps.
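As a quick sanity check for step 3, the frozen graphs can be loaded before running the pipeline. This is a minimal sketch assuming a TensorFlow 1.x environment (as implied by the CUDA 9.0 requirement); the pb filename used here is a placeholder, not the repository's actual name:

import tensorflow as tf

def load_frozen_graph(pb_path):
    # Read the serialized GraphDef and import it into a fresh graph
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

# "model/face_recon.pb" is a hypothetical name for one of the downloaded pb files
graph = load_frozen_graph("model/face_recon.pb")
print(len(graph.get_operations()), "ops loaded")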

Others

  • Image pre-alignment is necessary for face reconstruction. We recommend using Bulat et al.'s method to get facial landmarks (3D definition); a sketch follows this list. Masks of the face, hair, and ears are also needed as input to the depth estimation network; we recommend Lin et al.'s method for semantic segmentation.
  • The face reconstruction code is heavily borrowed from Deep3DFaceReconstruction.
  • The rendering code is modified from tf_mesh_render. Note that the renderer we compiled does not support other TensorFlow versions and can only be used on Linux.
  • The manipulation code will not be released. If you want to compare against our method, please use the results in our paper, or contact me (sicheng_xu@yeah.net) for more comparisons.
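A minimal sketch of the landmark step using the face_alignment package released by Bulat et al.; the class and enum names assume the package's early 1.x API and may differ in newer releases, and the image path is a placeholder:

import face_alignment
from skimage import io

# 3D landmark definition, matching the "(3D definition)" note above
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, flip_input=False)

image = io.imread("input/example.jpg")  # placeholder path
landmarks = fa.get_landmarks(image)     # list of (68, 3) arrays, one per detected face
print(landmarks[0].shape)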

Citation

If you find this code helpful for your research, please cite our paper.
