
PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance

¹S-Lab, Nanyang Technological University  ²SenseTime Research, Singapore

🚩 Accepted to NeurIPS 2023

[arXiv]

PGDiff builds a versatile framework that is applicable to a broad range of face restoration tasks.

If you find PGDiff helpful to your projects, please consider ⭐ this repo. Thanks! 😉

📕 Supported Applications
  • Blind Restoration
  • Colorization
  • Inpainting
  • Reference-based Restoration
  • Old Photo Restoration (w/ scratches)
  • [TODO] Natural Image Restoration

📮 Updates

    • 2023.12.04: Add an option to speed up the inference process by adjusting the number of denoising steps.
    • 2023.10.10: Release our codes and models. Have fun! ๐Ÿ˜‹
    • 2023.08.16: This repo is created.

    โ™ฆ๏ธ Installation

    Codes and Environment

    # git clone this repository
    git clone https://github.com/pq-yang/PGDiff.git
    cd PGDiff
    
    # create new anaconda env
    conda create -n pgdiff python=3.8 -y
    conda activate pgdiff
    
    # install python dependencies
    conda install mpi4py
    pip3 install -r requirements.txt
    pip install -e .
    

    Pretrained Model

    Download the pretrained face diffusion model from [Google Drive | BaiduPan (pw: pgdf)] to the models folder (credit to DifFace).
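The checkpoint just needs to sit in the models folder; a minimal sketch (the file name is a placeholder; keep whatever name the download provides):

mkdir -p models
# place the downloaded face diffusion checkpoint here, e.g.
# mv ~/Downloads/<face_diffusion_model>.pth models/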

🎪 Applications

🚀 🚀 If you want to speed up the inference process, you may shorten the number of DDPM denoising steps by specifying a smaller --timestep_respacing argument (e.g., --timestep_respacing ddpm200 for 200 timesteps). We recommend at least 100 steps for a decent result.
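For example, the blind restoration demo below can be shortened to 200 steps like this (the output folder name is arbitrary):

# ddpm200 = 200 denoising steps instead of the full schedule
python inference_pgdiff.py --task restoration --in_dir testdata/cropped_faces --out_dir results/blind_restoration_ddpm200 --guidance_scale 0.05 --timestep_respacing ddpm200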

    Blind Restoration

To extract smooth semantics from the input images, download the pretrained restorer from [Google Drive | BaiduPan (pw: pgdf)] to the models/restorer folder. The pretrained restorer provided here is modified from the $\times 1$ generator of Real-ESRGAN. Note that the pretrained restorer can also be flexibly replaced with other restoration models by modifying the create_restorer function and specifying your own --restorer_path accordingly.
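For instance, once create_restorer has been adapted to load your model, the call looks like this (the checkpoint path below is a hypothetical placeholder):

# swap in a custom restorer checkpoint (hypothetical path)
python inference_pgdiff.py --task restoration --in_dir testdata/cropped_faces --out_dir results/blind_restoration_custom --restorer_path models/restorer/my_restorer.pth --guidance_scale 0.05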

    Commands

Guidance scale $s$ for blind face restoration (BFR) is generally taken from [0.05, 0.1]. A smaller $s$ tends to produce a higher-quality result, while a larger $s$ yields a higher-fidelity result.
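To see the trade-off directly, you can run both ends of the range on the same inputs and compare (output folder names below are arbitrary):

# smaller s: favors quality
python inference_pgdiff.py --task restoration --in_dir testdata/cropped_faces --out_dir results/bfr_s005 --guidance_scale 0.05

# larger s: favors fidelity to the input
python inference_pgdiff.py --task restoration --in_dir testdata/cropped_faces --out_dir results/bfr_s010 --guidance_scale 0.1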

    For cropped and aligned faces (512x512):

    python inference_pgdiff.py --task restoration --in_dir [image folder] --out_dir [result folder] --restorer_path [restorer path] --guidance_scale [s] --timestep_respacing ddpm[steps]
    

    Example:

    python inference_pgdiff.py --task restoration --in_dir testdata/cropped_faces --out_dir results/blind_restoration --guidance_scale 0.05
    

    Colorization

We provide a set of color statistics in the adaptive_instance_normalization function as the default colorization style. You may change the colorization style by running the script scripts/color_stat_calculation.py to obtain target statistics (avg mean & avg std) and replacing those in the adaptive_instance_normalization function.
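A sketch of that workflow is below; the script's command-line arguments are not documented in this README, so the flag shown is an assumption (check the script's argparse setup for the real interface):

# 1. compute target statistics (avg mean & avg std) from images in the desired style
#    (--in_dir is an assumed flag name)
python scripts/color_stat_calculation.py --in_dir [style image folder]

# 2. paste the resulting statistics into adaptive_instance_normalization
#    in guided_diffusion/script_util.py, then run colorization as usual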

    Commands

    For cropped and aligned faces (512x512):

    python inference_pgdiff.py --task colorization --in_dir [image folder] --out_dir [result folder] --lightness_weight [w_l] --color_weight [w_c] --guidance_scale [s] --timestep_respacing ddpm[steps]
    

    Example:

🌈 Try different color styles for various outputs!

    # style 0 (default)
    python inference_pgdiff.py --task colorization --in_dir testdata/grayscale_faces --out_dir results/colorization --guidance_scale 0.01
    
# style 1 (uncomment lines 272-273 in `guided_diffusion/script_util.py`)
    python inference_pgdiff.py --task colorization --in_dir testdata/grayscale_faces --out_dir results/colorization_style1 --guidance_scale 0.01
    
# style 3 (uncomment lines 278-279 in `guided_diffusion/script_util.py`)
    python inference_pgdiff.py --task colorization --in_dir testdata/grayscale_faces --out_dir results/colorization_style3 --guidance_scale 0.01
    

    Inpainting

A mask folder mask_dir must be specified, with each mask image name matching that of its input (masked) image. Each input mask should be a binary map with white pixels representing masked regions (refer to testdata/append_masks). We also provide a script scripts/irregular_mask_gen.py to randomly generate irregular stroke masks on input images.

    โ— Note: If you don't specify mask_dir, we will automatically treat the input image as if there are no missing pixels.

    Commands

    For cropped and aligned faces (512x512):

    python inference_pgdiff.py --task inpainting --in_dir [image folder] --mask_dir [mask folder] --out_dir [result folder] --unmasked_weight [w_um] --guidance_scale [s] --timestep_respacing ddpm[steps]
    

    Example:

🌈 Try different seeds for various outputs!

    python inference_pgdiff.py --task inpainting --in_dir testdata/masked_faces --mask_dir testdata/append_masks --out_dir results/inpainting --guidance_scale 0.01 --seed 4321
    

    Reference-based Restoration

    To extract identity features from both the reference image and the intermediate results, download the pretrained ArcFace model from [Google Drive | BaiduPan (pw: pgdf)] to the models folder.

A reference folder ref_dir must be specified, with each reference image name matching that of its input image. A reference image should be a high-quality image of the same identity as the input low-quality image. The test image pairs provided here are from the CelebRef-HQ dataset.
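Concretely, inputs and references are paired by file name; a hypothetical layout:

# paired by identical file names (names are illustrative)
# testdata/ref_cropped_faces/0001.png   <- low-quality input
# testdata/ref_faces/0001.png           <- high-quality reference of the same identity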

    Commands

Similar to blind face restoration, reference-based restoration requires tuning the guidance scale $s$ according to the input quality; it is generally taken from [0.05, 0.1].

    For cropped and aligned faces (512x512):

    python inference_pgdiff.py --task ref_restoration --in_dir [image folder] --ref_dir [reference folder] --out_dir [result folder] --ss_weight [w_ss] --ref_weight [w_ref] --guidance_scale [s] --timestep_respacing ddpm[steps]
    

    Example:

    # Choice 1: MSE Loss (default)
    python inference_pgdiff.py --task ref_restoration --in_dir testdata/ref_cropped_faces --ref_dir testdata/ref_faces --out_dir results/ref_restoration_mse --guidance_scale 0.05 --ref_weight 25
    
# Choice 2: Cosine Similarity Loss (uncomment lines 71-72)
    python inference_pgdiff.py --task ref_restoration --in_dir testdata/ref_cropped_faces --ref_dir testdata/ref_faces --out_dir results/ref_restoration_cos --guidance_scale 0.05 --ref_weight 1e4
    

    Old Photo Restoration

• If scratches exist, a mask folder mask_dir must be specified, with the name of each mask image matching that of its input image. Each input mask should be a binary map with white pixels representing masked regions. To obtain a scratch map automatically, we recommend using the scratch detection model from Bringing Old Photos Back to Life. One may also generate or adjust the scratch map with an image editing app (e.g., Photoshop).

• If scratches don't exist, leave the mask_dir argument as None (default). If you don't specify mask_dir, the input image is automatically treated as having no missing pixels.

    Commands

    For cropped and aligned faces (512x512):

    python inference_pgdiff.py --task old_photo_restoration --in_dir [image folder] --mask_dir [mask folder] --out_dir [result folder] --op_lightness_weight [w_op_l] --op_color_weight [w_op_c] --guidance_scale [s] --timestep_respacing ddpm[steps]
    

    Demos

Old photo restoration is a more complex task (restoration + colorization + inpainting); like blind face restoration, it requires tuning the guidance scale $s$ according to the input quality. Generally, $s$ is taken from [0.0015, 0.005]. A smaller $s$ tends to produce a higher-quality result, while a larger $s$ yields a higher-fidelity result.

🔥 Degradation: Light
    # no scratches (don't specify mask_dir)
    python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/lg --out_dir results/op_restoration/lg --guidance_scale 0.004 --seed 4321
    

🔥🔥 Degradation: Medium
    # no scratches (don't specify mask_dir)
    python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/med --out_dir results/op_restoration/med --guidance_scale 0.002 --seed 1234
    
    # with scratches
    python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/med_scratch --mask_dir testdata/op_mask --out_dir results/op_restoration/med_scratch --guidance_scale 0.002 --seed 1111
    

🔥🔥🔥 Degradation: Heavy
    python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/hv --mask_dir testdata/op_mask --out_dir results/op_restoration/hv --guidance_scale 0.0015 --seed 4321
    

🌈 Customize your results with different color styles!

# style 1 (uncomment lines 272-273 in `guided_diffusion/script_util.py`)
    python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/hv --mask_dir testdata/op_mask --out_dir results/op_restoration/hv_style1 --guidance_scale 0.0015 --seed 4321
    
# style 2 (uncomment lines 275-276 in `guided_diffusion/script_util.py`)
    python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/hv --mask_dir testdata/op_mask --out_dir results/op_restoration/hv_style2 --guidance_scale 0.0015 --seed 4321
    

    ๐Ÿ‘ Citation

    If you find our work useful for your research, please consider citing:

    @inproceedings{yang2023pgdiff,
      title={{PGDiff}: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance},
      author={Yang, Peiqing and Zhou, Shangchen and Tao, Qingyi and Loy, Chen Change},
      booktitle={NeurIPS},
      year={2023}
    }
    

👉 License

    This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.

    ๐Ÿ‘ Acknowledgement

This study is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

    This implementation is based on guided-diffusion. We also adopt the pretrained face diffusion model from DifFace, the pretrained identity feature extraction model from ArcFace, and the restorer backbone from Real-ESRGAN. Thanks for their awesome works!

    โ˜Ž๏ธ Contact

    If you have any questions, please feel free to reach out at peiqingyang99@outlook.com.
