BIR-D: Taming Generative Diffusion Prior for Universal Blind Image Restoration (NeurIPS'24)

In this work, we use a denoising diffusion probabilistic model (DDPM) to learn the prior distribution of natural images and, with it, solve both non-blind and blind problems across a wide range of image restoration tasks.

Siwei Tu¹, Weidong Yang¹†, Ben Fei²†
¹Fudan University  ²Chinese University of Hong Kong
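
As a rough illustration of the idea above, the sketch below shows a generic gradient-guided DDPM sampling loop: at each reverse step, a clean-image estimate is nudged toward consistency with the degraded observation. This is a conceptual sketch of diffusion-prior-guided restoration, not the exact BIR-D update rule; the toy denoiser, the box-blur degradation operator, and the guidance scale are placeholder assumptions for illustration only.

# Conceptual sketch: gradient-guided DDPM sampling for restoration (illustrative only).
import torch
import torch.nn.functional as F

T = 1000  # matches --diffusion_steps 1000 used in the commands below
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def toy_denoiser(x_t, t):
    """Stand-in for a pre-trained DDPM that predicts the noise in x_t."""
    return torch.zeros_like(x_t)  # a real model would be loaded from a checkpoint

def degrade(x):
    """Hypothetical known/estimated degradation operator (here: a 2x box blur)."""
    return F.avg_pool2d(x, 2)

@torch.no_grad()
def restore(y, guidance_scale=1.0, shape=(1, 3, 256, 256)):
    x_t = torch.randn(shape)
    for t in reversed(range(T)):
        a_bar = alpha_bars[t]
        eps = toy_denoiser(x_t, t)
        # Estimate the clean image x0 from the current noisy sample x_t.
        x0_hat = (x_t - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
        # Data-fidelity guidance: pull x0_hat toward consistency with the observation y.
        with torch.enable_grad():
            x0_var = x0_hat.detach().requires_grad_(True)
            loss = F.mse_loss(degrade(x0_var), y)
            grad = torch.autograd.grad(loss, x0_var)[0]
        x0_hat = x0_hat - guidance_scale * grad
        # Standard DDPM ancestral step using the guided x0 estimate.
        if t > 0:
            a_bar_prev = alpha_bars[t - 1]
            mean = (a_bar_prev.sqrt() * betas[t] / (1 - a_bar)) * x0_hat \
                 + (alphas[t].sqrt() * (1 - a_bar_prev) / (1 - a_bar)) * x_t
            var = (1 - a_bar_prev) / (1 - a_bar) * betas[t]
            x_t = mean + var.sqrt() * torch.randn_like(x_t)
        else:
            x_t = x0_hat
    return x_t

# Hypothetical usage: y is a degraded observation at half the target resolution.
# restored = restore(torch.rand(1, 3, 128, 128))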

♦️ Checkpoints and Dataset

For real-world datasets, the degradation must be applied in advance and the degraded images packed into a .npz file.
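
As a minimal sketch of that packing step, the snippet below stacks pre-degraded images into a single .npz archive. The directory layout, the image size, and the array key (NumPy's default arr_0) are assumptions for illustration; match whatever format the loading scripts actually expect.

# Pack pre-degraded RGB images into one .npz file (illustrative sketch).
import numpy as np
from pathlib import Path
from PIL import Image

def pack_degraded_images(image_dir, out_path, size=256):
    """Stack degraded images into an N x H x W x C uint8 array and save it as .npz."""
    arrays = []
    for path in sorted(Path(image_dir).glob("*.png")):
        img = Image.open(path).convert("RGB").resize((size, size))
        arrays.append(np.asarray(img, dtype=np.uint8))
    np.savez(out_path, np.stack(arrays))  # stored under NumPy's default key "arr_0"

# Hypothetical usage:
# pack_degraded_images("degraded_inputs/", "degraded_inputs.npz")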

Tasks

The code released here corresponds to the October version.

🔥Blind Image Restoration

MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python deblurring.py \
$MODEL_FLAGS \
--save_dir [Path for storing the output results] \
--base_samples [Path to the .npz file of the downloaded ImageNet dataset]

🔥Blind Face Restoration / Motion Blur Reduction

MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python blind_image_restoration.py \
$MODEL_FLAGS \
--save_dir [Path for storing the output results] \
--base_samples [Path to the blind image restoration dataset]

🔥Multi-Degradation Image Restoration

MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python multi_restoration.py \
$MODEL_FLAGS \
--save_dir [Path for storing the output results] \
--base_samples [Path to the multi-degradation dataset]

🔥Low-light Enhancement

MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python lowlight.py \
$MODEL_FLAGS \
--save_dir [Path for storing the output results] \
--base_samples [Path to the low-light enhancement dataset]

👏 Acknowledgement

The authors would like to thank Zhaoyang Lyu for his technical assistance. This work was supported by the National Natural Science Foundation of China (U2033209).

Our paper is inspired by a number of prior works. Thanks for their awesome work!
