EnlightenGAN: Deep Light Enhancement without Paired Supervision
pip install -r requirement.txt
mkdir model
Download the VGG pretrained model from [Google Drive 1] or [2], and then put it into the directory model.
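A quick way to confirm the weights are in place before training (a minimal sketch; the exact filename depends on which mirror you downloaded, so it only checks that model/ is non-empty):

from pathlib import Path

# The training scripts expect the pretrained VGG weights inside ./model.
weights = list(Path("model").glob("*"))
if not weights:
    raise FileNotFoundError("./model is empty -- download the VGG pretrained model first.")
print("Found pretrained weight file(s):", [p.name for p in weights])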
Before starting the training process, launch visdom.server for visualization:
nohup python -m visdom.server -port=8097
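Once the server is running, training curves and sample images can be viewed at http://localhost:8097. The snippet below is an optional sanity check (assuming the visdom package from requirement.txt) that the server is reachable before you start training:

from visdom import Visdom

# Connect to the visdom server started above (the port must match -port=8097).
viz = Visdom(port=8097)
if viz.check_connection():
    viz.text("EnlightenGAN: visdom server is reachable.")
else:
    print("Could not reach visdom on port 8097 -- is the server running?")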
Then start training with:
python scripts/script.py --train
To run prediction with the trained model:
python scripts/script.py --predict
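After prediction finishes, you can eyeball an enhanced result next to its low-light input with a few lines of PIL. The paths below are placeholders only; the actual input and output locations depend on your dataset layout and the options set in scripts/script.py:

from PIL import Image

# Placeholder paths -- replace with an actual test image and its enhanced output.
low_light = Image.open("test_dataset/testA/example.png")
enhanced = Image.open("results/example_enhanced.png")

# Paste the two images side by side for a quick visual comparison.
canvas = Image.new("RGB", (low_light.width + enhanced.width, max(low_light.height, enhanced.height)))
canvas.paste(low_light, (0, 0))
canvas.paste(enhanced, (low_light.width, 0))
canvas.save("comparison.png")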
Training data [Google Drive] (unpaired images collected from multiple datasets)
Testing data [Google Drive] (including LIME, MEF, NPE, VV, DICM)
If you find this work useful, please cite:
@article{jiang2019enlightengan,
  title={EnlightenGAN: Deep Light Enhancement without Paired Supervision},
  author={Jiang, Yifan and Gong, Xinyu and Liu, Ding and Cheng, Yu and Fang, Chen and Shen, Xiaohui and Yang, Jianchao and Zhou, Pan and Wang, Zhangyang},
  journal={arXiv preprint arXiv:1906.06972},
  year={2019}
}