to be published.
- Python 3.7
- NVIDIA GPU + CUDA 10.1 + cuDNN
- PyTorch 1.8.1
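A quick way to confirm the environment matches these requirements (a minimal sketch, assuming a standard PyTorch install with CUDA support):

```python
import torch

# Quick sanity check against the requirements above.
print("PyTorch:", torch.__version__)         # expected: 1.8.1
print("CUDA build:", torch.version.cuda)     # expected: 10.1
print("cuDNN:", torch.backends.cudnn.version())
print("GPU available:", torch.cuda.is_available())
```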
- Releasing evaluation code.
- Releasing inference code.
- Releasing pre-trained weights.
- Releasing training code.
We use the Places2, CelebA-HQ, and Paris Street-View datasets. For testing, we use the 12k irregular masks provided by Liu et al.
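As a rough illustration of how a test image is paired with one of these irregular masks, here is a minimal sketch; the file paths are placeholders, and the convention that white mask pixels mark the missing region is an assumption, not something fixed by this repository.

```python
import numpy as np
from PIL import Image

# Placeholder paths: any image from the test split and any of the 12k irregular masks.
img = np.asarray(Image.open("example_image.png").convert("RGB"), dtype=np.float32) / 255.0
mask = np.asarray(Image.open("example_mask.png").convert("L"), dtype=np.float32) / 255.0

# Assumed convention: mask value 1 marks a missing (to-be-inpainted) pixel.
mask = (mask > 0.5).astype(np.float32)[..., None]

# The masked image is what an inpainting model typically receives as input.
masked_img = img * (1.0 - mask)
```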
- Train the model
to be published.
- Test the model (see the illustrative sketch below)
to be published.
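Since the actual training and test commands are not released yet, the sketch below only illustrates a typical inpainting inference step (masked image and mask in, completed image out, with known pixels composited back). The model interface and normalization are assumptions, not this repository's API.

```python
import torch

@torch.no_grad()
def inpaint_one(net, image, mask):
    """Illustrative inference step for a generic inpainting network.

    image: (1, 3, H, W) tensor in [-1, 1]; mask: (1, 1, H, W), with 1 marking holes.
    """
    masked = image * (1 - mask)                    # remove the pixels to be filled
    pred = net(torch.cat([masked, mask], dim=1))   # assumed: network takes image + mask channels
    # Keep the prediction only inside the hole and copy the known pixels back.
    return pred * mask + image * (1 - mask)
```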
to be published.
This code is based on LGNet, and the evaluation code is borrowed from TFill. Please consider citing their papers.
@ARTICLE{9730792,
author={Quan, Weize and Zhang, Ruisong and Zhang, Yong and Li, Zhifeng and Wang, Jue and Yan, Dong-Ming},
journal={IEEE Transactions on Image Processing},
title={Image Inpainting With Local and Global Refinement},
year={2022},
volume={31},
pages={2405-2420}
}
@InProceedings{Zheng_2022_CVPR,
author = {Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei and Phung, Dinh},
title = {Bridging Global Context Interactions for High-Fidelity Image Completion},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {11512-11522}
}