# Partition-Guided GANs

CVPR | Arxiv | Video
Mohammad Reza Armandpour*,
Ali Sadeghian*,
Chunyuan Li,
Mingyuan Zhou
CVPR 2021
PGMGAN, our fully unsupervised image generation model, learns to partition the space based on semantic similarity and to generate images from each partition, reducing both mode collapse and mode connecting. We propose a novel partitioner/guide method that is guaranteed to provide direction to the generators, leading each one to its designated region.
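As a rough conceptual illustration only (not the paper's actual partitioner, which is learned), partitioning by semantic similarity can be pictured as assigning each image embedding to its nearest region centroid, with the resulting partition id conditioning the generator:

```python
import numpy as np

def assign_partitions(embeddings, centroids):
    """Assign each embedding to its nearest centroid (its partition id).
    Illustrative sketch only: PGMGAN's partitioner is learned, not k-means."""
    # pairwise squared distances, shape (N, K)
    d2 = ((embeddings[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))      # embeddings of 6 images
cen = rng.normal(size=(3, 4))      # 3 partition centroids
ids = assign_partitions(emb, cen)  # one partition id per image
print(ids)
```

Each generator would then be trained (and guided) only toward its own partition's region of the data space.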
- Clone this repo:

```shell
git clone https://github.com/alisadeghian/PGMGAN.git
cd PGMGAN
```
- Install the dependencies:

```shell
conda create --name PGMGAN python=3.7
conda activate PGMGAN
conda install --file requirements.txt
conda install -c conda-forge tensorboardx
```
- Train a model on CIFAR:

```shell
python train.py configs/cifar/scan_guide_biggan.yaml
```
- Visualize samples and inferred clusters:

```shell
python visualize_clusters.py configs/cifar/scan_guide_biggan.yaml --show_clusters
```

The samples and clusters will be saved to `output/cifar/scan_guide_biggan/clusters`.
- Evaluate the model's FID. First, gather a set of ground-truth train-set images to compute metrics against:

```shell
python utils/get_gt_imgs.py --cifar
```

Then run the evaluation script:

```shell
python metrics.py configs/cifar/scan_guide_biggan.yaml --fid --every -1
```

You can also evaluate with other metrics by appending additional flags, such as Inception Score (`--inception`), the number of covered modes and reverse-KL divergence (`--modes`), and cluster metrics (`--cluster_metrics`).
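For intuition on what the `--fid` flag measures: FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. A minimal NumPy sketch of that distance (the repo's actual computation uses the TTUR code, as noted below):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    # Tr((S1 S2)^{1/2}) equals the sum of square roots of the eigenvalues
    # of S1 @ S2, which are real and non-negative for symmetric PSD S1, S2.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2 * covmean_trace

# Fit Gaussians to two hypothetical feature sets and compare them.
rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))
fake = rng.normal(loc=0.5, size=(1000, 8))
mu_r, sig_r = real.mean(0), np.cov(real, rowvar=False)
mu_f, sig_f = fake.mean(0), np.cov(fake, rowvar=False)
print(frechet_distance(mu_r, sig_r, mu_f, sig_f))
```

Identical feature distributions give a distance of (numerically) zero; larger values mean the generated feature statistics are further from the real ones.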
We apologize for the inconvenience: we lost access to the server where the pretrained models were stored, and will re-run and upload them soon. EDIT: CIFAR models added.
You can download pretrained models on CIFAR from here and place them in the `output/cifar/scan_guide_biggan/chkpts/` directory.
To reproduce the results in the paper, use the following command:

```shell
python metrics.py configs/cifar/scan_guide_biggan.yaml --fid --every -1
```
To visualize generated samples and inferred clusters, run

```shell
python visualize_clusters.py config-file
```

You can set the flag `--show_clusters` to also visualize the real inferred clusters, but this requires a path to the training-set images.
To obtain generation metrics, fill in the paths to your ImageNet or Places dataset directories in `utils/get_gt_imgs.py` and then run

```shell
python utils/get_gt_imgs.py --imagenet --places
```

to precompute batches of ground-truth images for FID/FSD evaluation.
Then, you can run

```shell
python metrics.py config-file
```

with the appropriate flags to compute the FID (`--fid`), FSD (`--fsd`), IS (`--inception`), number of covered modes and reverse-KL divergence (`--modes`), and clustering metrics (`--cluster_metrics`) for each of the checkpoints.
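As a hypothetical illustration of the kind of measure a clustering-metrics flag might report (the exact metrics computed by `--cluster_metrics` are defined in the repo's code), cluster purity scores inferred clusters against ground-truth labels by the fraction of samples that fall in their cluster's majority class:

```python
from collections import Counter

def purity(pred_clusters, true_labels):
    """Fraction of samples belonging to the majority true label of their cluster."""
    clusters = {}
    for c, y in zip(pred_clusters, true_labels):
        clusters.setdefault(c, []).append(y)
    # For each cluster, count how many samples carry its most common true label.
    majority = sum(Counter(ys).most_common(1)[0][1] for ys in clusters.values())
    return majority / len(true_labels)

print(purity([0, 0, 1, 1], ["cat", "cat", "dog", "cat"]))  # 0.75
```

A purity of 1.0 means every inferred cluster is label-pure; values near 1/num_classes indicate clusters uncorrelated with the labels.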
This code is heavily based on the GAN-stability and self-cond-gan codebases. Our FSD code is taken from the GANseeing work. To compute the Inception Score, we use the code provided by Shichang Tang. To compute FID, we use the code provided by TTUR. We also use pretrained classifiers from the pytorch-playground.
We thank all the authors for their useful code.
If you use this code for your research, please cite the following work.
```bibtex
@inproceedings{armandpour2021partition,
  title={Partition-Guided GANs},
  author={Armandpour, Mohammadreza and Sadeghian, Ali and Li, Chunyuan and Zhou, Mingyuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5099--5109},
  year={2021}
}
```