
AEGAN

PyTorch implementation of “Auto-Embedding Generative Adversarial Networks for High Resolution Image Synthesis” (IEEE Transactions on Multimedia, 2019).

Dependencies

Python 2.7

PyTorch

Dataset

In our paper, we train our model on five datasets separately, so that it can synthesize images from each of the corresponding domains.

Training

  • Train AEGAN on the Oxford-102 Flowers dataset:
python train.py --dataset flowers --dataroot your_images_folder --batchSize 16 --imageSize 512 --niter_stage1 100 --niter_stage2 1000 --cuda --outf your_images_output_folder --gpu 3
  • To train the model on Caltech-UCSD Birds (CUB), Large-scale CelebFaces Attributes (CelebA), Large-scale Scene Understanding (LSUN), or your own dataset, just replace the corresponding hyperparameters (a sketch of the assumed data layout follows this list):
python train.py --dataset name_of_dataset --dataroot path_of_dataset
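
The repository does not spell out how your_images_folder should be organized. As a minimal sketch, assuming train.py reads images with a torchvision ImageFolder-style loader (an assumption on our side, not something this repo confirms), you can prepare and sanity-check your data like this:

```python
import torchvision.datasets as dset
import torchvision.transforms as transforms

# Assumed layout (hypothetical; adjust to whatever train.py actually expects):
#   your_images_folder/
#       some_class/img_0001.jpg
#       some_class/img_0002.jpg
#       ...
dataset = dset.ImageFolder(
    root="your_images_folder",
    transform=transforms.Compose([
        transforms.Resize(512),       # matches --imageSize 512
        transforms.CenterCrop(512),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]),
)
print(len(dataset), dataset[0][0].shape)  # image count, torch.Size([3, 512, 512])
```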

The details of the model

To synthesize a 32x32x64 embedding, we use a generator GE (left) and a discriminator DE (right), each with four convolution layers.
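
The figure is not reproduced in this README, so here is a minimal PyTorch sketch of one plausible GE/DE pair. The layer widths and the noise dimension nz are illustrative choices, not the paper's exact configuration; only the four-layer depth and the 32x32x64 embedding shape follow the text above.

```python
import torch
import torch.nn as nn

nz = 100  # hypothetical noise dimension, not specified above

# GE: four transposed-convolution layers mapping noise to a 64x32x32 embedding.
GE = nn.Sequential(
    nn.ConvTranspose2d(nz, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),   # -> 512x4x4
    nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),  # -> 256x8x8
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),  # -> 128x16x16
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.Tanh(),                            # -> 64x32x32
)

# DE: four convolution layers mapping a 64x32x32 embedding to a real/fake score.
DE = nn.Sequential(
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),                        # -> 128x16x16
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),  # -> 256x8x8
    nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),  # -> 512x4x4
    nn.Conv2d(512, 1, 4, 1, 0), nn.Sigmoid(),                                    # -> 1x1x1
)

z = torch.randn(16, nz, 1, 1)
embedding = GE(z)      # (16, 64, 32, 32), i.e. a 32x32x64 embedding per sample
score = DE(embedding)  # (16, 1, 1, 1)
```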

The structure of the auto-encoder model contains an encoder H (right) and a decoder F (left).
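
Again as a rough sketch rather than this repo's actual code: the encoder H compresses a high-resolution image into the 64x32x32 embedding, and the decoder F maps it back. With 512x512 inputs (--imageSize 512), four stride-2 layers in each direction give the right shapes; the channel widths here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# H: image -> embedding (3x512x512 -> 64x32x32 via four stride-2 convolutions).
H = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),                       # -> 32x256x256
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, True),  # -> 64x128x128
    nn.Conv2d(64, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, True),  # -> 64x64x64
    nn.Conv2d(64, 64, 4, 2, 1), nn.Tanh(),                                    # -> 64x32x32
)

# F: embedding -> image (64x32x32 -> 3x512x512 via four stride-2 deconvolutions).
F = nn.Sequential(
    nn.ConvTranspose2d(64, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),   # -> 64x64x64
    nn.ConvTranspose2d(64, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),   # -> 64x128x128
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),   # -> 32x256x256
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                            # -> 3x512x512
)

x = torch.randn(1, 3, 512, 512)
x_rec = F(H(x))  # (1, 3, 512, 512): reconstruction through the embedding
```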

The structure of the denoiser network includes an encoder-decoder network (left) and a discriminator DR (right).
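
Finally, a hedged sketch of how such a denoiser could be trained: the encoder-decoder network refines the decoded image and the discriminator DR scores the result, so a natural objective combines an adversarial term with a pixel reconstruction term. The loss form, the names R and denoiser_loss, and the weight lam are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as Fn

def denoiser_loss(R, DR, x_decoded, x_real, lam=10.0):
    """Illustrative objective for a denoiser R against discriminator DR.

    R and DR are nn.Module instances; x_decoded is the (blurry/noisy)
    decoder output and x_real the corresponding real image. `lam` is a
    hypothetical weight, not a value taken from the paper.
    """
    x_refined = R(x_decoded)
    # Adversarial term: push DR to score the refined image as real.
    score = DR(x_refined)
    adv = Fn.binary_cross_entropy(score, torch.ones_like(score))
    # Reconstruction term: keep the refinement close to the real image.
    rec = Fn.l1_loss(x_refined, x_real)
    return adv + lam * rec
```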

Citation

If you use any part of this code in your research, please cite our paper:

@article{guo2019auto,
  title={Auto-Embedding Generative Adversarial Networks for High Resolution Image Synthesis},
  author={Guo, Yong and Chen, Qi and Chen, Jian and Wu, Qingyao and Shi, Qinfeng and Tan, Mingkui},
  journal={IEEE Transactions on Multimedia},
  year={2019},
  publisher={IEEE}
}
