Default setting for reproducing the result in your paper #9
@daib13 Hi Bin, I would really appreciate it if you could tell me the exact commands you used to generate the Table 1 results.
Hi @mmderakhshani, sorry for the late reply. For CelebA, we just use the default settings in the repository; you can run the training script with its default arguments.
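(The exact invocation is not quoted above. A guess, modeled on the cifar10 command quoted later in this thread and not confirmed by the author, would be simply

python demo.py --dataset celeba

with every other flag left at the demo.py defaults.)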
@daib13 Thanks for your reply. Regarding Table 2, did you calculate those scores with the ResNet architecture, or is it similar to Table 1 and calculated using the InfoGAN architecture?
@mmderakhshani Table 2 is computed on the WAE network, which is defined in TwoStageVAE/network/two_stage_vae_model.py (line 194, commit 8718623). We exactly follow the training protocol of the WAE paper; you can reproduce the result with the corresponding command.
To calculate the FID score, we use the standard inception features for both Table 1 and Table 2, which is also consistent with most previous work. The model is defined in https://github.com/openai/improved-gan/blob/master/inception_score/model.py. You can check how we calculate the FID score at line 213 (commit 8718623).
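(For anyone trying to sanity-check their own numbers: below is a minimal sketch of the Fréchet-distance step of FID, computed from inception features. The feature extraction itself, and the function name, are assumptions for illustration; only the distance formula is standard, and this is not the repository's actual code.)

```python
import numpy as np
from scipy import linalg

def frechet_distance(feat_real, feat_fake):
    """FID between two sets of inception features, each an (N, D) array.

    Assumes the features were already extracted with the inception model
    referenced above; the extraction step is not shown here.
    """
    mu1, mu2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    sigma1 = np.cov(feat_real, rowvar=False)
    sigma2 = np.cov(feat_fake, rowvar=False)

    # Matrix square root of sigma1 @ sigma2; numerical error can introduce
    # a small imaginary component, so keep only the real part.
    covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu1 - mu2
    return diff.dot(diff) + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```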
Hello, I really enjoyed reading your article. It contains many interesting observations, both theoretical and empirical. I ran

python demo.py --dataset cifar10 --epochs 1000 --lr-epochs 300 --epochs2 2000 --lr-epochs2 600

I believe the above configuration is exactly the same as Appendix D, but even after saving and reloading the models, the numbers I get are higher than the ones reported in Table 1. Also, I've found a slight difference between Figure 16 in the arXiv version and the implementation: the implementation uses global average pooling instead of a flatten layer, which seems to be a minor difference. Thanks in advance!
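(The difference mentioned above is only in how the final convolutional feature map is turned into a vector before the dense layer. A small shape-only illustration, with made-up dimensions rather than the actual TwoStageVAE architecture:)

```python
import numpy as np

# A fake batch of feature maps: (batch, height, width, channels).
x = np.random.randn(8, 4, 4, 64)

# Flatten layer (as drawn in Figure 16): keeps every spatial position,
# so the following dense layer sees 4*4*64 = 1024 features per example.
flattened = x.reshape(x.shape[0], -1)   # shape (8, 1024)

# Global average pooling (as in the implementation): averages over the
# spatial dimensions, so the dense layer sees only 64 features per example.
pooled = x.mean(axis=(1, 2))            # shape (8, 64)

print(flattened.shape, pooled.shape)    # (8, 1024) (8, 64)
```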
Hi. I really appreciate your work. I have implemented your paper in PyTorch and am now trying to reproduce the results in Table 1. May I ask what default settings you used to train the Two-Stage VAE on CelebA and CIFAR-10? In your paper you refer to another paper that introduces some of the hyperparameter settings, but not all of them, and I think they are incomplete.