
Cannot reproduce the Inception score from trained model #9

Closed
ghost opened this issue Dec 24, 2019 · 7 comments

ghost commented Dec 24, 2019

Hi,
I have found that I can't reproduce the Inception score for 128×128 images, even though I used the model provided by the author.

I only get 9.8 instead of the 10.4 reported in the paper, and the number drops to 8.9 if I use sample_features=1 and sample features from features_clustered_100.npy as described in the paper.

By the way, I get 24.02 on the ground-truth images, which is close to the number reported in the paper. I didn't try 256×256.

For your convenience, I provide the commands and the steps below.

The steps:
1. Use sample_images.py to generate images for the val set:
python sample_images.py --checkpoint 'the path of model' --output 'path to save images'

2. Compute the Inception score with the code the author pointed to: here
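For reference, the score in step 2 is computed roughly like this; a minimal sketch assuming torchvision's Inception v3 and images already resized to 299×299, not the exact evaluation code linked above:

```python
import torch
import torch.nn.functional as F
from torchvision.models import inception_v3

@torch.no_grad()
def inception_score(images: torch.Tensor, splits: int = 10):
    """images: (N, 3, 299, 299) float tensor, normalized as Inception v3 expects.

    In practice you would feed the model in batches; this keeps the sketch short.
    """
    model = inception_v3(pretrained=True, transform_input=False).eval()
    preds = F.softmax(model(images), dim=1)        # p(y|x) for each image
    scores = []
    for chunk in preds.chunk(splits):
        p_y = chunk.mean(dim=0, keepdim=True)      # marginal p(y) over this split
        kl = (chunk * (chunk.log() - p_y.log())).sum(dim=1)  # KL(p(y|x) || p(y))
        scores.append(kl.mean().exp())             # IS = exp(E_x[KL])
    scores = torch.stack(scores)
    return scores.mean().item(), scores.std().item()
```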

ghost (Author) commented Dec 24, 2019

@ashual any idea about this?

ashual (Owner) commented Dec 27, 2019

@viperit it seems I messed up the dataset when refactoring the coco.py file.
I'll revert the changes soon.

ghost (Author) commented Dec 27, 2019

Thanks!
I have also opened the issue Also cannot reproduce the result said in paper train from scratch. Please let me know once you've reverted the changes!
Thank you very much!

ashual (Owner) commented Jan 1, 2020

@viperit it should be fine now.
Basically, I've reverted coco.py to the old version to reproduce the original val/test splits, and when using GT masks, the masks are now taken at their original resolution.

These are the commands for reproducing the results:
python scripts/sample_images.py --checkpoint models/128/checkpoint_with_model.pt --batch_size 12 --output_dir results --model_mode eval --save_layout 1 --accuracy_model_path models/resnet101_172_classes_128_0.623.pth --image_size 128,128
and
python scripts/sample_images.py --checkpoint models/128/checkpoint_with_model.pt --batch_size 12 --output_dir results --model_mode eval --save_layout 1 --accuracy_model_path models/resnet101_172_classes_128_0.623.pth --image_size 128,128 --use_gt_boxes 1 --use_gt_masks 1

The model is non-deterministic, which means the results can vary (both up and down).

In this test we used features_clustered_001.npy, which gives steadier results (less variation between runs).
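If you want to reduce the run-to-run variation further, pinning the usual RNGs before sampling can help; a minimal sketch assuming PyTorch, not part of the repository's scripts:

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    # Pin the Python, NumPy, and PyTorch (CPU + GPU) RNGs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning is itself a source of run-to-run variation.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Note that even with seeding, some CUDA ops are inherently non-deterministic, so small differences in the score can remain.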

ghost (Author) commented Jan 2, 2020

Hi!
It worked! I think this issue can be closed now.
Thank you very much. By the way, is there any update on Also cannot reproduce the result said in paper train from scratch?

ghost closed this as completed Jan 2, 2020
ashual (Owner) commented Jan 2, 2020

#10 will take some time, since each model requires one week to train plus some resources (GPUs).

ghost (Author) commented Jan 2, 2020

OK!
Thanks!
