Reproduction of the success rate on 'valid_seen' dataset #16

Open
jobin2725 opened this issue Sep 22, 2023 · 0 comments
Hi, thanks for your outstanding research!

I was going through the code you uploaded but struggled to reproduce the reported success rate on the 'valid_seen' dataset.
The paper reports a 47.8% success rate, but I only obtain about 24.77%:
INFO:root:266m 48s (- -63m 31s) (1311 130%) reward -0.0891, SR 0.2477, pws 0.1536

I believe I have correctly downloaded the ET checkpoints for the obj_predictor (maskrcnn_model.pth), the questioner (questioner_anytime_finetuned.pt), and the performer (latest.pth). When generating the lmdb dataset for the valid_seen split only, no errors were observed. (I have not yet fine-tuned the performer or the questioner.)

Could you suggest how I might obtain the reported success rate?
