Reproducing evaluation results #7
Hello, I wonder if you know how to solve this problem.
I didn't fully reproduce the results yet, since my time is currently more important to me than the accuracy of the model, but I did train the model for 10 more epochs starting from the checkpoint file (by setting resume_checkpoint in the config to True), and that did give me 67.55 % for the beta_8 model.
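As a rough sketch, resuming training from a downloaded checkpoint as described in the comment above might look like the following. Only the config key `resume_checkpoint` comes from the comment; the paths, epoch numbers, and loader function here are hypothetical illustrations, not the repo's actual API:

```python
# Hypothetical sketch of resuming training from a checkpoint.
# The key `resume_checkpoint` is taken from the comment above;
# everything else (path, epoch count, loop) is illustrative only.

config = {
    "resume_checkpoint": True,                   # continue from a saved state
    "checkpoint_path": "checkpoints/beta_8.pt",  # hypothetical path
    "extra_epochs": 10,                          # 10 more epochs, as in the comment
}

def resume_training(config):
    start_epoch = 0
    if config["resume_checkpoint"]:
        # In a real run this would load model/optimizer state from disk,
        # e.g. via torch.load(config["checkpoint_path"]); here we just
        # pretend the checkpoint recorded that 120 epochs were completed.
        start_epoch = 120  # hypothetical epoch stored in the checkpoint
    # Continue for the requested number of additional epochs.
    return list(range(start_epoch, start_epoch + config["extra_epochs"]))

epochs = resume_training(config)
```

With `resume_checkpoint` set to False, training would instead start from epoch 0, which is the difference the commenter is exploiting.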
Thank you for your reply. Best wishes.
[Quoted from the previous reply:] So probably, the checkpoints that they shared are not the fully trained checkpoints from the paper.
Hi, how are you evaluating the model?
The way it's described in the README.
OK, that is strange. I'll clone the repo and check.
Thank you! Please check it. I am looking forward to using your INN in our new work.
Today I also got the same result for beta_1: 25.30 % where it should be 67.30 %. Could you please look into it when you have time?
I know this is an old issue and it's possible the repo is no longer being maintained, but I am getting the same low results for the downloaded checkpoints (36.948 % for beta_32). Is there any update on the discrepancy?
I am also getting the same issue, with the exact same accuracies reported above. Conda (23.1.0) environment:
The command I run to evaluate the model:
Hi! I'm having trouble reproducing your evaluation results. I've run your eval bash script for all models with the checkpoint files downloaded from the link in the README, but the results are very different from the ones reported in your paper.
These are the accuracies I get:
beta_1:    25.30 %  (expected 67.30 %)
beta_2:     0.56 %  (expected 71.73 %)
beta_4:     0.21 %  (expected 73.69 %)
beta_8:     0.12 %  (expected 74.59 %)
beta_16:    0.76 %  (expected 75.54 %)
beta_32:   36.81 %  (expected 76.18 %)
beta_inf:  74.40 %  (expected 76.27 %)
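To quantify the mismatch, the gaps between the obtained and expected accuracies can be tabulated with a short script. The numbers are copied from the list above; the script itself is just an illustration of how far off each checkpoint is:

```python
# Obtained vs. expected top-1 accuracies (%) copied from the issue above.
results = {
    "beta_1":   (25.30, 67.30),
    "beta_2":   (0.56,  71.73),
    "beta_4":   (0.21,  73.69),
    "beta_8":   (0.12,  74.59),
    "beta_16":  (0.76,  75.54),
    "beta_32":  (36.81, 76.18),
    "beta_inf": (74.40, 76.27),
}

# Gap in percentage points for each model: expected minus obtained.
gaps = {name: round(expected - got, 2) for name, (got, expected) in results.items()}

for name, gap in gaps.items():
    print(f"{name}: {gap} pp below expected")
```

Note the pattern: the mid-range beta models are essentially at chance level, while beta_1, beta_32, and especially beta_inf are much closer to the published numbers, which is consistent with the checkpoints being partially trained snapshots rather than the final models.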
Do you have any idea why this might be? I would love to use your INN, but of course I need better accuracy than what I am getting now.