Model Reimplementation Accuracy #1

Open
yjjian5 opened this issue Dec 5, 2024 · 0 comments

yjjian5 commented Dec 5, 2024

Notice
Thank you for your great work! However, when reproducing it, the model accuracy consistently falls short of the performance reported in the paper.

The highest performance I can reproduce under each sampler setting is:

| Sampler | bbox_AP | AP50 | AP75 | APs | APm | APl | APr | APc | APf |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random | 0.190 | 0.294 | 0.201 | 0.124 | 0.254 | 0.292 | 0.117 | 0.187 | 0.225 |
| RFS | 0.144 | 0.227 | 0.160 | 0.103 | 0.181 | 0.193 | 0.097 | 0.137 | 0.172 |

The experimental setup is as follows: 4 NVIDIA 2080Ti GPUs with samples_per_gpu=2 (total batch size 8), trained with FP16. All other settings are identical to those in the original experiment.
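For reference, here is a minimal sketch of the config fields I changed, assuming an MMDetection-style Python config (the dataset type and worker count are placeholders, not the authors' actual values; RFS is expressed via MMDetection's `ClassBalancedDataset` wrapper):

```python
# Sketch of the modified config fields (MMDetection-style; dataset details are placeholders).
data = dict(
    samples_per_gpu=2,   # per-GPU batch size; with 4 GPUs this gives a total batch size of 8
    workers_per_gpu=2,   # assumed value, not from the original config
    train=dict(
        type='ClassBalancedDataset',  # MMDetection's repeat-factor (RFS) dataset wrapper
        oversample_thr=1e-3,          # repeat-factor threshold commonly used for LVIS
        dataset=dict(type='LVISV1Dataset'),  # placeholder for the actual training set
    ),
)

fp16 = dict(loss_scale=512.0)  # enable mixed-precision training
```

For the Random-sampler runs, the `ClassBalancedDataset` wrapper is removed and `train` points at the dataset directly.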

Could you please offer any insight into the reasons for this discrepancy? Could the authors also provide a pre-trained model link for testing?

Thank you very much.
