Notice
Thanks for your great work! However, when reproducing it, the model accuracy consistently falls short of the performance reported in the paper.
The highest reproducible performance with each sampler is:

| Sampler | bbox_AP | AP50 | AP75 | APs | APm | APl | APr | APc | APf |
| ------- | ------- | ---- | ---- | --- | --- | --- | --- | --- | --- |
| Random  | 0.190 | 0.294 | 0.201 | 0.124 | 0.254 | 0.292 | 0.117 | 0.187 | 0.225 |
| RFS     | 0.144 | 0.227 | 0.160 | 0.103 | 0.181 | 0.193 | 0.097 | 0.137 | 0.172 |
The experimental setup is as follows: 4 NVIDIA 2080 Ti GPUs with samples_per_gpu=2 (total batch size 8), trained with FP16. All other experimental settings match those of the original experiment.
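For reference, here is a minimal sketch of the config overrides used, assuming the repository follows standard MMDetection 2.x conventions; the `_base_` path is a placeholder for the repo's own LVIS config, and everything not shown was left at the repo defaults:

```python
# Config fragment (MMDetection 2.x style, assumed).
# Placeholder: point _base_ at the repository's own LVIS config file.
_base_ = ['./the_repo_lvis_config.py']

# 4 x 2080 Ti, 2 images per GPU -> total batch size 8.
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    # For the RFS runs, the train set was wrapped in ClassBalancedDataset
    # (MMDetection's repeat-factor sampling); the Random runs used the
    # unwrapped dataset from the base config, e.g.:
    # train=dict(
    #     type='ClassBalancedDataset',
    #     oversample_thr=1e-3,
    #     dataset={{_base_.data.train}}),
)

# Mixed-precision training.
fp16 = dict(loss_scale=512.)
```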
Could you please provide any insight into the reasons for this discrepancy, and could the authors share a link to a pre-trained model for testing?
Thank you very much.