In my experiment, I evaluated the same model on both the HM3D-val v2 and HM3D-val v1 datasets. The results on v2 were significantly worse than on v1. On closer examination, I found a pattern that may point to the cause: on v2, when the model predicts stop, the distance to the goal consistently falls between 0.1 m and 0.2 m. Could this discrepancy come from differences in dataset characteristics, annotation protocols, or inherent model biases when the model is applied to the updated dataset version? I would appreciate your help in investigating this issue.
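To make the "0.1 m to 0.2 m at stop" pattern concrete, here is a minimal diagnostic sketch for bucketing stop-time distances against a success radius. The function name and the 0.2 m radius are my own assumptions for illustration, not taken from the evaluation code; if v2 uses a tighter success radius than v1, episodes would pile up in the narrow-fail bucket like this:

```python
def bucket_stop_distances(distances, success_radius=0.2):
    """Classify episodes by distance-to-goal (in meters) at the stop action:
    within the success radius, a narrow miss (within 0.1 m past the radius),
    or a clear failure. The 0.2 m default radius is an assumption."""
    counts = {"success": 0, "narrow_fail": 0, "fail": 0}
    for d in distances:
        if d <= success_radius:
            counts["success"] += 1
        elif d <= success_radius + 0.1:
            counts["narrow_fail"] += 1
        else:
            counts["fail"] += 1
    return counts

# Example: the same stop distances scored under two candidate radii.
stops = [0.05, 0.12, 0.18, 0.25, 0.6]
print(bucket_stop_distances(stops, success_radius=0.2))
print(bucket_stop_distances(stops, success_radius=0.1))
```

Running both radii over the logged v2 stop distances would show directly whether the v1-to-v2 drop is explained by episodes landing just outside a tightened threshold.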