I have a few questions regarding the quantitative comparison with state-of-the-art (SOTA) salient object ranking methods, such as ASSR and ILRSR, as presented in Table-I (page 10):
You mentioned that all compared methods are retrained to ensure a fair comparison. Are the training and test splits the same for all three methods?
You indicated that you selected the model with $\gamma = 0.2$. This $\gamma$ value should be related to the ground-truth (GT) generation. Could you please clarify how the GT labels in Table-I were generated? Were they produced using your proposed method, or copied from the original dataset?
Could you provide more details on how the F1 and SRCC values over ranking orders were calculated?
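To make the SRCC question concrete: here is a minimal sketch of how I would compute the Spearman rank correlation per image over ranking orders (this is my assumption of the protocol, not necessarily what the paper does; the function name `spearman_rcc` is mine):

```python
def spearman_rcc(pred_ranks, gt_ranks):
    """Spearman rank correlation between predicted and GT saliency ranks.

    Uses the standard closed-form formula, which assumes no tied ranks.
    """
    n = len(pred_ranks)
    if n < 2:
        return 1.0  # a single object trivially agrees with itself
    d2 = sum((p - g) ** 2 for p, g in zip(pred_ranks, gt_ranks))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Example: 4 salient objects, predicted order vs. GT order
print(spearman_rcc([1, 2, 3, 4], [1, 2, 3, 4]))  # perfect agreement -> 1.0
print(spearman_rcc([1, 2, 3, 4], [4, 3, 2, 1]))  # fully reversed -> -1.0
```

Is the per-image score computed like this and then averaged over the test set, and how are images with a single salient object or tied ranks handled?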
Thanks a lot! I look forward to your reply!
Besides, how do you determine which object proposals should be used for predicting ranking? Do you use GT labels for selection, or do you introduce an additional classification head to select salient proposals? Thank you!
Image -> Object Detector -> Boxes: how do you select the salient proposals from the list of boxes, given that the detector may output more or fewer boxes than the actual GT salient objects?
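To frame what I mean by "selection": one common way to align detector outputs with GT salient objects is greedy IoU matching. A minimal sketch of that baseline (my assumption for illustration, not a claim about your implementation; `match_proposals` and the 0.5 threshold are hypothetical):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_proposals(pred_boxes, gt_boxes, thresh=0.5):
    """Greedily match each GT box to the highest-IoU unused proposal."""
    matches, used = {}, set()
    for gi, g in enumerate(gt_boxes):
        best, best_iou = None, thresh
        for pi, p in enumerate(pred_boxes):
            if pi in used:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = pi, v
        if best is not None:
            matches[gi] = best
            used.add(best)
    return matches  # {gt_index: proposal_index}
```

Do you do something like this with GT boxes at evaluation time, or does the model itself decide which proposals are salient (e.g. via a classification head)?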