
Some errors in your shadow detection results on the test datasets #15

Open
Jingwei-Liao opened this issue Oct 20, 2020 · 4 comments

@Jingwei-Liao

Hi, we think you may have uploaded the wrong results for SBU, because we found that the SBU result is identical to the SBU_crf result.

@Jingwei-Liao
Author

Also, could you upload your pretrained shadow detection model? We need it to compare your method against ours on our own dataset.

@eraserNut
Owner

Q1: We generated the outputs again and got similar results. The similarity between SBU and SBU_crf is likely caused by the binarization operation (prediction = (prediction>90)*255) that we apply before the CRF on the SBU dataset. At that time, we observed that many pixels were positive yet still below 127.5, so we compensated with a biased binarization threshold. However, after the submission we found that a weighted BCE loss can balance this problem instead.
Q2: Thanks for your advice. We will upload our pretrained model soon.
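For anyone reproducing this, here is a minimal sketch of the two operations described above. The (prediction>90)*255 threshold is quoted from the comment; everything else (the function names, the weighting scheme, PyTorch as the framework) is an illustrative assumption, not the repository's exact code:

```python
import numpy as np
import torch
import torch.nn.functional as F

def binarize_with_bias(prediction, threshold=90):
    # Biased binarization applied on SBU before the CRF step.
    # `prediction` is assumed to be a uint8 map in [0, 255]; pixels above
    # `threshold` (deliberately below the 127.5 midpoint) are set to 255.
    return (prediction > threshold).astype(np.uint8) * 255

def weighted_bce(logits, target):
    # Weighted BCE sketch: up-weight the (rarer) shadow pixels so the
    # network is not biased toward low scores. The weighting scheme here
    # (negative/positive pixel ratio per batch) is an assumption.
    pos = target.sum()
    neg = target.numel() - pos
    pos_weight = neg / pos.clamp(min=1.0)
    return F.binary_cross_entropy_with_logits(logits, target, pos_weight=pos_weight)
```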

@guanhuankang

Hello, I wonder whether you select different thresholds for different datasets. For example, do you use (prediction>90) on SBU and (prediction>x) on ISTD with x != 90? Thanks!

@eraserNut
Owner

Actually, we only apply this binarization on SBU. For UCF and ISTD, we save the soft output before the CRF.
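So the per-dataset post-processing branches roughly as follows. This is a hedged sketch of what the comment describes, reusing binarize_with_bias from the sketch above, with `run_crf` standing in for whatever dense-CRF refinement the code actually uses:

```python
def postprocess(prediction, dataset):
    # Per this thread: only SBU gets the biased hard threshold before
    # the CRF; for UCF and ISTD the saved map is the soft output.
    if dataset == "SBU":
        hard = binarize_with_bias(prediction, threshold=90)
        return run_crf(hard)  # run_crf is a placeholder name
    return prediction  # soft output saved as-is
```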
