Why do you transform confidence through affinity instead of using confidence directly? #16
Comments
Hello @Erik-Y, the variable `aff` holds the affinity values for non-local neighbors at sub-pixel offsets. Therefore we need each non-local neighbor's confidence as well, and it should also be sampled from the confidence map at that neighbor's offset location.
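To make the explanation above concrete, here is a minimal sketch of sampling a confidence map at sub-pixel offset locations with bilinear interpolation. This is a hypothetical helper written for illustration, not the repository's actual code: the function name `sample_conf_at_offsets`, the `(dy, dx)` offset convention, and the use of `F.grid_sample` (NLSPN itself uses a deformable-convolution operator) are all assumptions.

```python
import torch
import torch.nn.functional as F

def sample_conf_at_offsets(conf, offset):
    """Sample confidence at one neighbor's sub-pixel offset per pixel.

    conf:   (B, 1, H, W) confidence map
    offset: (B, 2, H, W) per-pixel (dy, dx) offset of one neighbor
    Hypothetical helper for illustration only.
    """
    b, _, h, w = conf.shape
    # Base sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(torch.arange(h, dtype=conf.dtype),
                            torch.arange(w, dtype=conf.dtype),
                            indexing='ij')
    base = torch.stack((xs, ys), dim=0).unsqueeze(0)   # (1, 2, H, W), (x, y)
    # Add the sub-pixel offsets (flip channel order (dy, dx) -> (dx, dy))
    coords = base + offset.flip(1)                     # (B, 2, H, W)
    # Normalize coordinates to [-1, 1] as grid_sample expects
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)               # (B, H, W, 2)
    # Bilinear sampling gives the confidence at the neighbor's location
    return F.grid_sample(conf, grid, mode='bilinear',
                         padding_mode='zeros', align_corners=True)
```

With a zero offset this reduces to the identity, i.e. it returns the original confidence map; non-zero offsets pull in the confidence at each neighbor's displaced, sub-pixel position.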
Hi, I am also curious about this part. Why is the offset used for confidence not the same as the offset used in the propagation process?
Hi @zzangjinsun, in my opinion, for every location (x, y) in (H, W) the original confidence and the original affinity already have corresponding values. Thus it seems more reasonable to use the original confidence in the confidence-incorporated affinity normalization. Could you give me a more specific theoretical argument or some experimental results to help me understand? Thanks a lot.
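For reference, the confidence-incorporated affinity normalization being debated can be sketched as follows. This is an illustrative sketch of the idea (weight each neighbor's affinity by its sampled confidence, then normalize so the affinities cannot amplify), not the repository's exact implementation; the function name and the clamp-based normalization are assumptions.

```python
import torch

def conf_weighted_affinity(aff, conf_nb):
    """Weight affinities by per-neighbor confidence, then normalize.

    aff:     (B, N, H, W) raw affinities for N non-local neighbors
    conf_nb: (B, N, H, W) confidence sampled at each neighbor's location
    Illustrative sketch, not the repository's exact code.
    """
    # Down-weight contributions from unreliable (low-confidence) neighbors
    aff = aff * conf_nb
    # Normalize so the absolute affinities sum to at most 1 per pixel,
    # which keeps the propagation step a stable (non-expansive) mixture
    denom = aff.abs().sum(dim=1, keepdim=True).clamp(min=1.0)
    return aff / denom
```

The point of contention in this thread is whether `conf_nb` should be the confidence sampled at each neighbor's offset location (the maintainer's answer above) or simply the original confidence at the center pixel.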
@zzangjinsun
Hi, I'm very curious why you transform confidence through affinity instead of using confidence directly in the following code.
```python
# from nlspnmodel.py, line 115
if self.args.conf_prop:
    list_conf = []
    offset_each = torch.chunk(offset, self.num + 1, dim=1)
```