1. pscores = (feat1[None,:,:] * feat2[b2, :, xy2p[1], xy2p[0]]).sum(dim=-1).t()
I can't understand the line above, in sampler.py, which computes the pscores.
2. Is the mask obtained from the dataloader actually used? I found that the mask used by the loss comes from sampler.py.
I tried to pop 'mask' from the inputs of the net, and it still works.
Could someone explain this to me?
Thanks a lot!
Well, it is quite complicated for sure. This is essentially computing the dot-product between features from the first image and the second image, given the pixel alignment in xy2p. You first need to get familiar with advanced slicing techniques, see this link for instance.
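To make the indexing concrete, here is a minimal, self-contained sketch of the same gather-then-dot-product pattern, checked against an explicit loop. The toy sizes and the random b2/xy2p values are mine, purely for illustration, and do not reproduce the repository's actual data:

```python
import torch

# Toy sizes, for illustration only
B, D, H, W = 2, 4, 8, 8      # batch, descriptor dim, feature-map height/width
N, P = 5, 3                  # N query pixels, P candidate positions per query

feat1 = torch.randn(N, D)          # one descriptor per sampled query pixel
feat2 = torch.randn(B, D, H, W)    # dense descriptor map of the second image

b2   = torch.randint(0, B, (N,))                  # batch index of each query
xy2p = torch.stack([torch.randint(0, W, (P, N)),  # x coordinates
                    torch.randint(0, H, (P, N))]) # y coordinates -> shape (2, P, N)

# Advanced indexing: for every (candidate p, query n) pair, gather the D-dim
# descriptor at pixel (x, y) of image b2[n].  Result shape: (P, N, D).
gathered = feat2[b2, :, xy2p[1], xy2p[0]]

# Broadcast feat1 to (1, N, D), multiply element-wise, sum over D:
# pscores[n, p] = <feat1[n], descriptor at the p-th candidate position of query n>
pscores = (feat1[None, :, :] * gathered).sum(dim=-1).t()   # shape (N, P)

# Same computation with an explicit loop, to check the indexing
ref = torch.empty(N, P)
for n in range(N):
    for p in range(P):
        x, y = xy2p[0, p, n], xy2p[1, p, n]
        ref[n, p] = feat1[n] @ feat2[b2[n], :, y, x]
assert torch.allclose(pscores, ref)
```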
About the mask. What actually happens is that invalid pixels (areas where the optical flow is undefined) are set to nan according to the mask:
aflow[~mask.view(bool)] = np.nan # mask bad pixels!
So the mask is superfluous after that, since the validity is directly encoded into the optical flow matrix. Note that if there is no mask, the dataloader will assume that all pixels are valid (see this line).
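Concretely, downstream code can recover the validity directly from the flow. A minimal sketch, assuming aflow is an (H, W, 2) float array and mask a uint8 array of the same spatial size (names and shapes chosen for illustration, not necessarily the repository's exact layout):

```python
import numpy as np

# Illustrative shapes: aflow maps each pixel of image 1 to an (x, y) position in image 2
H, W = 4, 4
aflow = np.random.rand(H, W, 2).astype(np.float32)
mask  = np.random.randint(0, 2, (H, W), dtype=np.uint8)   # 1 = valid, 0 = invalid

aflow[~mask.view(bool)] = np.nan   # mask bad pixels, as the dataloader does

# Later code no longer needs the mask: valid pixels are exactly the finite ones
valid = np.isfinite(aflow).all(axis=-1)
assert np.array_equal(valid, mask.view(bool))
```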