R2D2 Feature extractor #46
Conversation
Thanks a lot for the PR! I added a few minor comments. Please make sure that the code respects the PEP8 style guidelines, for example by running flake8.
hloc/extractors/r2d2.py
Outdated
RGB_mean = torch.cuda.FloatTensor([0.485, 0.456, 0.406])
RGB_std = torch.cuda.FloatTensor([0.229, 0.224, 0.225])
norm_RGB = tvf.Normalize(mean=RGB_mean, std=RGB_std)
- This creates tensors on the GPU even if this module is merely imported.
- hloc should not break on CPU-only machines.

I suggest to:
- create `mean` and `std` in `__init__` as `torch.tensor` (CPU) and register them with `register_buffer`; this way, they will be automatically moved to the GPU with the model (see the sketch below)
- call `tvf.functional.normalize` in `_forward` (`mean` and `std` will already be on the correct device)
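A minimal sketch of what this suggestion could look like. The class skeleton below is hypothetical (hloc's actual `BaseModel` interface is not shown in this thread); only the buffer registration and the normalize call follow the suggestion:

```python
import torch
import torchvision.transforms as tvf


class R2D2(torch.nn.Module):  # hypothetical skeleton, not hloc's actual BaseModel
    def __init__(self):
        super().__init__()
        # CPU tensors registered as buffers: model.to(device) moves them
        # together with the weights, so no hard-coded .cuda() is needed
        self.register_buffer('mean', torch.tensor([0.485, 0.456, 0.406]))
        self.register_buffer('std', torch.tensor([0.229, 0.224, 0.225]))

    def _forward(self, data):
        image = data['image']
        # mean/std are already on the same device as the model input;
        # batched (B, C, H, W) input assumes a torchvision version
        # whose normalize supports batching
        image = tvf.functional.normalize(image, self.mean, self.std)
        return image
```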
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I pass the mean and std as lists, as required by this function (https://pytorch.org/docs/stable/_modules/torchvision/transforms/functional.html#normalize), and let the function itself move the tensors to the suitable device. Is that ok?
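For reference, a sketch of that approach (values copied from the snippet above); the linked function converts the lists with `torch.as_tensor(..., device=tensor.device)`, so the device is handled internally:

```python
import torchvision.transforms as tvf

# plain Python lists: the normalization function moves them to the
# input tensor's device, so this works on both CPU and GPU
norm_rgb = tvf.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
```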
hloc/extractors/r2d2.py
Outdated
'top-k': 5000,
'scale-f': 2**0.25,
'min-size': 256,
'max-size': 1024,
'min-scale': 0,
'max-scale': 1,
'reliability-thr': 0.7,
'repetability-thr': 0.7,
- I usually use underscores instead of dashes for config entries
- Could we replace `top-k` with `max_keypoints` to be consistent with `superpoint.py`?
- `scale-f` is not very explicit, is it `scale_factor`?
- `thr` --> `threshold` (see the sketch below)
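What the renamed config could look like (a sketch; values copied from the diff above, with 'repetability' read as a typo for 'repeatability'):

```python
default_conf = {
    'max_keypoints': 5000,           # was 'top-k'
    'scale_factor': 2**0.25,         # was 'scale-f'
    'min_size': 256,
    'max_size': 1024,
    'min_scale': 0,
    'max_scale': 1,
    'reliability_threshold': 0.7,    # was 'reliability-thr'
    'repeatability_threshold': 0.7,  # was 'repetability-thr'
}
```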
These variable names were chosen to be consistent with the original r2d2 repo, but I believe your suggestion of making them consistent with hloc is more reasonable.
hloc/extractors/r2d2.py
Outdated
def _forward(self, data):
    img = data['image']
    img = norm_RGB(img[0])[None]
tvf normalization supports batching
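In other words, the per-image indexing is likely unnecessary; a sketch assuming `img` is a batched (B, C, H, W) tensor and a torchvision version with batched normalize:

```python
# instead of norm_RGB(img[0])[None]: Normalize broadcasts mean/std
# over the batch dimension, so the whole batch can be passed directly
img = norm_RGB(img)
```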
hloc/extractors/r2d2.py
Outdated
self.detector = NonMaxSuppression(rel_thr=conf['reliability-thr'],
    rep_thr=conf['repetability-thr'])
This does not seem to respect PEP8
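One way to satisfy flake8 here, assuming the complaint is about continuation-line indentation (the exact flake8 output is not shown in the thread):

```python
self.detector = NonMaxSuppression(
    rel_thr=conf['reliability-thr'],
    rep_thr=conf['repetability-thr'])
```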
…tore the tvf.Normalize function norm_rgb as a member function in the class. The device will be taken care of by tvf.functional.normalize
… keypoints/descriptors are the same). Successfully ran on CPU, but the results do not seem to match very well
@lxxue How did you handle the descriptor dimension when using SuperGlue for matching?
@Matstah Sorry, it doesn't support matching with SuperGlue, as they are trained on features of different dimensions. SuperGlue is descriptor-specific, so you indeed need to retrain it for each new descriptor.
Thanks a lot for the changes!