R2D2 Feature extractor #46

Merged
merged 7 commits into from
Jul 12, 2021
Conversation

Contributor

@lxxue lxxue commented Feb 5, 2021

No description provided.

Member

@sarlinpe sarlinpe left a comment


Thanks a lot for the PR! I added a few minor comments. Please make sure that the code respects the PEP8 style guidelines, for example by running flake8.

Comment on lines 12 to 14
RGB_mean = torch.cuda.FloatTensor([0.485, 0.456, 0.406])
RGB_std = torch.cuda.FloatTensor([0.229, 0.224, 0.225])
norm_RGB = tvf.Normalize(mean=RGB_mean, std=RGB_std)
Member

  1. This creates tensors on the GPU as soon as this module is imported.
  2. hloc should not break on CPU-only machines.

I suggest:

  • create mean and std in __init__ as torch.tensor (on CPU) and register them with register_buffer. This way, they will be automatically moved to the GPU together with the model.
  • call tvf.functional.normalize in _forward (mean and std will already be on the correct device).


Comment on lines 19 to 28
'top-k': 5000,

'scale-f': 2**0.25,
'min-size': 256,
'max-size': 1024,
'min-scale': 0,
'max-scale': 1,

'reliability-thr': 0.7,
'repetability-thr': 0.7,
Member

  1. I usually use underscores instead of dashes for config entries.
  2. Could we replace top-k with max_keypoints, to be consistent with superpoint.py?
  3. scale-f is not very explicit; is it scale_factor?
  4. thr --> threshold
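For illustration, the renamed config might look like the following sketch. Only max_keypoints, scale_factor, and the spelled-out threshold come from the comments above; the rest is the original snippet with dashes swapped for underscores:

```python
# Hypothetical renamed config: underscores instead of dashes,
# max_keypoints as in superpoint.py, scale_factor spelled out,
# and thr expanded to threshold.
default_conf = {
    'max_keypoints': 5000,
    'scale_factor': 2**0.25,
    'min_size': 256,
    'max_size': 1024,
    'min_scale': 0,
    'max_scale': 1,
    'reliability_threshold': 0.7,
    # spelling kept from the original snippet
    'repetability_threshold': 0.7,
}
```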

Contributor Author

These variable names were chosen to be consistent with the original r2d2 repo, but I agree that making them consistent with hloc is more reasonable.


def _forward(self, data):
img = data['image']
img = norm_RGB(img[0])[None]
Member

tvf normalization supports batching, so indexing out a single image and re-adding the batch dimension is unnecessary.

Comment on lines 35 to 36
self.detector = NonMaxSuppression(rel_thr=conf['reliability-thr'],
rep_thr=conf['repetability-thr'])
Member

This does not seem to respect PEP8
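One PEP8-compliant layout aligns the continuation line with the opening parenthesis and keeps both lines under 79 characters. The NonMaxSuppression stub below mirrors the constructor signature of r2d2's detector only to make the snippet self-contained:

```python
class NonMaxSuppression:
    # Stub standing in for r2d2's detector, same constructor signature.
    def __init__(self, rel_thr, rep_thr):
        self.rel_thr = rel_thr
        self.rep_thr = rep_thr


conf = {'reliability-thr': 0.7, 'repetability-thr': 0.7}
detector = NonMaxSuppression(rel_thr=conf['reliability-thr'],
                             rep_thr=conf['repetability-thr'])
```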

…tore the tvf.Normalize function norm_rgb as a member function in the class. The device will be taken care of by tvf.functional.normalize
… keypoints/descriptors are the same). Successfully ran on CPU, but the results do not seem to match very well
Contributor

Matstah commented May 18, 2021

@lxxue How did you handle the descriptor dimension when using SuperGlue for matching?

Contributor Author

lxxue commented May 19, 2021

@Matstah Sorry, this does not support matching with SuperGlue, since the two networks are trained on different features with different descriptor dimensions. SuperGlue is descriptor-specific, so you would indeed need to retrain it for each new descriptor.

@sarlinpe
Member

Thanks a lot for the changes!

@sarlinpe sarlinpe merged commit 715f5de into cvg:dev Jul 12, 2021