Add data-aware anchor generator #1251
Conversation
Force-pushed 435cc1c to 7706d68.
Force-pushed 9336706 to 9db1680.
Looks good to me. I left some minor comments.
```python
scale_samples, ratio_samples = [], []
for target in targets[gt_max_overlaps > 0.1]:
    h = target[3] - target[1]
    w = target[2] - target[0]
    r = h / w
    # Side lengths of a box with area base_size**2 and aspect ratio r.
    affine_h = base_size * torch.sqrt(r)
    affine_w = base_size / torch.sqrt(r)
    s = max(h / affine_h, w / affine_w)
    scale_samples.append(s)
    ratio_samples.append(r)
```
Suggested change:

```python
pos = targets[gt_max_overlaps > 0.1]
ratio_samples = [h / w for h, w in zip(pos[:, 3] - pos[:, 1], pos[:, 2] - pos[:, 0])]
scale_samples = [max(h / (base_size * torch.sqrt(r)), w / (base_size / torch.sqrt(r)))
                 for h, w, r in zip(pos[:, 3] - pos[:, 1], pos[:, 2] - pos[:, 0], ratio_samples)]
```
I think there is some repeated computation here.
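For illustration, here is a fully vectorized sketch that avoids both the Python-level loop and the repeated `torch.sqrt(r)`. The helper name `sample_scales_ratios` is hypothetical, and `targets`, `gt_max_overlaps`, and `base_size` are assumed to have the shapes used in the diff above:

```python
import torch

def sample_scales_ratios(targets, gt_max_overlaps, base_size):
    # Hypothetical helper, not part of this PR: vectorized version of the
    # sampling loop above, keeping the same 0.1 overlap threshold.
    pos = targets[gt_max_overlaps > 0.1]
    h = pos[:, 3] - pos[:, 1]            # box heights
    w = pos[:, 2] - pos[:, 0]            # box widths
    r = h / w                            # aspect ratios
    sqrt_r = torch.sqrt(r)               # computed once, reused twice
    # Element-wise max of the two scale candidates per box.
    s = torch.maximum(h / (base_size * sqrt_r), w / (base_size / sqrt_r))
    return s, r
```

On tensors, `torch.maximum` takes the role of the built-in `max`, which only works one element at a time inside a Python loop.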
```python
scales = torch.Tensor(scales).to(self.device).detach().requires_grad_(True)
ratios = torch.Tensor(ratios).to(self.device).detach().requires_grad_(True)
```
Suggested change:

```python
scales = scales.detach().requires_grad_(True)
ratios = ratios.detach().requires_grad_(True)
```
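For context, a minimal sketch (not code from this PR) of what this pattern does: `detach()` returns a tensor cut off from the existing autograd graph, and `requires_grad_(True)` then turns that copy into a trainable leaf, so gradients stop at the new tensor instead of flowing back to whatever produced it:

```python
import torch

base = torch.tensor([1.0, 2.0], requires_grad=True)
x = base * 3                          # non-leaf; grads would flow to `base`
p = x.detach().requires_grad_(True)   # new leaf with no link back to `base`

loss = (p ** 2).sum()
loss.backward()
print(p.grad)     # tensor([ 6., 12.]) -- gradient accumulates on the leaf
print(base.grad)  # None -- detach() severed the connection
```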
I'm sure there's a way to pre-check for errors that isn't try/except, but I can't think of one. If there's a better structure, I'd like to use it instead of try/except.
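Without knowing which exception is being caught here, one generic alternative is to validate inputs up front ("look before you leap") rather than catching the failure. A hypothetical sketch, assuming the inputs are (N, 4) boxes:

```python
import torch

def check_boxes(targets: torch.Tensor) -> None:
    # Hypothetical pre-check: fail early with a clear message instead of
    # letting a downstream op raise inside a try/except.
    if targets.ndim != 2 or targets.size(1) < 4:
        raise ValueError(f"expected (N, 4) boxes, got shape {tuple(targets.shape)}")
    w = targets[:, 2] - targets[:, 0]
    h = targets[:, 3] - targets[:, 1]
    if (w <= 0).any() or (h <= 0).any():
        raise ValueError("found degenerate boxes with non-positive width/height")
```

Whether this beats try/except depends on whether every failure mode can actually be enumerated up front; if not, catching a narrow exception type is usually the more Pythonic choice.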
Codecov Report

```diff
@@            Coverage Diff             @@
##           develop    #1251     +/-   ##
===========================================
+ Coverage    80.53%   80.57%   +0.04%
===========================================
  Files          270      271       +1
  Lines        30248    30441     +193
  Branches      5907     5930      +23
===========================================
+ Hits         24360    24528     +168
- Misses        4507     4525      +18
- Partials      1381     1388       +7
```
Summary
How to test
Checklist
License
Feel free to contact the maintainers if that's a concern.