
Support TTA of RetinaNet and GFL #3638

Merged 6 commits into open-mmlab:master on Sep 24, 2020

Conversation

shinya7y (Contributor)

This PR partially responds to the demand for test-time augmentation (TTA) of single-stage detectors.
#3633 #2931 (comment) #509

RetinaNet (model)

AP    AP50  AP75  APs   APm   APl
0.365 0.554 0.391 0.204 0.403 0.481  # img_scale=(1333, 800),
0.364 0.558 0.387 0.234 0.403 0.448  # img_scale=(1600, 960),
0.372 0.563 0.397 0.232 0.408 0.475  # img_scale=[(1333, 800), (1600, 960)],
0.367 0.556 0.393 0.208 0.406 0.483  # img_scale=(1333, 800), flip=True,

GFL (model)

AP    AP50  AP75  APs   APm   APl
0.402 0.584 0.433 0.233 0.440 0.522  # img_scale=(1333, 800),
0.402 0.587 0.434 0.253 0.443 0.491  # img_scale=(1600, 960),
0.412 0.594 0.446 0.253 0.452 0.516  # img_scale=[(1333, 800), (1600, 960)],
0.406 0.588 0.440 0.241 0.446 0.523  # img_scale=(1333, 800), flip=True,
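For reference, the multi-scale and flip settings above correspond to MMDetection's MultiScaleFlipAug test pipeline. The following is a minimal sketch assuming the standard v2.x RetinaNet config layout (img_norm_cfg comes from the base config); it is illustrative, not the exact config used for these numbers:

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=[(1333, 800), (1600, 960)],  # multi-scale TTA
        flip=True,                             # horizontal-flip TTA
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline))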

Disclaimer: Unless otherwise stated, my past, present, and future Contributions have nothing to do with my employer.

codecov bot commented Aug 27, 2020

Codecov Report

Merging #3638 into master will increase coverage by 0.10%.
The diff coverage is 23.80%.


@@            Coverage Diff             @@
##           master    #3638      +/-   ##
==========================================
+ Coverage   60.93%   61.04%   +0.10%     
==========================================
  Files         217      218       +1     
  Lines       15374    15394      +20     
  Branches     2628     2633       +5     
==========================================
+ Hits         9368     9397      +29     
+ Misses       5537     5525      -12     
- Partials      469      472       +3     
Flag Coverage Δ
#unittests 61.04% <23.80%> (+0.10%) ⬆️


Impacted Files Coverage Δ
mmdet/models/dense_heads/gfl_head.py 27.06% <0.00%> (-0.26%) ⬇️
mmdet/models/dense_heads/reppoints_head.py 22.34% <0.00%> (ø)
mmdet/models/detectors/reppoints_detector.py 100.00% <ø> (+75.00%) ⬆️
mmdet/models/detectors/single_stage.py 84.61% <0.00%> (-3.39%) ⬇️
mmdet/models/dense_heads/dense_test_mixins.py 15.00% <15.00%> (ø)
mmdet/models/dense_heads/anchor_head.py 84.97% <54.54%> (-1.92%) ⬇️
mmdet/models/dense_heads/anchor_free_head.py 75.78% <75.00%> (-0.22%) ⬇️
mmdet/models/roi_heads/mask_heads/maskiou_head.py 97.87% <0.00%> (+5.31%) ⬆️
mmdet/models/roi_heads/mask_scoring_roi_head.py 91.07% <0.00%> (+32.14%) ⬆️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 11b1ef8...4a6149f.

ZwwWayne requested a review from xvjiarui on August 30, 2020, 08:09
shinya7y (Contributor, Author) commented Sep 8, 2020

Hi @xvjiarui,
I would be grateful if you could review this PR.

xvjiarui (Collaborator) commented Sep 9, 2020

Hi @shinya7y
Sorry for the late reply.
I will review it by today.

Comment on lines +575 to +586
if with_nms:
    # some heads don't support with_nms argument
    proposals = self._get_bboxes_single(cls_score_list,
                                        bbox_pred_list,
                                        mlvl_anchors, img_shape,
                                        scale_factor, cfg, rescale)
else:
    proposals = self._get_bboxes_single(cls_score_list,
                                        bbox_pred_list,
                                        mlvl_anchors, img_shape,
                                        scale_factor, cfg, rescale,
                                        with_nms)
xvjiarui (Collaborator) commented Sep 9, 2020:

If we just pass with_nms into self._get_bboxes_single, will any detector raise an error?
I checked all heads that inherit AnchorHead; it looks fine except for ATSSHead.
We may support it in ATSSHead.

shinya7y (Contributor, Author) replied:

ATSSHead will not raise an error, since it has its own get_bboxes.
But RPNHead will raise an error.
The code above also avoids breaking compatibility with mmdetection forks and other PRs.
Needless to say, the code should be cleaned up after the with_nms argument becomes standard.
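For illustration only, once every head accepts the with_nms keyword, the branch above could presumably collapse into a single call (a hypothetical cleanup, not part of this PR):

proposals = self._get_bboxes_single(cls_score_list, bbox_pred_list,
                                    mlvl_anchors, img_shape,
                                    scale_factor, cfg, rescale,
                                    with_nms=with_nms)  # pass through unconditionally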

xvjiarui (Collaborator) commented Sep 9, 2020

Except for the comments, the code looks good to me.

shinya7y (Contributor, Author)

In the updated code, more detectors and dense heads now have aug_test.
For detectors and heads that remain unsupported, I added two asserts instead (see the sketch below).
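As a rough illustration (not the literal code added in this PR), such a guard inside a single-stage detector's aug_test might look like the following; the attribute name checked and the message text are assumptions:

def aug_test(self, imgs, img_metas, rescale=False):
    # Guard: test-time augmentation only works if the attached head
    # implements the TTA entry point (method name assumed here).
    assert hasattr(self.bbox_head, 'aug_test_bboxes'), \
        f'{self.bbox_head.__class__.__name__} does not support test-time augmentation'
    feats = self.extract_feats(imgs)
    return [self.bbox_head.aug_test_bboxes(feats, img_metas, rescale=rescale)]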



 @HEADS.register_module()
-class AnchorHead(BaseDenseHead):
+class AnchorHead(BaseDenseHead, BBoxTestMixin):
Reviewer (Member) commented:

RPNHead has a mixin RPNTestMixin; can these two mixins somehow be merged to avoid confusion?

shinya7y (Contributor, Author) replied:

A simple merge of dense_test_mixins.py and rpn_test_mixin.py would cause further confusion, because dense_test_mixins.py focuses on TTA and doesn't have simple_test.

This issue comes from the following inconsistencies.

  • roi_heads have simple_test
  • RPNHead and GARPNHead have simple_test_rpn
  • dense_heads don't have simple_test

I think the inconsistencies should be addressed by a separate refactoring PR, not by this feature PR.

Reviewer (Member) replied:

Agreed. A refactoring will be proposed later.

hellock (Member) commented Sep 23, 2020

Task linked: CU-4fpu0b Single Stage TTA

hellock merged commit 9c95543 into open-mmlab:master on Sep 24, 2020
yeliudev (Contributor) commented Oct 7, 2020

Hi @shinya7y! Thanks for your contribution. I've tested your PR and found that after commit 5fa2323, TTA performance dropped significantly in my case. Is there any inconsistency before and after this commit?

shinya7y (Contributor, Author) commented Oct 7, 2020

Hi @yeliudev! Could you please provide more information about "the performance of TTA"?
AP or speed? RetinaNet or GFL?

yeliudev (Contributor) commented Oct 7, 2020

> Hi @yeliudev! Could you please provide more information about "the performance of TTA"?
> AP or speed? RetinaNet or GFL?

Thanks for your reply! The 'performance' I mentioned means AP, and I'm using RetinaNet. I notice that the drop in AP might not be caused by this PR itself. I'm running more tests.
