
Config about dense_head(One_Stage) + RoI_head(Two_Stage) #5054

Closed
zhongqiu1245 opened this issue Apr 24, 2021 · 11 comments


zhongqiu1245 commented Apr 24, 2021

Hi dear authors:
Sorry to bother you.
If we adopt a one-stage detector's way of assigning positive/negative samples and its sampling method, and add a two-stage detector's RoI head, will we get a more reasonable structure and a stronger two-stage result? (In other words, the positive/negative sample assignment and sampling of a one-stage dense_head would replace those of the two-stage RPN.)
I mean this:
dense_head(One_Stage) + RoI_head(Two_Stage)
After all, it could combine the characteristics or mAP advantages of one-stage and two-stage detectors, e.g., AutoAssign + Cascade R-CNN, or FCOS + Cascade R-CNN (the original FCOS paper demonstrates the feasibility of FCOS as an RPN).
Can you kindly post a config file for this model?
Finally, please forgive me for my bad English.
Thank you!

@shinya7y (Contributor)

Considering the CenterNet2 paper (https://arxiv.org/abs/2103.07461), this feature would be useful.
I sent PR #5061, which gives most dense heads the methods needed to act as an RPN (simple_test_rpn, aug_test_rpn).
It is a step toward supporting this feature.


zhongqiu1245 commented Apr 26, 2021

Hi @shinya7y
Thank you for your PR, your work is really amazing.
Could you provide some configs to demonstrate how to use it (e.g., AutoAssign + Cascade R-CNN, FCOS + Cascade R-CNN, RetinaNet + Mask R-CNN)?
It would be great if they could be provided.
Thank you!

@shinya7y (Contributor)

Since more code updates are needed to support the feature, just changing configs is not enough.
If it were supported, a config would look like the following.

gfl_faster_rcnn_r50_fpn_1x_coco.py (example for GFL + Faster R-CNN)

_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',  # base two-stage config
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]

model = dict(
    rpn_head=dict(  # copy from bbox_head of single-stage config
        _delete_=True,  # ignore settings of RPNHead in base config
        type='GFLHead',  # TODO update for RPN
        num_classes=1,  # foreground class only
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            ratios=[1.0],
            octave_base_scale=8,
            scales_per_octave=1,
            strides=[8, 16, 32, 64, 128]),
        loss_cls=dict(
            type='QualityFocalLoss',
            use_sigmoid=True,
            beta=2.0,
            loss_weight=1.0),
        loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
        reg_max=16,
        loss_bbox=dict(type='GIoULoss', loss_weight=2.0)),
    train_cfg=dict(
        # GFL uses ATSSAssigner
        rpn=dict(assigner=dict(_delete_=True, type='ATSSAssigner', topk=9)),
        rpn_proposal=dict(score_thr=-1.0)),
    test_cfg=dict(rpn=dict(score_thr=-1.0)))

# disable NumClassCheckHook
custom_hooks = None
# TODO update
# https://github.com/open-mmlab/mmdetection/blob/808472f2574dcb52a4b5ec819a2c8dfab1af4356/mmdet/datasets/utils.py#L139-L140

For now, we would need to implement a new head, GFLRPNHead, referring to GFLHead and RPNHead.
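The core RPN-side behavior such a head would need is class-agnostic scoring (`num_classes=1`) followed by NMS over the proposals before they reach the RoI head. As a rough illustration only (plain Python, not the actual MMDetection API; `iou` and `rpn_proposals` are hypothetical helper names):

```python
# Illustrative sketch of class-agnostic proposal filtering, the behavior a
# hypothetical GFLRPNHead would delegate to mmdet's NMS utilities.
# Boxes are (x1, y1, x2, y2) tuples; scores are class-agnostic objectness.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def rpn_proposals(boxes, scores, nms_thr=0.7, max_num=1000):
    """Greedy NMS over class-agnostic scores; keep at most max_num boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= nms_thr for j in keep):
            keep.append(i)
        if len(keep) >= max_num:
            break
    return [boxes[i] for i in keep]
```

In the real head, the per-class GFL scores would be reduced to a single objectness score per anchor, and the NMS threshold would come from `test_cfg.rpn` as in the config above.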

@zhongqiu1245 (Author)

Thank you!


zhaoxin111 commented Jun 24, 2021

Has anyone reproduced the strategy mentioned in CenterNet2? I used MMDetection to implement RetinaNet + Faster R-CNN; the mAP is close to the original Faster R-CNN.


LMerCy commented Aug 6, 2021

@shinya7y Actually my AP is very low with such a config; how about your results? (I mapped all classes to 0 for simplicity.)


LMerCy commented Aug 6, 2021

> @shinya7y Actually my AP is very low with such a config; how about your results? (I mapped all classes to 0 for simplicity.)

@zhaoxin111 Is your config the same as this? [I changed the rcnn pos thr to 0.6 and the rpn nms thr to 0.7.]

@zhaoxin111 (Contributor)

> @shinya7y Actually my AP is very low with such a config; how about your results? (I mapped all classes to 0 for simplicity.)
>
> @zhaoxin111 Is your config the same as this? [I changed the rcnn pos thr to 0.6 and the rpn nms thr to 0.7.]

The mAP of Faster R-CNN + RetinaNet is close to the paper, but CenterNet* in my experiment is 6 points lower than the original code.


LMerCy commented Aug 9, 2021

> @shinya7y Actually my AP is very low with such a config; how about your results? (I mapped all classes to 0 for simplicity.)
>
> @zhaoxin111 Is your config the same as this? [I changed the rcnn pos thr to 0.6 and the rpn nms thr to 0.7.]
>
> The mAP of Faster R-CNN + RetinaNet is close to the paper, but CenterNet* in my experiment is 6 points lower than the original code.

Did you use "RandomSampler" in the RetinaNet train_cfg?

@zhaoxin111 (Contributor)

> @shinya7y Actually my AP is very low with such a config; how about your results? (I mapped all classes to 0 for simplicity.)
>
> @zhaoxin111 Is your config the same as this? [I changed the rcnn pos thr to 0.6 and the rpn nms thr to 0.7.]
>
> The mAP of Faster R-CNN + RetinaNet is close to the paper, but CenterNet* in my experiment is 6 points lower than the original code.
>
> Did you use "RandomSampler" in the RetinaNet train_cfg?

No, because RetinaNet does not need a sampler.
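For context, the stock RetinaNet `train_cfg` in MMDetection 2.x (reproduced here roughly from memory; values may differ slightly between versions) has no `sampler` entry at all. Focal loss down-weights easy negatives, so every anchor contributes to the classification loss and no subsampling is needed:

```python
# Approximate stock RetinaNet train_cfg from MMDetection 2.x configs.
# Note there is no `sampler` key, unlike the RPN section of two-stage configs.
train_cfg = dict(
    assigner=dict(
        type='MaxIoUAssigner',  # IoU-based anchor assignment
        pos_iou_thr=0.5,        # anchors with IoU >= 0.5 are positive
        neg_iou_thr=0.4,        # anchors with IoU < 0.4 are negative
        min_pos_iou=0,
        ignore_iof_thr=-1),
    allowed_border=-1,  # keep anchors that extend past the image border
    pos_weight=-1,      # use the default weight for positive samples
    debug=False)
```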


LMerCy commented Aug 12, 2021

@zhaoxin111 Thanks! I have reproduced the results; I made some mistakes in my original config.
