
# RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks

## Abstract

Temporal/spatial receptive fields of models play an important role in sequential/spatial tasks. Large receptive fields facilitate long-term relations, while small receptive fields help capture local details. Existing methods construct models with hand-designed receptive fields in their layers. Can we effectively search for receptive field combinations to replace hand-designed patterns? To answer this question, we propose to find better receptive field combinations through a global-to-local search scheme. Our search scheme exploits both a global search to find coarse combinations and a local search to further refine the receptive field combinations. The global search finds possible coarse combinations beyond human-designed patterns. On top of the global search, we propose an expectation-guided iterative local search scheme to refine combinations effectively. Our RF-Next models, which plug receptive field search into various models, boost performance on many tasks, e.g., temporal action segmentation, object detection, instance segmentation, and speech synthesis. The source code is publicly available at http://mmcheng.net/rfnext.

## Results and Models

### ConvNeXt on COCO

| Backbone | Method | RFNext | Lr Schd | box mAP | mask mAP | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ConvNeXt-T | Cascade Mask R-CNN | NO | 3x | 50.3 | 43.6 | config | model \| log |
| RF-ConvNeXt-T | Cascade Mask R-CNN | Single-Branch | 3x | 50.6 | 44.0 | search retrain | model \| log |
| RF-ConvNeXt-T | Cascade Mask R-CNN | Multiple-Branch | 3x | 50.9 | 44.3 | search retrain | model \| log |

### PVTv2 on COCO

| Backbone | Method | RFNext | Lr Schd | box mAP | mask mAP | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| PVTv2-b0 | Mask R-CNN | NO | 1x | 38.2 | 36.2 | - | - |
| RF-PVTv2-b0 | Mask R-CNN | Single-Branch | 1x | 38.9 | 36.8 | search retrain | model \| log |
| RF-PVTv2-b0 | Mask R-CNN | Multiple-Branch | 1x | 39.3 | 37.1 | search retrain | model \| log |

The results of PVTv2-b0 are from PVT.

### Res2Net on COCO

| Backbone | Method | RFNext | Lr Schd | box mAP | mask mAP | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Res2Net-101 | Cascade Mask R-CNN | NO | 20e | 46.4 | 40.0 | config | model \| log |
| RF-Res2Net-101 | Cascade Mask R-CNN | Single-Branch | 20e | 46.9 | 40.7 | search retrain | model \| log |
| RF-Res2Net-101 | Cascade Mask R-CNN | Multiple-Branch | 20e | 47.9 | 41.5 | search retrain | model \| log |

### HRNet on COCO

| Backbone | Method | RFNext | Lr Schd | box mAP | mask mAP | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| HRNetV2p-W18 | Cascade Mask R-CNN | NO | 20e | 41.6 | 36.4 | config | model \| log |
| RF-HRNetV2p-W18 | Cascade Mask R-CNN | Single-Branch | 20e | 43.0 | 37.6 | search retrain | model \| log |
| RF-HRNetV2p-W18 | Cascade Mask R-CNN | Multiple-Branch | 20e | 43.7 | 38.2 | search retrain | model \| log |

Note: the performance of the multi-branch models listed above is evaluated during searching to save computational cost; retraining would achieve similar or better performance.

### Res2Net on COCO panoptic

| Backbone | Method | RFNext | Lr Schd | PQ | SQ | RQ | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Res2Net-50 | Panoptic FPN | NO | 1x | 42.5 | 78.0 | 51.8 | config | model \| log |
| RF-Res2Net-50 | Panoptic FPN | Single-Branch | 1x | 44.0 | 78.7 | 53.6 | search retrain | model \| log |
| RF-Res2Net-50 | Panoptic FPN | Multiple-Branch | 1x | 44.3 | 79.0 | 53.9 | search retrain | model \| log |

## Configs

If you want to search receptive fields on an existing model, you need to define an `RFSearchHook` in the `custom_hooks` field of the config file.

```python
custom_hooks = [
    dict(
        type='RFSearchHook',
        mode='search',
        rfstructure_file=None,
        verbose=True,
        by_epoch=True,
        config=dict(
            search=dict(
                step=0,
                max_step=11,
                search_interval=1,
                exp_rate=0.5,
                init_alphas=0.01,
                mmin=1,
                mmax=24,
                num_branches=2,
                skip_layer=[]))),
]
```

Arguments:

- `max_step`: The maximum number of steps to update the structures.
- `search_interval`: The interval (in epochs) between two structure updates.
- `exp_rate`: Controls the sparsity of the search space. For a conv with an initial dilation rate of `D`, candidate dilation rates are sampled with an interval of `exp_rate * D`.
- `num_branches`: Controls the size of the search space (the number of branches). If you set `num_branches=3`, the dilations are `[D - exp_rate * D, D, D + exp_rate * D]` for three branches. If you set `num_branches=2`, the dilations are `[D - exp_rate * D, D + exp_rate * D]`. With `num_branches=2`, you can achieve similar performance with less memory and fewer FLOPs (see the sketch after this list).
- `skip_layer`: The modules listed in `skip_layer` are ignored during the receptive field search.
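
As an illustration of how `exp_rate`, `num_branches`, `mmin`, and `mmax` interact, here is a minimal sketch of how the candidate dilation set for a single conv layer could be derived. This is illustrative only, not code from the repository; the integer rounding and clipping shown here are assumptions.

```python
def candidate_dilations(d, exp_rate=0.5, num_branches=2, mmin=1, mmax=24):
    """Candidate dilation rates sampled around the current rate `d`.

    Illustrative sketch only; the actual sampling logic lives in
    `RFSearchHook`.
    """
    # Sampling interval of exp_rate * D, kept integral for conv dilations.
    delta = max(1, round(exp_rate * d))
    if num_branches == 3:
        candidates = [d - delta, d, d + delta]
    else:  # num_branches == 2 drops the center branch (less memory/FLOPs)
        candidates = [d - delta, d + delta]
    # Clip every candidate to the allowed range [mmin, mmax].
    return [min(max(c, mmin), mmax) for c in candidates]


print(candidate_dilations(4))                  # [2, 6]
print(candidate_dilations(4, num_branches=3))  # [2, 4, 6]
```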

## Training

### 1. Searching Jobs

You can launch searching jobs by using the config files prefixed with `rfnext_search`. The json files of the searched structures will be saved to `work_dir`.

If you want to further search receptive fields on top of a searched structure, set `rfsearch_cfg.rfstructure_file` in the config file to the corresponding json file.
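
For example, assuming the config exposes the hook options through the `rfsearch_cfg` dict referenced above, a further search on top of a searched structure might look like this minimal sketch (the json path is a placeholder for a file saved by a previous searching job):

```python
rfsearch_cfg = dict(
    mode='search',
    # Placeholder: a structure json produced by a previous searching job.
    rfstructure_file='work_dir/searched_structure.json')
```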

### 2. Training Jobs

By setting `rfsearch_cfg.rfstructure_file` to the searched structure file (`.json`) and setting `rfsearch_cfg.mode` to `fixed_single_branch` or `fixed_multi_branch`, you can retrain a model with the searched structure. You can launch `fixed_single_branch`/`fixed_multi_branch` training jobs by using the config files prefixed with `rfnext_fixed_single_branch` or `rfnext_fixed_multi_branch`.
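
Under the same assumption about `rfsearch_cfg`, a retraining setup might be sketched as follows (the json path is again a placeholder):

```python
rfsearch_cfg = dict(
    # Fix the searched structure instead of searching further.
    mode='fixed_multi_branch',  # or 'fixed_single_branch'
    # Placeholder: the searched structure file saved in `work_dir`.
    rfstructure_file='work_dir/searched_structure.json')
```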

Note that the model obtained after the searching stage is already a `fixed_multi_branch` version, which achieves better performance than the `fixed_single_branch` version, without any retraining.

## Inference

`rfsearch_cfg.rfstructure_file` and `rfsearch_cfg.mode` must be set for the inference stage.

Note: For models trained with mode `fixed_single_branch` or `fixed_multi_branch`, you can simply use the training config for inference. However, to run inference on a model trained with mode `search`, please use the config prefixed with `rfnext_fixed_multi_branch`. (Otherwise, set `rfsearch_cfg.mode` to `fixed_multi_branch` and set `rfstructure_file` to the searched structure file.)

## Citation

```bibtex
@article{gao2022rfnext,
  title   = {RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks},
  author  = {Gao, Shanghua and Li, Zhong-Yu and Han, Qi and Cheng, Ming-Ming and Wang, Liang},
  journal = {TPAMI},
  year    = {2022}
}

@inproceedings{gao2021global2local,
  title     = {Global2Local: Efficient Structure Search for Video Action Segmentation},
  author    = {Gao, Shanghua and Han, Qi and Li, Zhong-Yu and Peng, Pai and Wang, Liang and Cheng, Ming-Ming},
  booktitle = {CVPR},
  year      = {2021}
}
```