# Scale-Equalizing Pyramid Convolution for Object Detection

By Xinjiang Wang\*, Shilong Zhang\*, Zhuoran Yu, Litong Feng, Wei Zhang.

## Introduction

Feature pyramid has been an efficient method to extract features at different scales. Development over this method mainly focuses on aggregating contextual information at different levels while seldom touching the inter-level correlation in the feature pyramid. Early computer vision methods extracted scale-invariant features by locating the feature extrema in both spatial and scale dimension. Inspired by this, a convolution across the pyramid level is proposed in this study, which is termed pyramid convolution and is a modified 3-D convolution. Stacked pyramid convolutions directly extract 3-D (scale and spatial) features and outperform other meticulously designed feature fusion modules. Based on the viewpoint of 3-D convolution, an integrated batch normalization that collects statistics from the whole feature pyramid is naturally inserted after the pyramid convolution. Furthermore, we also show that the naive pyramid convolution, together with the design of RetinaNet head, actually best applies for extracting features from a Gaussian pyramid, whose properties can hardly be satisfied by a feature pyramid. In order to alleviate this discrepancy, we build a scale-equalizing pyramid convolution (SEPC) that aligns the shared pyramid convolution kernel only at high-level feature maps. Being computationally efficient and compatible with the head design of most single-stage object detectors, the SEPC module brings significant performance improvement (> 4 AP increase on MS-COCO2017 dataset) in state-of-the-art one-stage object detectors, and a light version of SEPC also has ∼ 3.5 AP gain with only around 7% inference time increase. The pyramid convolution also functions well as a stand-alone module in two-stage object detectors and is able to improve the performance by ∼ 2 AP.
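To illustrate the core idea, the following is a minimal PyTorch sketch of a pyramid convolution: a 3-D convolution across the scale dimension, realized as three shared 2-D convolutions whose outputs from neighboring levels are aligned (stride-2 for the finer level, upsampling for the coarser level) and summed. The class name `PyramidConv` and the exact alignment details are our own simplification, not the repository's implementation; see the `sepc` directory for the real module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidConv(nn.Module):
    """Simplified pyramid convolution (a 3-D conv over scale + space).

    For each pyramid level l, the output combines:
      - a stride-2 conv on the finer level l-1 (downsampled onto level l),
      - a conv on level l itself,
      - a conv on the coarser level l+1, bilinearly upsampled back to level l.
    """
    def __init__(self, channels):
        super().__init__()
        self.conv_down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.conv_same = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.conv_up = nn.Conv2d(channels, channels, 3, stride=1, padding=1)

    def forward(self, feats):
        # feats: list of feature maps, finest (largest) first.
        outs = []
        for l, x in enumerate(feats):
            y = self.conv_same(x)
            if l > 0:
                # Finer neighbor: stride-2 conv halves the spatial size.
                y = y + self.conv_down(feats[l - 1])
            if l < len(feats) - 1:
                # Coarser neighbor: conv, then upsample to this level's size.
                y = y + F.interpolate(self.conv_up(feats[l + 1]),
                                      size=x.shape[-2:], mode='bilinear',
                                      align_corners=False)
            outs.append(y)
        return outs
```

Stacking a few of these layers gives the "stacked pyramid convolutions" of the abstract; the integrated batch normalization would then compute its statistics over all levels' outputs jointly rather than per level.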

## Get Started

You need to install mmdetection (version 1.1.0 with mmcv 0.4.3) first. All our self-defined modules are in the `sepc` directory, which has the same folder organization as mmdetection. You can start your experiments with our modified `train.py` in `sepc/tools`, or run inference with our model using `test.py` in `sepc/tools`. More guidance can be found in the mmdetection documentation.
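The setup might look like the following sketch. The mmdetection install steps follow that project's v1.x instructions; the config path passed to the scripts is illustrative, since the exact config names depend on this repository's `configs` layout.

```shell
# Install the pinned dependencies (versions from the instructions above).
pip install mmcv==0.4.3
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
git checkout v1.1.0
pip install -v -e .

# Train / evaluate with SEPC's modified scripts (config path is hypothetical).
python sepc/tools/train.py <your_sepc_config>.py
python sepc/tools/test.py <your_sepc_config>.py <checkpoint>.pth --eval bbox
```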

## Models

The results on COCO 2017 val are shown in the table below.

| Method | Backbone | Add modules | Lr schd | box AP | Download |
|---|---|---|---|---|---|
| FreeAnchor | R-50-FPN | Pconv | 1x | 39.7 | model |
| FreeAnchor | R-50-FPN | IBN+Pconv | 1x | 41.0 | model |
| FreeAnchor | R-50-FPN | SEPC-lite | 1x | 41.9 | model |
| FreeAnchor | R-50-FPN | SEPC | 1x | 43.0 | model |
| FSAF | R-50-FPN | baseline | 1x | 36.8 | model |
| FSAF | R-50-FPN | Pconv | 1x | 38.6 | model |
| FSAF | R-50-FPN | IBN+Pconv | 1x | 39.1 | model |
| FSAF | R-50-FPN | SEPC-lite | 1x | 40.5 | model |
| FSAF | R-50-FPN | SEPC | 1x | 41.6 | model |
| RetinaNet | R-50-FPN | Pconv | 1x | 37.0 | model |
| RetinaNet | R-50-FPN | IBN+Pconv | 1x | 37.8 | model |
| RetinaNet | R-50-FPN | SEPC-lite | 1x | 38.5 | model |
| RetinaNet | R-50-FPN | SEPC | 1x | 39.6 | model |

## Citations

Please cite our paper in your publications if it helps your research:

```
@InProceedings{Wang_2020_CVPR,
  author    = {Wang, Xinjiang and Zhang, Shilong and Yu, Zhuoran and Feng, Litong and Zhang, Wayne},
  title     = {Scale-Equalizing Pyramid Convolution for Object Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}
```