
Multiple Anchor Learning (MAL)

This is the official implementation of the paper:

Wei Ke, Tianliang Zhang, Zeyi Huang, Qixiang Ye, Jianzhuang Liu and Dong Huang, Multiple Anchor Learning for Visual Object Detection, CVPR, 2020, PDF


Citation:

@inproceedings{kehuang2020,
  title={Multiple Anchor Learning for Visual Object Detection},
  author={Wei Ke and Tianliang Zhang and Zeyi Huang and Qixiang Ye and Jianzhuang Liu and Dong Huang},
  booktitle={CVPR},
  year={2020}
}

This repo includes the basic training and inference pipeline based on maskrcnn-benchmark.

For MAL fast inference, please refer to MAL-inference.

For the MAL detection & tracking extension, please refer to MAL-inference-deepsort.

1. Installation

Requirements:

  • Python3
  • PyTorch 1.1 with CUDA support
  • torchvision 0.2.1
  • pycocotools
  • yacs
  • matplotlib
  • GCC >= 4.9
  • (optional) OpenCV for the webcam demo

Step-by-step installation

# first, make sure that your conda is set up properly with the right environment
# for that, check that `which conda`, `which pip` and `which python` point to the
# right paths. From a clean conda env, this is what you need to do

conda create --name free_anchor python=3.7
conda activate free_anchor

# this installs the right pip and dependencies for the fresh python
conda install ipython

# maskrnn_benchmark and coco api dependencies
pip install ninja yacs cython matplotlib tqdm

# pytorch and torchvision
# we give the instructions for CUDA 10.0
conda install pytorch=1.1 torchvision=0.2.1 cudatoolkit=10.0 -c pytorch

# install pycocotools
pip install pycocotools

# install MAL (check out the `finetune` branch inside the cloned repo)
git clone https://github.com/DeLightCMU/MAL.git
cd MAL
git checkout finetune

# the following will install the lib with
# symbolic links, so that you can modify
# the files if you want and won't need to
# re-build it
bash build_maskrcnn.sh

2. Running

Before training or inference, modify maskrcnn_benchmark/config/paths_catalog.py to point to the location where your datasets are stored.
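As a sketch of what that modification looks like, a dataset entry in paths_catalog.py follows the stock maskrcnn-benchmark DatasetCatalog layout. The dataset names and paths below are hypothetical placeholders, and the get() method is simplified for illustration:

```python
import os

# Hypothetical excerpt of maskrcnn_benchmark/config/paths_catalog.py.
# All dataset names and paths are placeholders for your own data.
class DatasetCatalog(object):
    DATA_DIR = "datasets"
    DATASETS = {
        "coco_2017_train": {
            "img_dir": "coco/train2017",
            "ann_file": "coco/annotations/instances_train2017.json",
        },
        # add your own dataset here, e.g. a VisDrone split in COCO format
        "visdrone_train_coco": {
            "img_dir": "visdrone/images/train",
            "ann_file": "visdrone/annotations/train_coco.json",
        },
    }

    @staticmethod
    def get(name):
        # resolve a dataset name to paths rooted at DATA_DIR
        # (simplified: the real method also returns a dataset factory name)
        attrs = DatasetCatalog.DATASETS[name]
        return dict(
            img_dir=os.path.join(DatasetCatalog.DATA_DIR, attrs["img_dir"]),
            ann_file=os.path.join(DatasetCatalog.DATA_DIR, attrs["ann_file"]),
        )
```

The dataset name you register here (e.g. "visdrone_train_coco") is what you reference from the DATASETS field of your config YAML.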

Downloading Pre-trained COCO models

We provide the following MAL models pre-trained on COCO2017.

| Config File | Backbone | test-dev mAP (single-scale) | pth models |
| --- | --- | --- | --- |
| configs/mal_R-50-FPN | ResNet-50-FPN | 39.2 | download |
| configs/mal_R-101-FPN | ResNet-101-FPN | 43.6 | download |
| configs/mal_X-101-FPN | ResNeXt-101-FPN | 45.9 | download |

Fine-tuning from COCO models

cd MAL
CUDA_VISIBLE_DEVICES=1,2,3,4 python -m torch.distributed.launch --nproc_per_node=4 tools/train_net.py --config-file ./configs/MAL_R-50-FPN_e2e.yaml SOLVER.IMS_PER_BATCH 4 MODEL.WEIGHT path_to_pretrained_model

Generating COCO format labels

cd MAL/tools
python transfer_to_coco_json_visdrone.py
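The conversion script above is specific to VisDrone, but the target is a plain COCO-style JSON label file with three top-level lists. A minimal sketch of that structure (all ids, file names, and boxes below are illustrative placeholders, not values produced by the script):

```python
import json

# Minimal COCO-format label file: "images", "annotations", "categories".
# Every value here is an illustrative placeholder.
coco = {
    "images": [
        {"id": 1, "file_name": "0000001.jpg", "width": 1360, "height": 765},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100.0, 50.0, 40.0, 80.0],  # [x, y, width, height]
            "area": 40.0 * 80.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "pedestrian"},
    ],
}

# write the annotation file that paths_catalog.py would point at
with open("train_coco.json", "w") as f:
    json.dump(coco, f)
```

Note that COCO boxes are [x, y, width, height] in pixels, not [x1, y1, x2, y2]; getting this wrong is a common source of silently bad training labels.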
