This repository is an official implementation of Anchor DETR. We encode anchor points as the object queries in DETR, and attach multiple patterns to each anchor point to address the "one region, multiple objects" difficulty. We also propose an attention variant, Row-Column Decoupled Attention (RCDA), to reduce the memory cost of attention over high-resolution features.
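Below is a minimal PyTorch sketch of these two ideas, for illustration only. It is not the repository's code: `AnchorQueryGenerator`, `rcda`, and all parameter names are assumptions, and details such as the exact anchor encoding and how the row/column keys are produced differ in the actual implementation.

```python
import torch
import torch.nn as nn

class AnchorQueryGenerator(nn.Module):
    """Sketch: encode a grid of anchor points as object queries, and attach
    several learned "patterns" to each anchor so that one position can
    predict multiple objects ("one region, multiple objects")."""
    def __init__(self, anchors_per_side=10, num_patterns=3, hidden_dim=256):
        super().__init__()
        # Uniform grid of anchor points in [0, 1] x [0, 1].
        xs = torch.linspace(0.05, 0.95, anchors_per_side)
        gy, gx = torch.meshgrid(xs, xs, indexing="ij")
        self.register_buffer("anchors", torch.stack([gx, gy], -1).reshape(-1, 2))
        # Small MLP mapping an (x, y) anchor point to a query embedding
        # (the exact encoding here is illustrative, not the paper's).
        self.encode = nn.Sequential(
            nn.Linear(2, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim))
        # One learned embedding per pattern, shared by all anchors.
        self.patterns = nn.Embedding(num_patterns, hidden_dim)

    def forward(self):
        a = self.encode(self.anchors)                     # (A, D)
        q = a[:, None] + self.patterns.weight[None]       # (A, P, D)
        return q.flatten(0, 1), self.anchors              # (A*P, D) object queries

def rcda(q_row, q_col, k_row, k_col, v):
    """Sketch of Row-Column Decoupled Attention: attend over the W and H
    axes separately instead of forming a full (H*W)-sized attention map,
    which is what saves memory on high-resolution features.
    q_row, q_col: (Nq, D); k_row: (W, D); k_col: (H, D); v: (H, W, D).
    In the paper, the row/column keys come from 1D pooling of the 2D key map."""
    d = q_row.shape[-1]
    a_row = torch.softmax(q_row @ k_row.t() / d ** 0.5, dim=-1)  # (Nq, W)
    a_col = torch.softmax(q_col @ k_col.t() / d ** 0.5, dim=-1)  # (Nq, H)
    z = torch.einsum("qw,hwd->qhd", a_row, v)                    # reduce W axis
    return torch.einsum("qh,qhd->qd", a_col, z)                  # reduce H axis -> (Nq, D)

queries, anchors = AnchorQueryGenerator()()  # 100 anchors x 3 patterns = 300 queries
print(queries.shape)                         # torch.Size([300, 256])
```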
name | feature | epochs | AP | GFLOPs | Infer Speed (FPS) |
---|---|---|---|---|---|
DETR | DC5 | 500 | 43.3 | 187 | 10 (12) |
SMCA | multi-level | 50 | 43.7 | 152 | 10 |
Deformable DETR | multi-level | 50 | 43.8 | 173 | 15 |
Conditional DETR | DC5 | 50 | 43.8 | 195 | 10 |
Anchor DETR | DC5 | 50 | 44.3 | 172 | 16 (19) |
Note:
- The results are based on ResNet-50 backbone.
- Inference speeds are measured on NVIDIA Tesla V100 GPU.
- The values in parentheses in the Infer Speed column are the speeds with torchscript optimization.
name | backbone | AP | URL |
---|---|---|---|
AnchorDETR-C5 | R50 | 42.1 | model / log |
AnchorDETR-DC5 | R50 | 44.3 | model / log |
AnchorDETR-C5 | R101 | 43.5 | model / log |
AnchorDETR-DC5 | R101 | 45.1 | model / log |
Note: the models and logs are also available at Baidu Netdisk with the extraction code `hh13`.
First, clone the repository locally:
```
git clone https://github.com/megvii-research/AnchorDETR.git
```
Then, install the dependencies:
```
pip install -r requirements.txt
```
To train AnchorDETR on a single node with 8 GPUs:
```
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path /path/to/coco
```
To evaluate AnchorDETR on a single node with 8 GPUs:
```
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --eval --coco_path /path/to/coco --resume /path/to/checkpoint.pth
```
To evaluate AnchorDETR with a single GPU:
```
python main.py --eval --coco_path /path/to/coco --resume /path/to/checkpoint.pth
```
If you find this project useful for your research, please consider citing the paper.
```
@inproceedings{wang2022anchor,
  title={Anchor {DETR}: Query Design for Transformer-Based Detector},
  author={Wang, Yingming and Zhang, Xiangyu and Yang, Tong and Sun, Jian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={36},
  number={3},
  pages={2567--2575},
  year={2022}
}
```
If you have any questions, feel free to open an issue or contact us at wangyingming@megvii.com.