This is the official implementation of the paper "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection". (DINO is pronounced "daɪnoʊ", as in dinosaur.)
Authors: Hao Zhang*, Feng Li*, Shilong Liu*, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, Heung-Yeung Shum
[2022/7/14]: We released the code with Swin-L and ConvNeXt backbones.
[2022/7/10]: We released the code and checkpoints with the ResNet-50 backbone.
[2022/6/7]: We released a unified detection and segmentation model, Mask DINO, that achieves the best results on all three segmentation tasks (54.7 AP on the COCO instance leaderboard, 59.5 PQ on the COCO panoptic leaderboard, and 60.8 mIoU on the ADE20K semantic leaderboard)! Code will be available here.
[2022/5/28]: Code for DN-DETR is available here.
[2022/4/10]: Code for DAB-DETR is available here.
[2022/3/9]: We built a repo, Awesome Detection Transformer, to collect papers on transformers for detection and segmentation. Feel free to check it out!
[2022/3/8]: We reached the SOTA on the MS-COCO leaderboard with 63.3 AP!
We present DINO (DETR with Improved deNoising anchOr boxes) with:
- State-of-the-art & end-to-end: DINO achieves 63.2 AP on COCO val and 63.3 AP on COCO test-dev with a model size and data size more than ten times smaller than those of the previous best models.
- Fast-converging: With a ResNet-50 backbone, DINO with 5 scales achieves 49.4 AP in 12 epochs and 51.3 AP in 24 epochs. Our 4-scale model achieves similar performance and runs at 23 FPS.
We have put our model checkpoints here: [model zoo in Google Drive] / [model zoo in Baidu Netdisk] (extraction code "DINO"), where checkpoint{x}_{y}scale.pth denotes the checkpoint of the y-scale model trained for x epochs.
12-epoch setting:

| | name | backbone | box AP | Checkpoint | Where in Our Paper |
|---|---|---|---|---|---|
| 1 | DINO-4scale | R50 | 49.0 | Google Drive / BaiDu | Table 1 |
| 2 | DINO-5scale | R50 | 49.4 | Google Drive / BaiDu | Table 1 |
24-epoch setting:

| | name | backbone | box AP | Checkpoint | Where in Our Paper |
|---|---|---|---|---|---|
| 1 | DINO-4scale | R50 | 50.4 | Google Drive / BaiDu | Table 2 |
| 2 | DINO-5scale | R50 | 51.3 | Google Drive / BaiDu | Table 2 |
36-epoch setting:

| | name | backbone | box AP | Checkpoint | Where in Our Paper |
|---|---|---|---|---|---|
| 1 | DINO-4scale | R50 | 50.9 | Google Drive / BaiDu | Table 2 |
| 2 | DINO-5scale | R50 | 51.2 | Google Drive / BaiDu | Table 2 |
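After downloading, you can quickly peek inside a checkpoint file to confirm which epoch it corresponds to. A minimal sketch; the 'model' and 'epoch' keys are assumptions based on how DETR-family training scripts typically save state:

```python
import torch

# load on CPU so no GPU is needed just to inspect the file
ckpt = torch.load("checkpoint0011_4scale.pth", map_location="cpu")

print(list(ckpt.keys()))  # typically includes 'model' and 'epoch' (assumption)
print(ckpt.get("epoch"))  # for checkpoint0011_*, expect epoch index 11
```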
We use the same environment as DAB-DETR and DN-DETR to run DINO. If you have already run DN-DETR or DAB-DETR, you can skip this step.
We test our models under python=3.7.3, pytorch=1.9.0, cuda=11.1. Other versions may work as well.
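A minimal sanity check of your environment, using only plain PyTorch:

```python
import torch

# print the PyTorch version, the CUDA version it was built with,
# and whether a GPU is visible to this process
print(torch.__version__)          # expect something like 1.9.0
print(torch.version.cuda)         # expect something like 11.1
print(torch.cuda.is_available())  # should be True for GPU training
```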
- Clone this repo
```sh
git clone https://github.com/IDEACVR/DINO.git
cd DINO
```
- Install PyTorch and torchvision
Follow the instructions at https://pytorch.org/get-started/locally/.
```sh
# an example:
conda install -c pytorch pytorch torchvision
```
- Install other needed packages
```sh
pip install -r requirements.txt
```
- Compile CUDA operators
```sh
cd models/dino/ops
python setup.py build install
# unit test (you should see that all checks are True)
python test.py
cd ../../..
```
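After building, a quick import check can confirm the compiled extension is visible from Python. The module path below is an assumption based on the usual layout of Deformable-DETR-style ops directories like the one compiled above; adjust it if this repo organizes the modules differently:

```python
# importing the attention module pulls in the compiled CUDA extension,
# so this import fails if the build step above did not succeed
# (module path is an assumption; adjust to the repo's actual layout)
from models.dino.ops.modules import MSDeformAttn

print("CUDA operators imported OK")
```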
Please download the COCO 2017 dataset and organize it as follows:
```
COCODIR/
├── train2017/
├── val2017/
└── annotations/
    ├── instances_train2017.json
    └── instances_val2017.json
```
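Before launching training, you can verify the layout with a few lines of plain Python (the COCODIR placeholder matches the commands below):

```python
from pathlib import Path

coco = Path("/path/to/your/COCODIR")  # same placeholder as in the scripts below
expected = [
    "train2017",
    "val2017",
    "annotations/instances_train2017.json",
    "annotations/instances_val2017.json",
]
for rel in expected:
    status = "OK     " if (coco / rel).exists() else "MISSING"
    print(status, coco / rel)
```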
We use the DINO 4-scale model trained for 12 epochs as an example to demonstrate how to evaluate and train our model.
Download our DINO model checkpoint "checkpoint0011_4scale.pth" from this link and run the command below. You should get a final AP of about 49.0.
```sh
bash scripts/DINO_eval.sh /path/to/your/COCODIR /path/to/your/checkpoint
```
For inference and visualizations, we provide a notebook as an example.
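If you prefer a plain script over the notebook, the sketch below shows the rough shape of loading a released checkpoint for inference. The helper names (build_model_main, SLConfig) and the 'model' key are assumptions modeled on common DETR-family repo layouts; the notebook remains the authoritative reference:

```python
import torch

from main import build_model_main   # assumed helper exposed by main.py
from util.slconfig import SLConfig  # assumed config loader

# assumed config path for the 4-scale model
args = SLConfig.fromfile("config/DINO/DINO_4scale.py")
args.device = "cuda"

model, criterion, postprocessors = build_model_main(args)

ckpt = torch.load("checkpoint0011_4scale.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])  # 'model' key is an assumption
model.eval()
```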
You can also train our model on a single process:
```sh
bash scripts/DINO_train.sh /path/to/your/COCODIR
```
However, as training is time-consuming, we suggest training the model on multiple devices.
If you plan to train the models on a cluster with Slurm, here is an example command for training:
```sh
# for DINO-4scale: 49.0
bash scripts/DINO_train_submitit.sh /path/to/your/COCODIR
# for DINO-5scale: 49.4
bash scripts/DINO_train_submitit_5scale.sh /path/to/your/COCODIR
```
Notes: The results are sensitive to the batch size. We use a total batch size of 16 (2 images per GPU × 8 GPUs for DINO-4scale, and 1 image per GPU × 16 GPUs for DINO-5scale) by default.
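Since the total batch size is images per GPU × number of GPUs, it can help to check how many GPUs are visible before choosing a per-GPU batch size; a one-liner in plain PyTorch:

```python
import torch

# the defaults above assume a total batch size of 16
print("visible GPUs:", torch.cuda.device_count())
```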
Or run with multiple processes on a single node:
```sh
# for DINO-4scale: 49.0
bash scripts/DINO_train_dist.sh /path/to/your/COCODIR
```
Our model is based on DAB-DETR and DN-DETR.
DN-DETR: Accelerate DETR Training by Introducing Query DeNoising.
Feng Li*, Hao Zhang*, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2022.
[paper] [code] [explanation in Chinese]
DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR.
Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang.
International Conference on Learning Representations (ICLR) 2022.
[paper] [code]
We also thank the great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, etc. More related work is available at Awesome Detection Transformer.
DINO is released under the Apache 2.0 license. Please see the LICENSE file for more information.
Copyright (c) IDEA. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
If you find our work helpful for your research, please consider citing the following BibTeX entries.
```bibtex
@misc{zhang2022dino,
  title={{DINO}: {DETR} with Improved DeNoising Anchor Boxes for End-to-End Object Detection},
  author={Hao Zhang and Feng Li and Shilong Liu and Lei Zhang and Hang Su and Jun Zhu and Lionel M. Ni and Heung-Yeung Shum},
  year={2022},
  eprint={2203.03605},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@inproceedings{li2022dn,
  title={{DN-DETR}: Accelerate {DETR} Training by Introducing Query DeNoising},
  author={Li, Feng and Zhang, Hao and Liu, Shilong and Guo, Jian and Ni, Lionel M. and Zhang, Lei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13619--13627},
  year={2022}
}

@inproceedings{liu2022dabdetr,
  title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}},
  author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=oMI9PjOb9Jl}
}
```