This is the official implementation of the paper "DETRs with Hybrid Matching".
Authors: Ding Jia, Yuhui Yuan, Haodi He, Xiaopei Wu, Haojun Yu, Weihong Lin, Lei Sun, Chao Zhang, Han Hu
If you find H-PETR-Pose useful in your research, please consider citing:
```
@article{jia2022detrs,
  title={DETRs with Hybrid Matching},
  author={Jia, Ding and Yuan, Yuhui and He, Haodi and Wu, Xiaopei and Yu, Haojun and Lin, Weihong and Sun, Lei and Zhang, Chao and Hu, Han},
  journal={arXiv preprint arXiv:2207.13080},
  year={2022}
}

@inproceedings{shi2022end,
  title={End-to-End Multi-Person Pose Estimation With Transformers},
  author={Shi, Dahu and Wei, Xing and Li, Liangqi and Ren, Ye and Tan, Wenming},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11069--11078},
  year={2022}
}
```
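The core idea of hybrid matching is to keep the standard one-to-one Hungarian assignment for inference-time predictions while adding an auxiliary one-to-many branch during training, where each ground truth is assigned to its k lowest-cost queries. A minimal, illustrative sketch of the two assignment rules (all function names here are ours, not the repo's actual API):

```python
# Illustrative sketch of the two matching rules used in hybrid matching.
# The brute-force Hungarian solver below is only for tiny cost matrices;
# the actual implementation uses an efficient assignment solver.
from itertools import permutations


def one_to_one_match(cost):
    """Exhaustive Hungarian-style assignment for a small cost matrix.

    cost[i][j] is the matching cost between ground truth i and query j.
    Returns a list where entry i is the query assigned to ground truth i.
    """
    n_gt, n_query = len(cost), len(cost[0])
    best, best_cost = None, float("inf")
    for perm in permutations(range(n_query), n_gt):
        total = sum(cost[i][q] for i, q in enumerate(perm))
        if total < best_cost:
            best, best_cost = list(perm), total
    return best


def one_to_many_match(cost, k):
    """Auxiliary branch: assign each ground truth to its k cheapest queries."""
    return [sorted(range(len(row)), key=lambda j: row[j])[:k] for row in cost]


cost = [
    [0.1, 0.9, 0.3, 0.8],  # ground truth 0
    [0.7, 0.2, 0.4, 0.6],  # ground truth 1
]
print(one_to_one_match(cost))      # -> [0, 1]
print(one_to_many_match(cost, 2))  # -> [[0, 2], [1, 2]]
```

The one-to-many branch supervises extra queries with additional positive samples during training only; at inference, the model uses the one-to-one branch, so end-to-end NMS-free prediction is preserved.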
We provide a set of baseline results and trained models for download:
Name | Backbone | epochs | AP (Reproduced / Reported) | download |
---|---|---|---|---|
Deformable-DETR | R50 | 100 | 69.3 / 68.8 | model |
Deformable-DETR | R101 | 100 | 69.9 / 70.0 | model |
Deformable-DETR | Swin Large | 100 | 73.3 / 73.1 | model |
H-Deformable-DETR | R50 | 100 | 70.9 | model |
H-Deformable-DETR | R101 | 100 | 71.0 | model |
H-Deformable-DETR | Swin Large | 100 | 74.9 | model |
- We use 8 V100 GPUs and `batch_size = 8` for all experiments.
- We tune the `droppath` rate of the Swin Large backbone from `0.3` to `0.5` for both the baseline and our method.
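The drop-path change above would typically be made as a backbone override in an mmcv/mmdet-style config. A hedged sketch (the exact key name is an assumption; verify against this repo's Swin Large config file):

```python
# Illustrative mmcv-style config fragment raising the stochastic-depth
# (drop-path) rate of the Swin backbone. Key names are assumptions;
# check the repo's actual Swin Large config for the real field.
model = dict(
    backbone=dict(
        drop_path_rate=0.5,  # tuned up from the 0.3 default for Swin Large
    )
)
```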
We test our models under `python=3.7.10`, `pytorch=1.10.1`, `cuda=10.2`. Other versions may work as well.
Please follow get_started.md to install the repo.
To train a model with 8 GPUs:

```shell
bash ./tools/dist_train.sh <config_path> 8
```

To evaluate a trained model on keypoints:

```shell
bash ./tools/dist_test.sh <config_path> <checkpoint_path> 8 --eval keypoints
```
The modified files are as follows:

- `opera/models/dense_heads/petr_head.py`
- `opera/models/dense_heads/__init__.py`
- `opera/models/utils/transformer.py`
- `opera/models/utils/__init__.py`