Code for our CVPR 2021 paper "Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection".
This code requires pytorch=0.4.1 and torchvision=0.2.1. To install:

    git clone https://github.com/SherlockHolmes221/GGNet.git
    cd GGNet
    pip install -r requirements.txt

Then compile the DCNv2 deformable-convolution module:

    cd src/lib/models/networks/DCNv2
    ./make.sh
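If the build succeeded, a single forward pass through the compiled op should run. The snippet below is a minimal sketch, not part of the repository; it assumes the extension is importable as `dcn_v2` and exposes a `DCN` layer, as in the CenterNet codebase this module comes from.

```python
# Minimal check that the DCNv2 extension compiled correctly.
# Assumption: the module layout matches the CenterNet DCNv2 package, so run
# this from src/lib/models/networks/DCNv2 (or with that directory on PYTHONPATH).
import torch
from dcn_v2 import DCN  # assumed import path; adjust if your package differs

if torch.cuda.is_available():
    layer = DCN(64, 64, kernel_size=3, stride=1, padding=1, deformable_groups=1).cuda()
    x = torch.randn(1, 64, 32, 32).cuda()
    out = layer(x)
    print('DCNv2 OK, output shape:', tuple(out.shape))
else:
    print('CUDA is required to exercise the compiled DCNv2 op.')
```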
HICO-DET: organize the images and annotations in the `Dataset` folder as follows:

    |-- Dataset/
    |   |-- <hico-det>/
    |       |-- images
    |       |   |-- test2015
    |       |   |-- train2015
    |       |-- annotations

The annotations are provided here.
V-COCO: organize the images and annotations in the `Dataset` folder as follows:

    |-- Dataset/
    |   |-- <verbcoco>/
    |       |-- images
    |       |   |-- val2014
    |       |   |-- train2014
    |       |-- annotations

The annotations are provided here.
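Before training, a quick check that the expected folders exist can save a failed run. This sketch is not part of the repository; the directory names follow the trees above (`<hico-det>` and `<verbcoco>` are placeholders), so adjust them if yours differ.

```python
import os

# Expected dataset layout relative to the repository root (an assumption
# based on the trees above; edit the folder names to match your setup).
EXPECTED = {
    'hico-det': ['images/train2015', 'images/test2015', 'annotations'],
    'verbcoco': ['images/train2014', 'images/val2014', 'annotations'],
}

for dataset, subdirs in EXPECTED.items():
    for sub in subdirs:
        path = os.path.join('Dataset', dataset, sub)
        print('{:7s} {}'.format('ok' if os.path.isdir(path) else 'MISSING', path))
```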
Download the pre-trained Hourglass104 model trained on the COCO object detection dataset, provided by CenterNet, and put it into the `models` folder.
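To confirm the downloaded weights load, the file can be opened with a plain `torch.load`. The path below is an assumption (the CenterNet Hourglass-104 detection checkpoint is commonly distributed as `ctdet_coco_hg.pth`); point it at whatever file you placed in `models`.

```python
import torch

# Hypothetical file name; change this to the checkpoint you downloaded.
ckpt_path = 'models/ctdet_coco_hg.pth'

# map_location='cpu' lets the check run on a machine without a GPU.
checkpoint = torch.load(ckpt_path, map_location='cpu')

if isinstance(checkpoint, dict):
    print('top-level keys:', list(checkpoint.keys()))
    # Many CenterNet checkpoints wrap the weights in a 'state_dict' entry.
    state = checkpoint.get('state_dict', checkpoint)
    print('parameter tensors:', len(state))
else:
    print('unexpected checkpoint type:', type(checkpoint))
```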
Training (HICO-DET and V-COCO, respectively):

    sh experiments/hico/hoidet_hico_hourglass.sh
    sh experiments/vcoco/hoidet_vcoco_hourglass.sh

Evaluation (HICO-DET and V-COCO, respectively):

    python src/lib/eval/hico_eval_de_ko.py --exp hoidet_hico_ggnet
    python src/lib/eval/vcoco_eval.py --exp hoidet_vcoco_ggnet
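For reference, HICO-DET counts a predicted human-object pair as a true positive when its interaction class matches a ground-truth pair and both the human box and the object box overlap that pair with IoU of at least 0.5. The snippet below is a simplified sketch of that matching rule, not the repository's evaluation code (which also handles the Default/Known-Object splits reported in the table below).

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def pair_matches(pred, gt, thresh=0.5):
    """A predicted HOI triplet matches a ground-truth one when the interaction
    class agrees and both human and object boxes reach the IoU threshold."""
    return (pred['hoi'] == gt['hoi']
            and box_iou(pred['human_box'], gt['human_box']) >= thresh
            and box_iou(pred['object_box'], gt['object_box']) >= thresh)
```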
Our Results on HICO-DET dataset
Model | Full (def) | Rare (def) | Non-Rare (def) | Full (ko) | Rare (ko) | Non-Rare (ko) | FPS | Download
---|---|---|---|---|---|---|---|---
hourglass104 | 23.47 | 16.48 | 25.60 | 27.36 | 20.23 | 29.48 | 9 | model

(def) and (ko) denote the Default and Known-Object evaluation settings of HICO-DET; the numbers are mAP (%).
Our Results on V-COCO dataset
Model | AP_role | Download
---|---|---
hourglass104 | 54.7 | model
Citation:

    @inproceedings{zhong2021glance,
      title={Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection},
      author={Zhong, Xubin and Qu, Xian and Ding, Changxing and Tao, Dacheng},
      booktitle={CVPR},
      year={2021}
    }