This repo stores the code used in the paper "Occlusion-Robust Object Pose Estimation with Holistic Representation" (WACV 2022).
Our system environment is provided in environment.yaml for reference.
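Assuming environment.yaml is a conda environment spec, the environment can be recreated with:
conda env create -f environment.yaml
conda activate <env_name_in_environment.yaml>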
The Linemod (lm), Linemod-Occluded (lmo) and YCB-Video (ycbv) datasets can be downloaded from the BOP website. The paths to the datasets should then be specified in the cfg.yaml file.
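For illustration only, the dataset path entries in cfg.yaml might look like the sketch below; the key names here are hypothetical, so use the keys actually defined in the repo's cfg.yaml:
dataset_paths:   # hypothetical key names, for illustration only
  lm: /path/to/bop/lm
  lmo: /path/to/bop/lmo
  ycbv: /path/to/bop/ycbv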
For better initialisation, the pretrained HRNet backbone weights can be downloaded from here.
To train for the lm test set in distributed mode:
python -m torch.distributed.launch --nproc_per_node=<num_gpus_to_use> --use_env main_lm.py --cfg cfg.yaml --obj duck --log_name <name_this_experiment>
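For example, an illustrative 4-GPU run on the duck object (the log name here is arbitrary):
python -m torch.distributed.launch --nproc_per_node=4 --use_env main_lm.py --cfg cfg.yaml --obj duck --log_name lm_duck_run1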
To train for the lmo test set in single-GPU mode:
CUDA_VISIBLE_DEVICES=<which_gpu> python main_lmo.py --cfg cfg.yaml --obj ape --log_name <name_this_experiment>
To load a trained model and test it on the lmo dataset:
CUDA_VISIBLE_DEVICES=<which_gpu> python main_lmo.py --cfg cfg.yaml --obj cat --log_name <which_experiment_to_load> --resume --test-only
To train for the ycbv test set for object 01:
python -m torch.distributed.launch --nproc_per_node=<num_gpus_to_use> --use_env main_ycbv.py --cfg cfg.yaml --obj 01 --log_name <name_this_experiment>
To compute the AUC for a ycbv test result for object 20:
python analysis.py --cfg cfg.yaml --log_name <which_experiment_to_load> --obj 20
If you find this code useful, please cite:
@inproceedings{chen2022occlusion,
  author    = {Chen, Bo and Chin, Tat-Jun and Klimavicius, Marius},
  title     = {Occlusion-Robust Object Pose Estimation with Holistic Representation},
  booktitle = {WACV},
  year      = {2022}
}