
First, prepare the dataset and pre-trained models as described here.
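If you have not set up KITTI before, the standard public object-detection layout looks like the sketch below; this is only a reference, so defer to the linked preparation guide for the exact structure EgoNet expects:

```
${KITTI_DIR}/
├── training/
│   ├── image_2/   # left color images
│   ├── label_2/   # ground-truth annotations
│   └── calib/     # camera calibration files
└── testing/
    ├── image_2/
    └── calib/
```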

## Reproduce D4LCN + EgoNet on the val split

You need to modify the directories in the config file:

```bash
cd ${EgoNet_DIR}/configs && vim KITTI_inference:demo.yml
```

- Edit `dirs:output` to where you want to save the predictions.
- Edit `dirs:ckpt` to your pre-trained model directory.
- Edit `dataset:root` to your KITTI directory (see the YAML sketch after this list).
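In YAML, these nested keys might look like the following sketch; all paths are hypothetical placeholders to replace with your own:

```yaml
dirs:
  output: '/path/to/save/predictions'   # hypothetical: where predictions are written
  ckpt: '/path/to/pretrained/models'    # hypothetical: downloaded EgoNet checkpoints
dataset:
  root: '/path/to/KITTI'                # hypothetical: your KITTI root directory
```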

Finally, go to `${EgoNet_DIR}/tools` and run:

```bash
python inference.py --cfg "../configs/KITTI_inference:demo.yml"
```

This will load the D4LCN predictions, refine their vehicle orientation estimates, and save the results. The official evaluation program then runs automatically to produce quantitative performance numbers.

## Reproduce results on the test split

Again, modify the directories in the config file:

```bash
cd ${EgoNet_DIR}/configs && vim KITTI_inference:test_submission.yml
```

- Edit `dirs:output` to where you want to save the predictions.
- Edit `dirs:ckpt` to your pre-trained model directory.
- Edit `dataset:root` to your KITTI directory.

These are the same three fields as in the val config above.

Finally, go to `${EgoNet_DIR}/tools` and run:

```bash
python inference.py --cfg "../configs/KITTI_inference:test_submission.yml"
```

This will load the prepared 2D bounding boxes, predict the vehicle orientations, and save the predictions.

Now you can zip the results and submit them to the official server!
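KITTI's server expects a zip archive of the per-image result files. Assuming the predictions were written as `.txt` files under a `data/` subfolder of your output directory (the usual KITTI submission layout; check your actual output structure), packaging could look like:

```bash
# Hypothetical path: use whatever you set dirs:output to.
cd /path/to/save/predictions
zip -r submission.zip data/
```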

You can hit 91.23% AOS in the moderate setting! This is the most important metric for joint vehicle detection and pose estimation on KITTI, and it is achieved with a single RGB image and no extra training data.