Official Repository for "MonoWAD: Weather-Adaptive Diffusion Model for Robust Monocular 3D Object Detection".
Create MonoWAD environment:
git clone https://github.com/VisualAIKHU/MonoWAD.git
cd MonoWAD
conda create -n monowad python=3.10
conda activate monowad
Install PyTorch and requirements, then build:
# We adopt torch 2.0.1
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
pip install -r requirements.txt
./make.sh
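A quick sanity check after the build (a minimal sketch; CUDA availability depends on your local driver and toolkit):

import torch
print(torch.__version__)          # expected: 2.0.1 (with a +cu suffix for CUDA builds)
print(torch.cuda.is_available())  # should print True for GPU training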
Please download the official KITTI dataset. You can also download our Foggy KITTI dataset with different fog densities.
Foggy KITTI dataset:
- Foggy 0.1 (used in the main paper)
- Foggy 0.05
- Foggy 0.15
- Foggy 0.30
- Foggy test
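For reference, synthetic fog of this kind is commonly rendered with the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t(x) = exp(−β·d(x)) where β is the fog density and d(x) the per-pixel depth. The sketch below is hypothetical (the function name, airlight value, and exact procedure are assumptions; the released Foggy KITTI images may have been generated differently):

import numpy as np

def add_fog(image, depth, beta=0.1, airlight=0.8):
    """Render synthetic fog with the atmospheric scattering model (assumed model).

    image:    HxWx3 float array in [0, 1], the clear image J(x)
    depth:    HxW float array of per-pixel depth in meters, d(x)
    beta:     fog density (e.g., 0.05 / 0.1 / 0.15 / 0.30)
    airlight: global atmospheric light A (assumed constant)
    """
    t = np.exp(-beta * depth)   # transmission t(x) = exp(-beta * d(x))
    t = t[..., None]            # broadcast over the RGB channels
    return image * t + airlight * (1.0 - t)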
Directory structure:
#MonoWAD_ROOT
  |data/
    |KITTI/
      |object/
        |training/
          |calib/
          |foggy_2/   # adverse weather images
          |origin_2/  # clear images
          |label_2/
          |velodyne/
        |testing/
          |calib/
          |image_2/
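A small helper to verify the layout before preprocessing (a hypothetical script; the folder names follow the tree above):

from pathlib import Path

ROOT = Path("data/KITTI/object")
REQUIRED = [
    "training/calib", "training/foggy_2", "training/origin_2",
    "training/label_2", "training/velodyne",
    "testing/calib", "testing/image_2",
]

missing = [d for d in REQUIRED if not (ROOT / d).is_dir()]
print("Layout OK" if not missing else f"Missing: {missing}")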
Data preprocessing:
./launchers/det_precompute.sh config/config.py train
python scripts/depth_gt_compute.py --config=config/config.py
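For reference, KITTI calibration files are plain text; the camera-2 projection matrix P2 can be read as below (a minimal sketch independent of the repo's own loaders, which may differ):

import numpy as np

def load_P2(calib_path):
    """Read the 3x4 camera-2 projection matrix from a KITTI calib file."""
    with open(calib_path) as f:
        for line in f:
            if line.startswith("P2:"):
                return np.array(line.split()[1:], dtype=np.float32).reshape(3, 4)
    raise ValueError("P2 not found in " + calib_path)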
You can modify the model and training settings in config/config.py.
# You can modify the GPU_DEVICE (0 is default).
./train.sh 0 MonoWAD_train_val
We provide pre-trained models; place them in './workdirs/MonoWAD/checkpoint/'.
# Usage: ./test.sh GPU_DEVICE WEIGHT_NAME TEST_WEATHER (defaults: 0 | MonoWAD_3D_latest | clear).
./test.sh 0 latest clear
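To evaluate under adverse weather, change the TEST_WEATHER argument (assuming foggy is the accepted keyword; check test.sh for the exact options):
./test.sh 0 latest foggy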
If you use MonoWAD, please consider citing:
@article{oh2024monowad,
title={MonoWAD: Weather-Adaptive Diffusion Model for Robust Monocular 3D Object Detection},
author={Oh, Youngmin and Kim, Hyung-Il and Kim, Seong Tae and Kim, Jung Uk},
journal={arXiv preprint arXiv:2407.16448},
year={2024}
}
Our code benefits from the excellent visualDet3D, MonoDTR, and denoising-diffusion-pytorch.