
Radar-Diffusion: Towards Dense and Accurate Radar Perception Via Efficient Cross-modal Diffusion Model

News

  • 25 June 2024: Paper accepted by IEEE Robotics and Automation Letters (RA-L)!
  • 27 July 2024: Code and pre-trained models released!
  • 29 August 2024: Updated the ColoRadar dataset download link.
  • 18 October 2024: Updated the checkpoint download link in case you fail to download the checkpoints uploaded to this repo.

TODO

  • Release training and testing code for Radar-Diffusion.
  • Release pre-trained models in diffusion_consistency_radar/checkpoint.
  • Release user guide.
  • Release data pre-processing code.
  • Release performance evaluation code.

Introduction

This repository contains the source code and pre-trained models of Radar-Diffusion, described in our paper "Towards Dense and Accurate Radar Perception Via Efficient Cross-modal Diffusion Model," accepted by IEEE Robotics and Automation Letters (RA-L), 2024.

Authors: Ruibin Zhang*, Donglai Xue*, Yuhan Wang, Ruixu Geng, and Fei Gao (* equal contributors)

Paper: arXiv, IEEE

Supplementary Video: YouTube, Bilibili.

Abstract: Millimeter wave (mmWave) radars have attracted significant attention from both academia and industry due to their capability to operate in extreme weather conditions. However, they face challenges in terms of sparsity and noise interference, which hinder their application in the field of micro aerial vehicle (MAV) autonomous navigation. To this end, this paper proposes a novel approach to dense and accurate mmWave radar point cloud construction via cross-modal learning. Specifically, we introduce diffusion models, which possess state-of-the-art performance in generative modeling, to predict LiDAR-like point clouds from paired raw radar data. We also incorporate the most recent diffusion model inference accelerating techniques to ensure that the proposed method can be implemented on MAVs with limited computing resources. We validate the proposed method through extensive benchmark comparisons and real-world experiments, demonstrating its superior performance and generalization ability.

User Guide

Quick Start

git clone https://github.com/ZJU-FAST-Lab/Radar-Diffusion.git
cd diffusion_consistency_radar
pip install -e .
sh launch/inference_cd_example_batch.sh

In case of network issues, you can manually download the checkpoints and place them in diffusion_consistency_radar/checkpoint.

The above script runs consistency-model inference in a single step using the pre-trained checkpoint. Afterwards, you can find the predicted results and the ground-truth LiDAR BEV point clouds in diffusion_consistency_radar/inference_results.
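If you want to inspect the outputs programmatically, below is a minimal sketch (not part of this repository) that converts a predicted BEV image back into 2D points. It assumes the results are saved as grayscale BEV images; the file name, image centring, and metres-per-pixel resolution are illustrative assumptions, not values taken from the repository.

# Minimal sketch: turn a predicted BEV occupancy image into 2D points.
# The file name, image centring, and 0.15 m/pixel resolution below are
# illustrative assumptions, not settings read from this repository.
import numpy as np
from PIL import Image

def bev_image_to_points(path, resolution_m=0.15, threshold=128):
    """Return (N, 2) metric x/y coordinates of occupied BEV cells."""
    img = np.array(Image.open(path).convert("L"))
    rows, cols = np.nonzero(img > threshold)           # occupied pixels
    x = (rows - img.shape[0] / 2.0) * resolution_m     # assumed forward axis
    y = (cols - img.shape[1] / 2.0) * resolution_m     # assumed lateral axis
    return np.stack([x, y], axis=1)

# Hypothetical file name, for illustration only.
points = bev_image_to_points("diffusion_consistency_radar/inference_results/example_pred.png")
print(points.shape)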

Dataset Pre-processing

  1. First, download the ColoRadar dataset (KITTI format). In case of network issues, we share a download link here.
  2. Unzip all the subsequences into a single folder, then build the timestamp index (a conceptual sketch of this association follows the list) by running:
python Coloradar_pre_processing/generate_coloradar_timestamp_index.py
  3. Download Patchwork++ to Coloradar_pre_processing/patchwork-plusplus, then install it by running:
cd Coloradar_pre_processing/patchwork-plusplus
make pyinstall
  4. Generate the pre-processed dataset by running:
python Coloradar_pre_processing/dataset_generation_coloradar.py
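For orientation, here is a conceptual sketch of the nearest-timestamp association that the indexing script in step 2 presumably performs when pairing radar frames with LiDAR frames. The timestamp values and the 0.05 s gap threshold are assumptions for illustration, not the actual ColoRadar file layout.

# Conceptual sketch (assumed data layout, not the actual ColoRadar format):
# pair each radar frame with the LiDAR frame whose timestamp is closest.
import numpy as np

def pair_by_timestamp(radar_ts, lidar_ts, max_gap_s=0.05):
    """Return (radar_idx, lidar_idx) pairs whose timestamps differ by at most max_gap_s."""
    radar_ts = np.asarray(radar_ts, dtype=np.float64)
    lidar_ts = np.asarray(lidar_ts, dtype=np.float64)
    pairs = []
    for i, t in enumerate(radar_ts):
        j = int(np.argmin(np.abs(lidar_ts - t)))   # nearest LiDAR frame
        if abs(lidar_ts[j] - t) <= max_gap_s:      # reject overly large gaps
            pairs.append((i, j))
    return pairs

# Synthetic timestamps (seconds) for illustration.
print(pair_by_timestamp([0.00, 0.10, 0.20, 0.30], [0.01, 0.11, 0.19, 0.31]))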

Train and test Radar-Diffusion

  1. Train a regular EDM model:
sh diffusion_consistency_radar/launch/train_edm.sh
  2. Distill a CD (consistency distillation) model from the above EDM model:
sh diffusion_consistency_radar/launch/train_cd.sh
  3. Run inference with the EDM model:
sh diffusion_consistency_radar/launch/inference_edm.sh
  4. Run inference with the CD model in a single step (see the sketch after this list):
sh diffusion_consistency_radar/launch/inference_cd.sh
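To make the one-step claim concrete, here is a minimal, self-contained sketch of one-step consistency sampling: the distilled model maps a sample drawn at the maximum noise level directly to a clean estimate in a single network evaluation. The function signature, conditioning argument, and sigma_max value are placeholders, not the API used by this repository's scripts.

# Minimal sketch of one-step consistency sampling (placeholder names, not the
# repo's API): one forward pass at the largest noise level yields the estimate.
import torch

@torch.no_grad()
def one_step_consistency_sample(model, radar_condition, shape,
                                sigma_max=80.0, device="cuda"):
    """Draw noise at sigma_max and map it to a clean sample in one pass."""
    x_T = torch.randn(shape, device=device) * sigma_max        # initial noise
    sigma = torch.full((shape[0],), sigma_max, device=device)  # noise level per sample
    return model(x_T, sigma, radar_condition)                  # LiDAR-like BEV prediction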

Licence

The source code is released under the MIT license.

Acknowledgments

  1. The diffusion-consistency model code is heavily based on consistency_models.
  2. The radar pre-processing code is heavily based on azinke/coloradar.

Cite

If you find this method and/or code useful, please consider citing:

@article{zhang2024towards,
  title={Towards Dense and Accurate Radar Perception Via Efficient Cross-Modal Diffusion Model},
  author={Zhang, Ruibin and Xue, Donglai and Wang, Yuhan and Geng, Ruixu and Gao, Fei},
  journal={arXiv preprint arXiv:2403.08460},
  year={2024}
}
