WildFusion: Multimodal Implicit 3D Reconstructions in the Wild

Yanbaihui Liu, Boyuan Chen
Duke University

website | paper | video

Overview

We propose WildFusion, a novel approach for 3D scene reconstruction in unstructured, in-the-wild environments using multimodal implicit neural representations. WildFusion integrates signals from LiDAR, RGB camera, contact microphones, tactile sensors, and IMU. This multimodal fusion generates comprehensive, continuous environmental representations, including pixel-level geometry, color, semantics, and traversability. Through real-world experiments on legged robot navigation in challenging forest environments, WildFusion demonstrates improved route selection by accurately predicting traversability. Our results highlight its potential to advance robotic navigation and 3D mapping in complex outdoor terrains.
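For intuition only, below is a minimal sketch of what a multimodal implicit decoder of this kind might look like: an MLP that takes a 3D query point together with a fused multimodal feature and predicts geometry, color, semantics, and traversability at that point. The module name, layer sizes, and exact set of prediction heads are assumptions for illustration and are not taken from the WildFusion code.

    # Hypothetical sketch of a multimodal implicit decoder.
    # All names, feature sizes, and heads are assumptions for illustration;
    # see the paper and code for the actual WildFusion architecture.
    import torch
    import torch.nn as nn

    class ImplicitMultimodalDecoder(nn.Module):
        def __init__(self, feat_dim=256, hidden=256, num_classes=8):
            super().__init__()
            # Shared trunk: 3D query point concatenated with a fused
            # multimodal feature (e.g. LiDAR + camera + audio + tactile).
            self.trunk = nn.Sequential(
                nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            # Per-quantity prediction heads.
            self.geometry = nn.Linear(hidden, 1)             # occupancy / SDF value
            self.color = nn.Linear(hidden, 3)                # RGB
            self.semantics = nn.Linear(hidden, num_classes)  # class logits
            self.traversability = nn.Linear(hidden, 1)       # traversability score

        def forward(self, query_xyz, fused_feat):
            h = self.trunk(torch.cat([query_xyz, fused_feat], dim=-1))
            return {
                "geometry": self.geometry(h),
                "color": torch.sigmoid(self.color(h)),
                "semantics": self.semantics(h),
                "traversability": torch.sigmoid(self.traversability(h)),
            }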

Prerequisites

  1. Clone the repository:

    git clone https://github.com/generalroboticslab/WildFusion.git
  2. Create and activate a new virtual environment:

    virtualenv new_env_name
    source new_env_name/bin/activate
  3. Install the required dependencies:

    pip install -r requirements.txt

Training

Run the following command to train the model. The --scratch flag will force training from scratch, while --skip_plot will skip saving training loss plots.

python main.py --scratch --skip_plot
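For reference, both flags behave like standard boolean switches. A minimal sketch of how such flags could be defined with argparse is shown below; the actual argument handling in main.py may differ.

    # Hypothetical sketch of how the flags could be parsed; main.py may
    # define additional arguments or handle these differently.
    import argparse

    parser = argparse.ArgumentParser(description="Train WildFusion")
    parser.add_argument("--scratch", action="store_true",
                        help="force training from scratch instead of resuming from a checkpoint")
    parser.add_argument("--skip_plot", action="store_true",
                        help="skip saving training loss plots")
    args = parser.parse_args()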

Evaluation

To evaluate the trained models and visualize the results, run:

python evaluation/test.py --test_file /path/to/data

To visualize the ground truth in .pcd format, use:

python evaluation/gt_vis_pcd.py --data_path /path/to/data
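A .pcd file can also be inspected directly with Open3D. The snippet below is a minimal sketch and is not taken from evaluation/gt_vis_pcd.py; the file path is a placeholder.

    # Minimal sketch of viewing a .pcd point cloud with Open3D.
    # Illustration only; gt_vis_pcd.py may use different options.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("/path/to/data/ground_truth.pcd")  # placeholder path
    print(pcd)                                 # prints the number of points
    o3d.visualization.draw_geometries([pcd])   # opens an interactive viewer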

Dataset

Download our dataset and unzip it.
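If you prefer to unpack the archive programmatically, a minimal sketch using Python's standard library is below; the archive name and destination directory are placeholders.

    # Hypothetical example of unpacking the dataset archive; the file name
    # and output directory are placeholders, not the actual dataset names.
    import zipfile

    with zipfile.ZipFile("wildfusion_dataset.zip") as zf:
        zf.extractall("data/")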

Hardware

The list of our hardware components and the CAD models are in the hardwares subdirectory.

Citation

If you find this work helpful, please consider citing our paper:

@misc{liu2024wildfusionmultimodalimplicit3d,
      title={WildFusion: Multimodal Implicit 3D Reconstructions in the Wild}, 
      author={Yanbaihui Liu and Boyuan Chen},
      year={2024},
      eprint={2409.19904},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2409.19904}, 
}

Acknowledgement

go2_ros2_sdk
