ARiADNE is A Reinforcement learning approach using Attention-based Deep NEtworks. It is designed to tackle Lidar-based autonomous single-robot exploration in a 2D action space. ARiADNE builds an informative graph from the partially explored map and feeds it to an attention-based neural network. The network, trained through deep reinforcement learning, iteratively selects one of the neighboring nodes as the next waypoint, and the robot follows the planned waypoints to explore the environment. The planner is trained to estimate the long-term exploration efficiency of each potential waypoint and to learn a policy that maximizes the expected exploration efficiency. The informative graph can be sparsified to keep the planner applicable to relatively large-scale environments.
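For intuition, here is a minimal sketch of that per-step waypoint selection, using a toy attention network and illustrative names (the real graph construction and policy network live in this repo's Python sources):

```python
import torch
import torch.nn as nn

class AttentionScorer(nn.Module):
    """Toy stand-in for the attention-based policy network."""
    def __init__(self, feat_dim=4, embed_dim=32):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, node_feats):
        # node_feats: (1, num_nodes, feat_dim); node 0 is the robot's current node.
        h = self.embed(node_feats)
        q = h[:, :1]                            # query with the robot's node
        ctx, _ = self.attn(q, h, h)             # attend over the whole graph
        return self.score(h + ctx).squeeze(-1)  # one efficiency score per node

policy = AttentionScorer()
node_feats = torch.rand(1, 8, 4)  # 8 graph nodes, 4 features each (e.g. position, utility)
neighbor_ids = [1, 3, 5]          # nodes adjacent to the robot on the informative graph
with torch.no_grad():
    scores = policy(node_feats)[0]
next_waypoint = max(neighbor_ids, key=lambda i: scores[i].item())  # pick the best neighbor
print("next waypoint: node", next_waypoint)
```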
If you are interested in more details, please check our related publications at ICRA 2023 and in RA-L.
This repo contains the source code for ARiADNE planner in ROS1.
Here is a demo video showing the ARiADNE planner exploring the indoor environment provided by TARE. The video plays at the original speed. This experiment was conducted on an ASUS mini PC with an Intel i7-12700H CPU.
demo_indoor.mp4
In this demo, the robot traveled 948 m in 545 s to complete the exploration. The average planning time was 0.17 s, and the average waypoint publishing frequency was around 2.5 Hz. You can find the per-step record here. The final exploration trajectory is shown below.
The largest environment we have tested is the tunnel environment, shown below:
tunnel.mp4
We tested this planner on Ubuntu 18.04 with ROS Melodic and Ubuntu 20.04 with ROS Noetic. In particular, our planner relies on Octomap to convert the point cloud into an occupancy grid map:
sudo apt-get install ros-noetic-octomap
(On ROS Melodic, install ros-melodic-octomap instead.)
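For reference, octomap_server publishes the down-projected 2D map as a nav_msgs/OccupancyGrid on its projected_map topic. A minimal listener sketch (the topic name follows octomap_server's default; remap it if your launch files differ):

```python
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    # data is row-major: -1 = unknown, 0 = free, 100 = occupied
    free_cells = sum(1 for c in msg.data if c == 0)
    rospy.loginfo("grid %dx%d @ %.2f m/cell, %d free cells",
                  msg.info.width, msg.info.height, msg.info.resolution, free_cells)

rospy.init_node("occupancy_listener")
rospy.Subscriber("projected_map", OccupancyGrid, on_map)
rospy.spin()
```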
We recommend using conda for package management. Our planner is written in Python and based on PyTorch. Besides PyTorch, please install the following packages:
pip install scikit-image matplotlib
We have tested our planner with various versions of these packages, so you can simply install the latest ones. Then download this repo and compile it:
git clone https://github.com/marmotlab/ARiADNE-ROS-Planner.git
cd ARiADNE-ROS-Planner
catkin_make
Note: We only use the CPU for network inference, so you do not need a GPU.
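If you want to verify that a checkpoint loads on the CPU, here is a short sketch (the checkpoint path and dictionary key below are assumptions; check the model folder and the source for the real names):

```python
import torch

# map_location forces all tensors onto the CPU, regardless of the device
# the checkpoint was saved from. The path and key are illustrative only.
checkpoint = torch.load("model/checkpoint.pth", map_location=torch.device("cpu"))
# policy.load_state_dict(checkpoint["policy_model"])  # key name is an assumption
```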
In practice, our planner needs to cooperate with a Lidar SLAM module, which provides sensor odometry and Lidar scans, and a waypoint follower module, which navigates the robot to the planned waypoint. Fortunately, you can easily test our planner in the development environments provided by CMU Robotics Institute.
Please follow the instructions for the CMU Development Environment to set up the Gazebo simulation, the SLAM module, and the waypoint follower module.
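For context, the CMU stack's waypoint follower listens for geometry_msgs/PointStamped messages on /way_point. A minimal sketch of publishing one waypoint by hand (the frame id and coordinates here are illustrative):

```python
import rospy
from geometry_msgs.msg import PointStamped

rospy.init_node("waypoint_demo")
pub = rospy.Publisher("/way_point", PointStamped, queue_size=1)
rospy.sleep(1.0)  # give subscribers time to connect

wp = PointStamped()
wp.header.stamp = rospy.Time.now()
wp.header.frame_id = "map"  # illustrative frame id
wp.point.x, wp.point.y, wp.point.z = 3.0, 1.5, 0.75
pub.publish(wp)
```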
To run the development environment, go to the development environment folder in a terminal and run:
source devel/setup.bash
roslaunch vehicle_simulator system_indoor.launch
Our planner can work in three of their environments: indoor, forest, and tunnel.
To run the ARiADNE planner, go to the planner folder in another terminal (activate your conda environment, if any) and run:
source devel/setup.bash
roslaunch rl_planner rl_planner.launch
If running the planner in the forest environment, run:
roslaunch rl_planner rl_planner_forest.launch
If running the planner in the tunnel environment, run:
roslaunch rl_planner rl_planner_tunnel.launch
Note that the robot's starting coordinates must match the values set in parameter.py; update them there if you test the planner in other environments.
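The entry you need to edit might look like the following (the variable names here are illustrative; check parameter.py itself for the real ones):

```python
# parameter.py (illustrative snippet; actual variable names may differ)
ROBOT_START_X = 0.0  # robot's initial x coordinate in the map frame
ROBOT_START_Y = 0.0  # robot's initial y coordinate in the map frame
```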
Here are examples of applying the ARiADNE planner in the CMU forest and tunnel environments, and in two indoor scenarios provided by FAEL.
You can train your own networks using ARiADNE or its ground-truth-critic variant. To run a trained model, replace the checkpoint file under the model folder.
If you find our work helpful or enlightening, feel free to cite our papers:
@inproceedings{cao2023ariadne,
  title={{ARiADNE}: A reinforcement learning approach using attention-based deep networks for exploration},
  author={Cao, Yuhong and Hou, Tianxiang and Wang, Yizhuo and Yi, Xian and Sartoretti, Guillaume},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={10219--10225},
  year={2023},
  organization={IEEE}
}
@article{cao2024deep,
  title={Deep Reinforcement Learning-based Large-scale Robot Exploration},
  author={Cao, Yuhong and Zhao, Rui and Wang, Yizhuo and Xiang, Bairan and Sartoretti, Guillaume},
  journal={IEEE Robotics and Automation Letters},
  year={2024},
  publisher={IEEE}
}
Note: I am looking to implement this planner in C++. Feel free to contact me if you are interested in collaborating.
The development environment is from CMU.
Octomap is from the University of Freiburg.
The quad tree implementation is from Daniel Lindsley.
ChatGPT also contributed some code here and there.