This repository contains a ROS 2 (Robot Operating System 2) node that estimates depth from RGB images using the Depth Anything model, building on LiheYoung's Depth-Anything implementation.
- Python 3.x
- ROS 2 (Robot Operating System 2)
- OpenCV
- PyTorch
- torchvision
- A robot or camera that publishes an /image_raw topic, either simulated in Gazebo or from real hardware (a minimal node sketch follows this list)
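
To make the data flow concrete, the sketch below shows the subscribe/infer/publish pattern such a node follows: it subscribes to /image_raw, runs a depth predictor, and republishes the result as a 32-bit float depth image. The node name, the /depth/image_raw output topic, and the stand-in predictor are assumptions for illustration; the actual node in this repository runs Depth Anything with PyTorch instead of the placeholder.

```python
# Minimal sketch of the subscribe -> infer -> publish flow. The node name,
# output topic, and the placeholder predictor are assumptions; the real node
# runs Depth Anything (PyTorch) instead of the stand-in below.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthEstimatorSketch(Node):
    def __init__(self):
        super().__init__('depth_anything_sketch')
        self.bridge = CvBridge()
        # Placeholder predictor: the real node runs Depth Anything here.
        # This stand-in returns grayscale intensity as a fake "depth" map
        # so the ROS plumbing can be exercised without model weights.
        self.predict = lambda rgb: rgb.mean(axis=2).astype(np.float32)
        self.sub = self.create_subscription(Image, '/image_raw', self.on_image, 10)
        self.pub = self.create_publisher(Image, '/depth/image_raw', 10)  # assumed topic name

    def on_image(self, msg: Image):
        rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
        depth = self.predict(rgb)                  # HxW float32, relative depth
        out = self.bridge.cv2_to_imgmsg(depth, encoding='32FC1')
        out.header = msg.header                    # keep the camera timestamp/frame
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(DepthEstimatorSketch())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```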
Clone the repository into your ROS 2 workspace and build it:
```bash
cd your_ws/src
git clone https://github.com/polatztrk/depth_anything_ros.git
cd ..
colcon build
```
From inside your workspace, source the environment and launch the depth estimation node:
```bash
cd your_ws
source install/setup.bash
ros2 launch depth_anything_ros launch_depth_anything.launch.py
```
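
For reference, a ROS 2 launch file such as launch_depth_anything.launch.py is typically a short Python script along the lines of the sketch below. The executable name and the remapped camera topic are assumptions here, so check the package's own launch file for the actual values.

```python
# Hypothetical sketch of a launch file like launch_depth_anything.launch.py;
# the executable name and remapping target are assumptions.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='depth_anything_ros',
            executable='depth_anything_node',  # assumed executable name
            name='depth_anything',
            output='screen',
            # Example remap if your camera publishes on a different topic.
            remappings=[('/image_raw', '/camera/image_raw')],
        ),
    ])
```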
You can then view the results in RViz and Gazebo:
To convert the estimated depth image into a point cloud, launch the depth2point package from the same workspace:
```bash
cd your_ws
source install/setup.bash
ros2 launch depth2point launch_depth_to_point.launch.py
```
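
The core of a depth-to-point-cloud conversion is the pinhole back-projection sketched below; it assumes metric depth values and camera intrinsics (fx, fy, cx, cy) taken from the camera's CameraInfo message, and it is an illustrative sketch rather than the depth2point package's actual code. Note that Depth Anything's raw output is relative rather than metric depth, so a scaling or alignment step may be needed for a metrically correct cloud.

```python
# Sketch of pinhole back-projection from a depth image to 3-D points,
# assuming metric depth and intrinsics (fx, fy, cx, cy) from CameraInfo.
import numpy as np


def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Return an (N, 3) array of XYZ points in the camera's optical frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop zero/invalid depths
```

In the ROS context, the resulting points would typically be packed into a sensor_msgs/PointCloud2 message stamped in the camera frame.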
Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao, "Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data," arXiv:2401.10891, 2024.