Camera-agnostic ROS2 wrapper for Depth Anything 3 monocular depth estimation.
| Platform | Backend | Model | Resolution | FPS |
|---|---|---|---|---|
| Orin AGX 64GB | PyTorch FP32 | DA3-Small | 518x518 | ~5 |
| Jetson Orin NX 16GB* | TensorRT FP16 | DA3-Small | 518x518 | 23+ / 43+ |
23+ FPS real-world (camera-limited), 43+ FPS processing capacity
*Tested on Seeed reComputer J4012
Jetson Users: The host requires `numpy`, `pycuda`, and the TensorRT Python bindings; `./run.sh` auto-installs these.
- TensorRT-Optimized: 40+ FPS on Jetson via TensorRT 10.3
- Camera-Agnostic: Works with any camera publishing ROS2 image topics
- One-Click Demo: `./run.sh` handles everything automatically
- Shared Memory IPC: Low-latency host-container communication (~8ms)
- Multiple Models: DA3-Small, Base, Large with auto hardware detection
- Docker Support: Pre-configured for Jetson deployment
```bash
git clone https://github.com/GerdsenAI/Depth-Anything-3-ROS2-Wrapper.git ~/depth_anything_3_ros2
cd ~/depth_anything_3_ros2
./run.sh
```

First run takes ~15-20 minutes (Docker build + TensorRT engine). Subsequent runs start in ~10 seconds.
Options:

```bash
./run.sh --camera /dev/video0   # Specify camera
./run.sh --no-display           # Headless mode (SSH)
./run.sh --rebuild              # Force rebuild Docker
```

```bash
# Clone and install
git clone https://github.com/GerdsenAI/GerdsenAI-Depth-Anything-3-ROS2-Wrapper.git
cd GerdsenAI-Depth-Anything-3-ROS2-Wrapper
bash scripts/install_dependencies.sh
source install/setup.bash

# Run with USB camera
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py \
  image_topic:=/camera/image_raw
```

See Installation Guide for detailed steps.
```bash
docker-compose up -d depth-anything-3-gpu
docker exec -it da3_ros2_gpu bash
ros2 launch depth_anything_3_ros2 depth_anything_3.launch.py image_topic:=/camera/image_raw
```

See Docker Guide for more options.
This project uses a host-container split for optimal Jetson performance:
```
HOST (JetPack 6.x)
+--------------------------------------------------+
| TRT Inference Service (trt_inference_shm.py)     |
| - TensorRT 10.3, ~15ms inference                 |
+--------------------------------------------------+
                         ^
                         | /dev/shm/da3 (shared memory)
                         v
+--------------------------------------------------+
| Docker Container (ROS2 Humble)                   |
| - Camera drivers, depth publisher                |
| - SharedMemoryInferenceFast (~8ms IPC)           |
+--------------------------------------------------+
```
Why: Container TensorRT bindings are broken in current Jetson images. Host TensorRT 10.3 works perfectly.
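The shared-memory handoff can be sketched in a few lines. This is a minimal illustration using Python's `multiprocessing.shared_memory`; the segment name, resolution, and single-buffer layout here are assumptions for the sketch, not the wrapper's actual protocol (that lives in `trt_inference_shm.py` / `SharedMemoryInferenceFast`):

```python
# Minimal sketch of a zero-copy frame handoff over shared memory.
# Segment name and layout are illustrative only -- see
# trt_inference_shm.py for the wrapper's real protocol.
import numpy as np
from multiprocessing import shared_memory

H, W = 518, 518  # inference resolution

# "Container" side: create a named segment and write an RGB frame into it.
shm = shared_memory.SharedMemory(name="da3_demo", create=True, size=H * W * 3)
frame = np.ndarray((H, W, 3), dtype=np.uint8, buffer=shm.buf)
frame[:] = 128  # stand-in for a camera image

# "Host" side: attach to the same segment by name -- no copy is made.
reader = shared_memory.SharedMemory(name="da3_demo")
view = np.ndarray((H, W, 3), dtype=np.uint8, buffer=reader.buf)
pixel = int(view[0, 0, 0])  # both sides see the same bytes

# Release the numpy views before closing, then tear the segment down.
del frame, view
reader.close()
shm.close()
shm.unlink()
```

In the real system the host service runs TensorRT inference on the shared frame and returns the depth map through shared memory as well; the synchronization between writer and reader is the part this sketch omits.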
| Platform | Model | Resolution | Expected FPS | Memory |
|---|---|---|---|---|
| Orin Nano 4GB/8GB | DA3-Small | 308x308 | 40-50 | ~1.2GB |
| Orin NX 8GB | DA3-Small | 308x308 | 50-55 | ~1.2GB |
| Jetson Orin NX 16GB* | DA3-Small | 518x518 | 43+ (validated) | ~1.8GB |
| AGX Orin 32GB/64GB | DA3-Base | 518x518 | 25-35 | ~2.5GB |
*Validated on Seeed reComputer J4012
See Optimization Guide for detailed benchmarks and tuning.
| Topic | Type | Description |
|---|---|---|
| `~/image_raw` | sensor_msgs/Image | Input RGB image |

| Topic | Type | Description |
|---|---|---|
| `~/depth` | sensor_msgs/Image | Depth map (32FC1) |
| `~/depth_colored` | sensor_msgs/Image | Colorized visualization (BGR8) |
| `~/confidence` | sensor_msgs/Image | Confidence map (32FC1) |
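The `~/depth` encoding, 32FC1, means one 32-bit float per pixel. In a subscriber you would normally call `cv_bridge`'s `imgmsg_to_cv2(msg, "32FC1")`; the decoding it performs can be sketched directly with NumPy (the function name and fake payload below are illustrative, and row padding via `msg.step` is ignored):

```python
# Sketch: decode a sensor_msgs/Image '32FC1' payload into a NumPy array.
# In a real node, cv_bridge.imgmsg_to_cv2(msg, "32FC1") does this for you.
# Assumes step == width * 4, i.e. no row padding.
import numpy as np

def decode_depth_32fc1(data: bytes, height: int, width: int,
                       big_endian: bool = False) -> np.ndarray:
    """Reinterpret the raw byte payload as a (height, width) float32 map."""
    dtype = np.dtype(np.float32).newbyteorder(">" if big_endian else "<")
    return np.frombuffer(data, dtype=dtype).reshape(height, width)

# Fake 2x3 payload standing in for msg.data / msg.height / msg.width.
fake = np.arange(6, dtype=np.float32).reshape(2, 3)
depth = decode_depth_32fc1(fake.tobytes(), 2, 3)
print(depth[1, 2])  # -> 5.0
```

The same decoding applies to `~/confidence`, which is also 32FC1.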
| Parameter | Default | Description |
|---|---|---|
| `model_name` | `depth-anything/DA3-BASE` | Model to use |
| `device` | `cuda` | `cuda` or `cpu` |
| `inference_height` | `518` | Input resolution height |
| `inference_width` | `518` | Input resolution width |
| `publish_colored` | `true` | Publish colorized depth |
See Configuration Reference for all parameters.
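Parameters can also be collected in a ROS2 params file. A sketch of such a file, assuming the node is named `depth_anything_3_node` and the small model is identified as `depth-anything/DA3-SMALL` by analogy with the default (check the launch file and Configuration Reference for the actual names); the 308x308 values are the Orin Nano preset from the performance table:

```yaml
# da3_params.yaml -- node name and DA3-SMALL identifier are assumptions;
# match them to depth_anything_3.launch.py and the Configuration Reference.
depth_anything_3_node:
  ros__parameters:
    model_name: "depth-anything/DA3-SMALL"   # Apache-2.0 variant
    device: "cuda"
    inference_height: 308
    inference_width: 308
    publish_colored: true
```

Load it with the standard ROS2 mechanism, e.g. `--ros-args --params-file da3_params.yaml` when running the node directly.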
| Guide | Description |
|---|---|
| Installation | Detailed installation steps, offline setup |
| Usage Examples | USB camera, ZED, RealSense, multi-camera |
| Configuration | All parameters, topics, models |
| ROS2 Node Reference | Node lifecycle, QoS, Jetson performance tuning |
| Optimization | Platform benchmarks, performance tuning |
| Jetson Deployment | TensorRT setup, host-container split |
| Docker | Container deployment options |
| Troubleshooting | Common issues and solutions |
- ROS2: Humble Hawksbill (Ubuntu 22.04)
- Python: 3.10+
- TensorRT: 10.3+ (Jetson JetPack 6.x) for production
- CUDA: 12.x (optional for desktop GPU)
- Depth Anything 3 - ByteDance Seed Team (paper)
- NVIDIA TensorRT - High-performance inference
- Jetson Containers - dusty-nv's L4T Docker images
- Hugging Face - Model hosting
Inspired by grupo-avispa/depth_anything_v2_ros2 and scepter914/DepthAnything-ROS.
```bibtex
@article{depthanything3,
  title={Depth Anything 3: A New Foundation for Metric and Relative Depth Estimation},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  journal={arXiv preprint arXiv:2511.10647},
  year={2025}
}
```

This ROS2 wrapper: MIT License
Depth Anything 3 models:
- DA3-Small: Apache-2.0 (commercial use OK)
- DA3-Base/Large/Giant: CC-BY-NC-4.0 (non-commercial only)
Contributions welcome! We especially need help with test coverage for the SharedMemory/TensorRT production code paths. See CONTRIBUTING.md for:
- Current test coverage status
- Priority areas needing tests
- How to write and run tests