This repository contains a ROS 2 node for estimating depth from RGB images using the Depth Anything model, based on LiheYoung's depth_anything implementation.
Prerequisites
Python 3.x
ROS 2 (Robot Operating System 2)
OpenCV
PyTorch
torchvision
Additionally, you need a robot or camera model that publishes images on the /image_raw topic, either from Gazebo or a real camera.
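Before launching, you can verify that the camera stream is actually being published. A quick check using the standard ros2 CLI (assuming the /image_raw topic name from above):

```shell
# List active topics and confirm /image_raw is present
ros2 topic list
# Check the publishing rate of the camera stream
ros2 topic hz /image_raw
# Inspect the message type (should be sensor_msgs/msg/Image)
ros2 topic info /image_raw
```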
Installation
Clone the repository into your ROS 2 workspace and build it:
cd your_ws/src
git clone https://github.com/polatztrk/depth_anything_ros.git
cd ..
colcon build
Usage
From your workspace, source the install space and launch the package:
cd your_ws
source install/setup.bash
ros2 launch depth_anything_ros launch_depth_anything.launch.py
You can then view the resulting depth maps in RViz and Gazebo.
Depth map to point cloud converter for RViz
From your workspace, source the install space and launch the converter:
cd your_ws
source install/setup.bash
ros2 launch depth2point launch_depth_to_point.launch.py
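The conversion behind depth2point is standard pinhole back-projection: each pixel (u, v) with depth z maps to a 3D point via the camera intrinsics. A minimal NumPy sketch of that math (not the package's actual code; the intrinsics fx, fy, cx, cy are placeholder values, where in practice they come from the camera's CameraInfo topic):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (meters) to an (H*W, 3) array of XYZ points."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a flat surface 2 m in front of the camera
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

The resulting points would then be packed into a sensor_msgs/msg/PointCloud2 message for display in RViz.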
Cite
Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao, "Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data", arXiv:2401.10891, 2024.