Monodepth
Python ROS node for depth estimation from a single RGB image, based on code from the paper "High Quality Monocular Depth Estimation via Transfer Learning".
Install / Use
Mono Depth ROS
- ROS node used to estimate depth from monocular RGB data.
- Should be used with Python 2.x and ROS.
- The original code is available in the Dense Depth repository.
- High Quality Monocular Depth Estimation via Transfer Learning by Ibraheem Alhashim and Peter Wonka
<img src="https://raw.githubusercontent.com/tentone/monodepth/master/readme/c.png" width="370"><img src="https://raw.githubusercontent.com/tentone/monodepth/master/readme/d.png" width="370">
Configuration
- Topics subscribed by the ROS node
- /image/camera_raw - Input image from the camera (can be changed via the topic_color parameter).
- Topics published by the ROS node, containing the generated depth and point cloud data.
- /image/depth - Image message containing the estimated depth image (can be changed via the topic_depth parameter).
- /pointcloud - PointCloud2 message containing the estimated point cloud (can be changed via the topic_pointcloud parameter).
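The point cloud published on /pointcloud is typically obtained by back-projecting each depth pixel through the pinhole camera model. The sketch below illustrates that geometry; the function name and the camera intrinsics (fx, fy, cx, cy) are hypothetical example values, and the node's actual implementation may differ.

```python
def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (list of rows, in metres) into (x, y, z) points."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:  # skip invalid or empty depth readings
                continue
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points

# A 2x2 depth image, 1 metre everywhere, principal point between the pixels:
pts = backproject([[1.0, 1.0], [1.0, 1.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Each resulting tuple is a 3D point in the camera frame, which the node would then pack into a PointCloud2 message stamped with the configured frame_id.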
- Parameters that can be configured
- frame_id - TF Frame id to be published in the output messages.
- debug - If set to true, a window with the output result is displayed.
- min_depth, max_depth - Min and max depth values considered for scaling.
- batch_size - Batch size used when predicting the depth image using the model provided.
- model_file - Keras model file used, relative to the monodepth package.
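As an illustration of how min_depth and max_depth bound the node's output, a simple linear rescaling of a normalized network prediction could look like the sketch below. This is a hypothetical mapping for illustration only; the scaling actually used by the node lives in its source and may differ.

```python
def scale_depth(pred, min_depth, max_depth):
    """Map a normalized prediction in [0, 1] to metres, clamped to the range."""
    d = min_depth + (max_depth - min_depth) * pred
    return max(min_depth, min(max_depth, d))

scale_depth(0.5, min_depth=0.5, max_depth=10.0)  # 5.25 metres
```

Predictions outside [0, 1] are clamped, so the published depth image always stays within the configured range.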
Setup
- Install Python 2 and ROS dependencies
apt-get install python python-pip curl
pip install rosdep rospkg rosinstall_generator rosinstall wstool vcstools catkin_tools catkin_pkg
- Install project dependencies
pip install tensorflow keras pillow matplotlib scikit-learn scikit-image opencv-python pydot GraphViz tk
- Clone the project into your ROS workspace and download pretrained models
git clone https://github.com/tentone/monodepth.git
cd monodepth/models
curl -o nyu.h5 https://s3-eu-west-1.amazonaws.com/densedepth/nyu.h5
Launch
- An example ROS launch entry is provided below for easier integration into your existing ROS launch pipeline.
<node pkg="monodepth" type="monodepth.py" name="monodepth" output="screen" respawn="true">
<param name="topic_color" value="/camera/image_raw"/>
<param name="topic_depth" value="/camera/depth"/>
</node>
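Wrapped in a complete launch file, the entry above could look like the following. The file name, frame_id, and depth-range values here are illustrative choices, not defaults shipped with the package.

```xml
<launch>
  <node pkg="monodepth" type="monodepth.py" name="monodepth" output="screen" respawn="true">
    <param name="topic_color" value="/camera/image_raw"/>
    <param name="topic_depth" value="/camera/depth"/>
    <param name="topic_pointcloud" value="/pointcloud"/>
    <param name="frame_id" value="camera_link"/>
    <param name="min_depth" value="0.5"/>
    <param name="max_depth" value="10.0"/>
    <param name="model_file" value="models/nyu.h5"/>
  </node>
</launch>
```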
Pretrained models
- Pre-trained Keras models can be downloaded from the following links and placed in the /models folder:
Datasets for training
- NYU Depth V2 (50K)
- The NYU-Depth V2 dataset comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect.
- Download dataset (4.1 GB)
- KITTI Dataset (80K)
- Datasets captured by driving around the mid-size city of Karlsruhe, in rural areas and on highways. Up to 15 cars and 30 pedestrians are visible per image.
