# Unseen Object Clustering: Learning RGB-D Feature Embeddings for Unseen Object Instance Segmentation
## Introduction
In this work, we propose a new method for unseen object instance segmentation by learning RGB-D feature embeddings from synthetic data. A metric learning loss function is utilized to learn to produce pixel-wise feature embeddings such that pixels from the same object are close to each other and pixels from different objects are separated in the embedding space. With the learned feature embeddings, a mean shift clustering algorithm can be applied to discover and segment unseen objects. We further improve the segmentation accuracy with a new two-stage clustering algorithm. Our method demonstrates that non-photorealistic synthetic RGB and depth images can be used to learn feature embeddings that transfer well to real-world images for unseen object instance segmentation. arXiv, Talk video
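The pull/push structure of the metric learning loss and the mean shift grouping described above can be illustrated with a simplified sketch. This is not the paper's exact formulation: the function names, the use of Euclidean distance (the released models use cosine embeddings), and the margin values are illustrative assumptions.

```python
import numpy as np

def metric_loss(embeddings, labels, delta_pull=0.5, delta_push=1.5):
    """Hypothetical pull/push metric loss on per-pixel embeddings.

    embeddings: (N, D) pixel feature vectors; labels: (N,) object ids.
    Pixels are pulled within delta_pull of their object's mean embedding;
    means of different objects are pushed at least delta_push apart.
    """
    ids = np.unique(labels)
    means = np.stack([embeddings[labels == i].mean(axis=0) for i in ids])

    # pull term: hinge on the distance of each pixel to its own object mean
    pull = 0.0
    for k, i in enumerate(ids):
        d = np.linalg.norm(embeddings[labels == i] - means[k], axis=1)
        pull += np.mean(np.maximum(d - delta_pull, 0.0) ** 2)
    pull /= len(ids)

    # push term: hinge on pairwise distances between object means
    push, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            gap = np.linalg.norm(means[a] - means[b])
            push += np.maximum(delta_push - gap, 0.0) ** 2
            pairs += 1
    return pull + (push / pairs if pairs else 0.0)

def mean_shift(points, bandwidth=1.0, iters=10):
    """Plain mean shift: repeatedly move each point to the mean of its
    neighbors within the bandwidth, so points of one object collapse
    to a shared mode."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i in range(len(shifted)):
            near = np.linalg.norm(points - shifted[i], axis=1) < bandwidth
            shifted[i] = points[near].mean(axis=0)
    return shifted
```

At test time, pixels whose shifted embeddings land on the same mode form one object mask; the two-stage clustering algorithm mentioned above further refines this initial grouping.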
<p align="center"><img src="./data/pics/network.png" width="750" height="200"/></p>

## License
Unseen Object Clustering is released under the NVIDIA Source Code License (refer to the LICENSE file for details).
## Citation
If you find Unseen Object Clustering useful in your research, please consider citing:
```
@inproceedings{xiang2020learning,
    Author = {Yu Xiang and Christopher Xie and Arsalan Mousavian and Dieter Fox},
    Title = {Learning RGB-D Feature Embeddings for Unseen Object Instance Segmentation},
    booktitle = {Conference on Robot Learning (CoRL)},
    Year = {2020}
}
```
## Required environment
- Ubuntu 16.04 or above
- PyTorch 0.4.1 or above
- CUDA 9.1 or above
## Installation

1. Install PyTorch.

2. Install python packages:

   ```Shell
   pip install -r requirement.txt
   ```
## Download

- Download our trained checkpoints from here, and save them to $ROOT/data.
## Running the demo

1. Download our trained checkpoints first.

2. Run the following script for testing on images under $ROOT/data/demo:

   ```Shell
   ./experiments/scripts/demo_rgbd_add.sh
   ```
## Training and testing on the Tabletop Object Dataset (TOD)

1. Download the Tabletop Object Dataset (TOD) from here (34G).

2. Create a symlink for the TOD dataset:

   ```Shell
   cd $ROOT/data
   ln -s $TOD_DATA tabletop
   ```

3. Train and test on the TOD dataset:

   ```Shell
   cd $ROOT

   # multi-gpu training, we used 4 GPUs
   ./experiments/scripts/seg_resnet34_8s_embedding_cosine_rgbd_add_train_tabletop.sh

   # testing, $GPU_ID can be 0, 1, etc.
   ./experiments/scripts/seg_resnet34_8s_embedding_cosine_rgbd_add_test_tabletop.sh $GPU_ID $EPOCH
   ```
## Testing on the OCID dataset and the OSD dataset

1. Download the OCID dataset from here, and create a symlink:

   ```Shell
   cd $ROOT/data
   ln -s $OCID_dataset OCID
   ```

2. Download the OSD dataset from here, and create a symlink:

   ```Shell
   cd $ROOT/data
   ln -s $OSD_dataset OSD
   ```

3. Check the scripts in experiments/scripts whose names contain test_ocid or test_osd, and make sure the paths of the trained checkpoints exist:

   ```Shell
   experiments/scripts/seg_resnet34_8s_embedding_cosine_rgbd_add_test_ocid.sh
   experiments/scripts/seg_resnet34_8s_embedding_cosine_rgbd_add_test_osd.sh
   ```
## Running with ROS on a Realsense camera for real-world unseen object instance segmentation

1. Python2 is needed for ROS.

2. Make sure our pretrained checkpoints are downloaded, then run:

   ```Shell
   # start realsense
   roslaunch realsense2_camera rs_aligned_depth.launch tf_prefix:=measured/camera

   # start rviz
   rosrun rviz rviz -d ./ros/segmentation.rviz

   # run segmentation, $GPU_ID can be 0, 1, etc.
   ./experiments/scripts/ros_seg_rgbd_add_test_segmentation_realsense.sh $GPU_ID
   ```
Our example:
<p align="center"><img src="./data/pics/unseen_clustering.gif"/></p>
