VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models
[Project Page] [Paper] [Video]
Wenlong Huang<sup>1</sup>, Chen Wang<sup>1</sup>, Ruohan Zhang<sup>1</sup>, Yunzhu Li<sup>1,2</sup>, Jiajun Wu<sup>1</sup>, Li Fei-Fei<sup>1</sup>
<sup>1</sup>Stanford University, <sup>2</sup>University of Illinois Urbana-Champaign
<img src="media/teaser.gif" width="550">

This is the official demo code for VoxPoser, a method that uses large language models and vision-language models to zero-shot synthesize trajectories for manipulation tasks.
In this repo, we provide the implementation of VoxPoser in RLBench as its task diversity best resembles our real-world setup. Note that VoxPoser is a zero-shot method that does not require any training data. Therefore, the main purpose of this repo is to provide a demo implementation rather than an evaluation benchmark.
Note: This codebase currently does not contain the perception pipeline used in our real-world experiments, which produces a real-time mapping from object names to object masks. Instead, it uses the object masks provided as part of RLBench's get_observation function. If you are interested in deploying the code on a real robot, see the section Real-World Deployment.
If you find this work useful in your research, please cite using the following BibTeX:
@article{huang2023voxposer,
title={VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models},
author={Huang, Wenlong and Wang, Chen and Zhang, Ruohan and Li, Yunzhu and Wu, Jiajun and Fei-Fei, Li},
journal={arXiv preprint arXiv:2307.05973},
year={2023}
}
Setup Instructions
Note that this codebase is best run with a display. For running in headless mode, refer to the instructions in RLBench.
- Create a conda environment:
conda create -n voxposer-env python=3.9
conda activate voxposer-env
- See the instructions to install PyRep and RLBench (Note: install these inside the created conda environment).
- Install other dependencies:
pip install -r requirements.txt
- Obtain an OpenAI API key, and put it inside the first cell of the demo notebook.
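What that first cell looks like depends on the notebook and on your version of the openai package; as a rough, hedged example (the variable name below is an assumption), it may be as simple as:

```python
import openai

# Assumption: the notebook reads the key from openai.api_key (pre-1.0 openai
# package style). Newer versions of the library instead read the
# OPENAI_API_KEY environment variable or take the key via OpenAI(api_key=...).
openai.api_key = "sk-..."  # paste your OpenAI API key here
```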
Running Demo
Demo code is at src/playground.ipynb. Instructions can be found in the notebook.
Code Structure
Core to VoxPoser:
- playground.ipynb: Playground for VoxPoser.
- LMP.py: Implementation of Language Model Programs (LMPs) that recursively generate code to decompose instructions and compose value maps for each sub-task.
- interfaces.py: Interface that provides the necessary APIs for language models (i.e., LMPs) to operate in voxel space and to invoke the motion planner.
- planners.py: Implementation of a greedy planner that plans a trajectory (represented as a series of waypoints) for an entity/movable given a value map (a rough sketch of the idea is given after this list).
- controllers.py: Given a waypoint for an entity/movable, the controller applies (a series of) robot actions to achieve the waypoint.
- dynamics_models.py: Environment dynamics model for the case where the entity/movable is an object or object part. This is used in controllers.py to perform MPC.
- prompts/rlbench: Prompts used by the different Language Model Programs (LMPs) in VoxPoser.
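As a rough illustration of the kind of planning planners.py performs (a simplified sketch, not the repository's implementation; function and variable names, as well as the "lower is better" sign convention, are assumptions), a greedy planner can repeatedly move to the neighboring voxel with the best value until it reaches a local optimum:

```python
import numpy as np

def greedy_plan(value_map, start, max_steps=200):
    """Illustrative greedy waypoint planner over a 3D value map.

    value_map: (X, Y, Z) array where lower values are assumed better.
    start: integer (x, y, z) voxel index of the entity being moved.
    Returns a list of voxel waypoints ending at a local minimum.
    """
    current = np.array(start)
    waypoints = [tuple(current)]
    for _ in range(max_steps):
        best, best_val = None, value_map[tuple(current)]
        # Examine the 26-connected neighborhood of the current voxel.
        for d in np.ndindex(3, 3, 3):
            offset = np.array(d) - 1
            if not offset.any():
                continue  # skip the zero offset (the current voxel itself)
            cand = current + offset
            if np.any(cand < 0) or np.any(cand >= value_map.shape):
                continue  # out of bounds
            if value_map[tuple(cand)] < best_val:
                best, best_val = cand, value_map[tuple(cand)]
        if best is None:
            break  # local minimum of the value map reached
        current = best
        waypoints.append(tuple(current))
    return waypoints
```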
Environment and utilities:
envs:
  - rlbench_env.py: Wrapper of the RLBench env to expose useful functions for VoxPoser.
  - task_object_names.json: Mapping of object names exposed to VoxPoser to their corresponding scene object names for each individual task.
- configs/rlbench_config.yaml: Config file for all the involved modules in the RLBench environment.
- arguments.py: Argument parser for the config file.
- LLM_cache.py: Caching of language model outputs, written to disk to save cost and time (a minimal sketch of the idea follows this list).
- utils.py: Utility functions.
- visualizers.py: A Plotly-based visualizer for value maps and planned trajectories.
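The caching idea in LLM_cache.py can be sketched as follows (a minimal illustration, not the actual implementation; the class and method names are made up): key each request by a hash of the prompt and sampling parameters, and persist the results to disk so repeated runs reuse earlier completions.

```python
import hashlib
import os
import pickle

class DiskCache:
    """Illustrative prompt -> completion cache persisted to disk."""

    def __init__(self, path="llm_cache.pkl"):
        self.path = path
        if os.path.exists(path):
            with open(path, "rb") as f:
                self.cache = pickle.load(f)
        else:
            self.cache = {}

    def _key(self, prompt, **params):
        # Hash the prompt together with the sampling parameters so that
        # different temperatures/models do not collide.
        blob = repr((prompt, sorted(params.items()))).encode()
        return hashlib.sha256(blob).hexdigest()

    def get_or_call(self, llm_fn, prompt, **params):
        key = self._key(prompt, **params)
        if key not in self.cache:
            self.cache[key] = llm_fn(prompt, **params)  # only call on a miss
            with open(self.path, "wb") as f:
                pickle.dump(self.cache, f)
        return self.cache[key]
```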
Real-World Deployment
To adapt the code for deployment on a real robot, most changes should be confined to the environment file (e.g., you can make a copy of rlbench_env.py and implement the same APIs on top of your perception and controller modules).
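A hypothetical skeleton of such an environment file is shown below; the method names are placeholders, so mirror whatever rlbench_env.py actually exposes when you adapt the code.

```python
class RealRobotEnv:
    """Hypothetical real-robot counterpart of rlbench_env.py.

    Method names below are placeholders; mirror the actual API
    exposed by rlbench_env.py when adapting the code.
    """

    def __init__(self, perception, controller):
        self.perception = perception  # e.g., detector + segmenter + tracker
        self.controller = controller  # e.g., an OSC-based low-level controller

    def get_object_names(self):
        # Names the LMPs are allowed to refer to in generated code.
        raise NotImplementedError

    def get_obs(self):
        # Return RGB-D frames plus per-object masks from the perception stack,
        # analogous to the masks RLBench's get_observation provides.
        raise NotImplementedError

    def apply_action(self, action):
        # Forward a target pose / gripper command to the low-level controller.
        raise NotImplementedError
```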
Our perception pipeline consists of the following modules: OWL-ViT for open-vocabulary detection in the first frame, SAM for converting the detected bounding boxes to masks in the first frame, and XMem for tracking the masks over the subsequent frames. Today, you may consider simplifying the pipeline by using only an open-vocabulary detector together with SAM 2 for segmentation and tracking. Our controller is based on the OSC implementation from Deoxys. More details can be found in the paper.
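In pseudocode, the pipeline's data flow looks roughly like this (illustrative only; detect, segment, and track stand in for OWL-ViT, SAM, and XMem, whose real interfaces differ):

```python
def perception_pipeline(frames, object_names, detect, segment, track):
    """Illustrative data flow of the real-world perception pipeline."""
    # 1. Open-vocabulary detection on the first frame: text queries -> boxes.
    boxes = detect(frames[0], object_names)
    # 2. Promptable segmentation on the first frame: boxes -> masks.
    masks = segment(frames[0], boxes)
    per_frame_masks = [masks]
    # 3. Video object segmentation: propagate the masks over time.
    for frame in frames[1:]:
        masks = track(frame, masks)
        per_frame_masks.append(masks)
    return per_frame_masks
```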
To avoid compounded latency introduced by different modules (especially the perception pipeline), you may also consider running a concurrent process that only performs tracking.
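One possible structure, sketched with Python's multiprocessing (queue contents and function names are assumptions), keeps a dedicated worker that only runs the tracker while the main loop streams frames to it:

```python
import multiprocessing as mp

def tracking_worker(track_fn, init_masks, frame_queue, mask_queue):
    """Runs only the tracker in its own process so the control loop never
    blocks on the slower detection/segmentation modules."""
    masks = init_masks                   # masks from detector + SAM on frame 0
    while True:
        frame = frame_queue.get()
        if frame is None:                # sentinel: shut down the worker
            break
        masks = track_fn(frame, masks)   # e.g., an XMem-style tracking call
        mask_queue.put(masks)            # publish the freshest masks
```

The main process can then always read the most recent masks from mask_queue instead of rerunning detection and segmentation at every control step.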
Acknowledgments
- Environment is based on RLBench.
- Implementation of Language Model Programs (LMPs) is based on Code as Policies.
- Some code snippets are from Where2Act.