# Bi-DexHands: Bimanual Dexterous Manipulation via Reinforcement Learning
<img src="assets/image_folder/coverv3.jpg" width="1000" border="1"/>

## Update
[2023/02/09] We re-packaged Bi-DexHands. You can now create Bi-DexHands environments not only from the command line but also from your own Python scripts; check the Use Bi-DexHands in Python scripts section of our README below.
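The update above says environments can now be driven from a Python script. The sketch below shows the shape of the usual make/reset/step loop; since the real environments require Isaac Gym on a GPU, a hypothetical stand-in environment class is used here, and the commented `bidexhands` call is an assumption to be checked against the README section referenced above.

```python
import numpy as np

class StandInEnv:
    """Hypothetical stand-in for a Bi-DexHands environment.

    Real environments need Isaac Gym on a GPU; this class only mimics the
    batched (num_envs, ...) reset/step interface so the loop shape is clear.
    """
    def __init__(self, num_envs=4, obs_dim=8, act_dim=2, horizon=10):
        self.num_envs, self.obs_dim, self.act_dim = num_envs, obs_dim, act_dim
        self.horizon = horizon
        self._t = 0

    def reset(self):
        self._t = 0
        return np.zeros((self.num_envs, self.obs_dim))

    def step(self, actions):
        self._t += 1
        obs = np.random.randn(self.num_envs, self.obs_dim)
        rew = np.zeros(self.num_envs)
        done = np.full(self.num_envs, self._t >= self.horizon)
        return obs, rew, done, {}

# The same loop would drive a real environment created via the package,
# e.g. `import bidexhands as bi; env = bi.make("ShadowHandOver", "ppo")`
# (function name and arguments assumed -- verify against the README).
env = StandInEnv()
obs = env.reset()
done = np.zeros(env.num_envs, dtype=bool)
while not done.all():
    actions = np.random.uniform(-1.0, 1.0, (env.num_envs, env.act_dim))
    obs, rew, done, info = env.step(actions)
```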
[2022/11/24] We now support visual observations for all tasks; check this document for visual input.
[2022/10/02] We now support rl-games, the default RL library of IsaacGymEnvs; check our README below.
Bi-DexHands (click bi-dexhands.ai) provides a collection of bimanual dexterous manipulation tasks and reinforcement learning algorithms. Reaching human-level sophistication of hand dexterity and bimanual coordination remains an open challenge for modern robotics researchers. To better help the community study this problem, Bi-DexHands is developed with the following key features:
- Isaac Efficiency: Bi-DexHands is built within Isaac Gym; it supports running thousands of environments simultaneously. For example, on one NVIDIA RTX 3090 GPU, Bi-DexHands can reach 40,000+ mean FPS by running 2,048 environments in parallel.
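The efficiency bullet above rests on stepping thousands of environments as one batched operation. The NumPy sketch below (not Isaac Gym itself, and with illustrative state/action sizes rather than the real ShadowHand dimensions) shows the pattern: all per-environment state lives in one `(num_envs, ...)` array, so a step of all 2,048 environments is a single vectorized call instead of a Python loop.

```python
import numpy as np

num_envs = 2048          # matches the RTX 3090 figure quoted above
obs_dim, act_dim = 24, 6  # illustrative sizes, not the real ShadowHand dims

# One array holds the state of every environment.
states = np.zeros((num_envs, obs_dim))
actions = np.random.uniform(-1.0, 1.0, (num_envs, act_dim))

def step_batch(states, actions):
    # Toy dynamics: every environment advances in one fused array operation,
    # which is what makes tens of thousands of steps per second feasible.
    padded = np.pad(actions, ((0, 0), (0, obs_dim - act_dim)))
    states = states + 0.01 * padded
    rewards = -np.linalg.norm(states, axis=1)
    return states, rewards

states, rewards = step_batch(states, actions)
```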
- Comprehensive RL Benchmark: we provide the first bimanual manipulation task environment for RL, MARL, Multi-task RL, Meta RL, and Offline RL practitioners, along with a comprehensive benchmark for SOTA continuous control model-free RL/MARL methods. See example
- Heterogeneous-agents Cooperation: Agents in Bi-DexHands (i.e., joints, fingers, hands,...) are genuinely heterogeneous; this is very different from common multi-agent environments such as SMAC where agents can simply share parameters to solve the task.
- Task Generalization: we introduce a variety of dexterous manipulation tasks (e.g., handover, lift up, throw, place, put...) as well as enormous target objects from the YCB and SAPIEN dataset (>2,000 objects); this allows meta-RL and multi-task RL algorithms to be tested on the task generalization front.
- Point Cloud: we provide the ability to use point clouds as observations. We use the depth camera in Isaac Gym to capture a depth image and then convert it to a partial point cloud. The pose and number of depth cameras can be customized to obtain point clouds from different angles; the density of the generated point cloud depends on the number of camera pixels. See the visual input docs.
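The point-cloud bullet above describes converting a depth image into a partial point cloud. A minimal NumPy sketch of the standard pinhole back-projection is below; the camera intrinsics here are made up for illustration, and the repo's actual camera setup lives in the visual input docs.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W), in meters, into an (N, 3) point cloud.

    Pixels with depth <= 0 are treated as invalid and dropped, which is why
    the result is a *partial* cloud; its density scales with the pixel count.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Illustrative intrinsics; real values come from the Isaac Gym camera config.
depth = np.full((4, 4), 0.5)  # a flat surface 0.5 m from the camera
cloud = depth_to_pointcloud(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```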
## Quick Demos
Contents of this repo are as follows:
- Installation <!-- - [Install from PyPI](#Install-from-PyPI) -->
- Introduction to Bi-DexHands
- Overview of Environments
- Overview of Algorithms
- Getting Started
- Environments Performance
- Offline RL Datasets
- Use rl_games to train our tasks
- Future Plan
- Customizing your Environments
- How to change the type of dexterous hand
- How to add a robotic arm drive to the dexterous hand
- The Team
- License
For more information about this work, please check our paper.
## Installation
Details regarding installation of IsaacGym can be found here. We currently support the Preview Release 3/4 version of IsaacGym.
### Pre-requisites
The code has been tested on Ubuntu 18.04/20.04 with Python 3.7/3.8. The minimum recommended NVIDIA driver version for Linux is 470.74 (dictated by IsaacGym support).
We use Anaconda to create virtual environments. To install Anaconda, follow the instructions here.
Ensure that Isaac Gym works on your system by running one of the examples from the python/examples
directory, like joint_monkey.py. If you have any trouble running the samples, please follow the troubleshooting steps described in the Isaac Gym Preview Release 3/4 install instructions.
Once Isaac Gym is installed and samples work within your current python environment, install this repo:
<!-- #### Install from PyPI Bi-DexHands is hosted on PyPI. It requires Python >= 3.7. You can simply install Bi-DexHands from PyPI with the following command: ```bash pip install bidexhands ``` -->

#### Install from source code

You can also install this repo from the source code:

```bash
pip install -e .
```
## Introduction
This repository contains complex dexterous hand control tasks. Bi-DexHands is built on NVIDIA Isaac Gym with a high performance guarantee for training RL algorithms. Our environments focus on applying model-free RL/MARL algorithms to bimanual dexterous manipulation, which is considered a challenging task for traditional control methods.
## Getting Started
### <span id="task">Tasks</span>
Source code for the tasks can be found in envs/tasks. The detailed state/action/reward settings are described here.
So far, we release the following tasks (with many more to come):
| Environments | Description | Demo |
| :----: | :----: | :----: |
| ShadowHand Over | These environments involve two fixed-position hands. The hand which starts with the object must find a way to hand it over to the second hand. | <img src="assets/image_folder/0v2.gif" width="250"/> |
| ShadowHandCatch Underarm | These environments again have two hands, however now they have some additional degrees of freedom that allow them to translate/rotate their centres of mass within some constrained region. | <img src="assets/image_folder/hand_catch_underarmv2.gif" align="middle" width="250"/> |
| ShadowHandCatch Over2Underarm | This environment is made up of half ShadowHandCatchUnderarm and half ShadowHandCatchOverarm; the object needs to be thrown from the vertical hand to the palm-up hand. | <img src="assets/image_folder/2v2.gif" align="middle" width="250"/> |
| ShadowHandCatch Abreast | This environment is similar to ShadowHandCatchUnderarm; the difference is that the two hands are placed side by side instead of facing each other. | <img src="assets/image_folder/1v2.gif" align="middle" width="250"/> |
| ShadowHandCatch TwoCatchUnderarm | These environments involve coordination between the two hands so as to throw the two objects between hands (i.e. swapping them). | <img src="assets/image_folder/two_catchv2.gif" align="middle" width="250"/> |
| ShadowHandLift Underarm | This environment requires grasping the pot handle with two hands and lifting the pot to the designated position. | <img src="assets/image_folder/3v2.gif" align="middle" width="250"/> |
| ShadowHandDoor OpenInward | This environment requires the closed door to be opened, and the door can only be pulled inwards. | <img src="assets/image_folder/door_open_inwardv2.gif" align="middle" width="250"/> |
| ShadowHandDoor OpenOutward | This environment requires a closed door to be opened, and the door can only be pushed outwards. | <img src="assets/image_folder/open_outwardv2.gif" align="middle" width="250"/> |
| ShadowHandDoor CloseInward | This environment requires the open door to be closed, and the door is initially open inwards. | <img src="assets/image_folder/close_inwardv2.gif" align="middle" width="250"/> |
| ShadowHand BottleCap | This environment involves two hands and a bottle; we need to hold the bottle with one hand and open the bottle cap with the other. | <img src="assets/image_folder/bottle_capv2.gif" align="middle" width="250"/> |
| ShadowHandPush Block | This environment requires both hands to touch the block and push it forward. | <img src="assets/image_folder/push_block.gif" align="middle" width="250"/> |
| ShadowHandOpen Scissors | This environment requires both hands to cooperate to open the scissors. | <img src="assets/image_folder/scissors.gif" align="middle" width="250"/> |
| ShadowHandOpen PenCap | This environment requires both hands to cooperate to open the pen cap. | <img src="assets/image_folder/pen.gif" align="middle" width="250"/> |
| ShadowHandSwing Cup | This environment requires two hands to hold the cup handle and rotate it 90 degrees. | <img src="assets/image_folder/swing_cup.gif" align="middle" width="250"/> |
| ShadowHandTurn Button | This environment requires both hands to press the button. | |