# Rofunc
The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation

<img src="https://img.shields.io/badge/%F0%9F%A4%97%20models-hugging%20face-F8D521">
Repository address: https://github.com/Skylark0924/Rofunc <br> Documentation: https://rofunc.readthedocs.io/
<img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspSpatulaRofuncRLPPO.gif" width=25% /><img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspPower_drillRofuncRLPPO.gif" width=25% /><img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspPhillips_Screw_DriverRofuncRLPPO.gif" width=25% /><img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspLarge_clampRofuncRLPPO.gif" width=25% /> <img src="doc/img/task_gif/CURICoffeeStirring.gif" width=33.3% /><img src="doc/img/task_gif/CURIScrew.gif" width=33.3% /><img src="doc/img/task_gif/CURITaichiPushingHand.gif" width=33.3% /> <img src="doc/img/task_gif/UDH_Random_Motion.gif" width=25% /><img src="doc/img/task_gif/H1_Random_Motion.gif" width=25% /><img src="doc/img/task_gif/Bruce_Random_Motion.gif" width=25% /><img src="doc/img/task_gif/Walker_Random_Motion.gif" width=25% /> <img src="doc/img/task_gif/HumanoidFlipRofuncRLAMP.gif" width=33.3% /><img src="doc/img/task_gif/HumanoidDanceRofuncRLAMP.gif" width=33.3% /><img src="doc/img/task_gif/HumanoidRunRofuncRLAMP.gif" width=33.3% /> <img src="doc/img/task_gif/HumanoidASEHeadingSwordShieldRofuncRLASE.gif" width=33.3% /><img src="doc/img/task_gif/HumanoidASEStrikeSwordShieldRofuncRLASE.gif" width=33.3% /><img src="doc/img/task_gif/HumanoidASELocationSwordShieldRofuncRLASE.gif" width=33.3% /> <img src="doc/img/task_gif/BiShadowHandLiftUnderarmRofuncRLPPO.gif" width=33.3% /><img src="doc/img/task_gif/BiShadowHandDoorOpenOutwardRofuncRLPPO.gif" width=33.3% /><img src="doc/img/task_gif/BiShadowHandSwingCupRofuncRLPPO.gif" width=33.3% />
The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides valuable and convenient Python functions, including demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide an IsaacGym- and OmniIsaacGym-based robot simulator for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the pipeline of demonstration data collection, processing, learning, and deployment on robots.
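The full-process idea above (collect → pre-process → learn → plan) can be sketched end to end with a toy stand-in in plain NumPy. All function names here are illustrative placeholders, not the actual rofunc API:

```python
import numpy as np

# Toy stand-in for the full pipeline: record a demonstration, pre-process
# it, "learn" a compact representation, and roll out a plan from it.

def collect_demonstration(n_steps=100):
    """Stand-in for motion-capture recording: a noisy 1-D reach motion."""
    t = np.linspace(0.0, 1.0, n_steps)
    clean = t**2 * (3 - 2 * t)  # smooth S-curve from 0 to 1
    noise = 0.01 * np.random.default_rng(0).normal(size=n_steps)
    return clean + noise

def preprocess(demo, window=5):
    """Minimal pre-processing: moving-average smoothing."""
    kernel = np.ones(window) / window
    return np.convolve(demo, kernel, mode="same")

def learn_policy(demo):
    """'Learning' here is just fitting a cubic to the smoothed demo."""
    t = np.linspace(0.0, 1.0, len(demo))
    return np.polyfit(t, demo, 3)

def plan(coeffs, n_steps=50):
    """Generate a trajectory to execute from the learned representation."""
    t = np.linspace(0.0, 1.0, n_steps)
    return np.polyval(coeffs, t)
```

A usage pass would chain the stages: `plan(learn_policy(preprocess(collect_demonstration())))` yields a smooth trajectory approximating the demonstrated motion.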

## Citation

If you use Rofunc in a scientific publication, we would appreciate citations to the following paper:

```bibtex
@article{liu2023rofunc,
  title={Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
  author={Liu, Junjia and Dong, Zhipeng and Li, Chenzui and Li, Zhihao and Yu, Minghao and Delehelle, Donatien and Chen, Fei},
  year={2023},
  journal={Zenodo, https://github.com/Skylark0924/Rofunc},
  doi={10.5281/zenodo.10016946},
}
```
> [!WARNING]
> If our code is found to be used in a published paper without proper citation, we reserve the right to address this issue formally by contacting the editor to report potential academic misconduct.
## Update News
- [2024-12-24] Started work on supporting the Genesis simulator.
- v0.0.2.6: Supports dexterous grasping and human-humanoid robot skill transfer.
- [2024-12-20] Human-level skill transfer from humans to heterogeneous humanoid robots has been completed and is awaiting release. Preview
- [2024-01-24] CURI synergy-based SoftHand grasping tasks can now be trained with RofuncRL.
- [2023-10-31] RofuncRL, a modular and easy-to-use reinforcement learning sub-package designed for robot learning tasks, is released. It has been tested with simulators like OpenAIGym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as differentiable simulators like PlasticineLab and DiffCloth.
- ...
- If you want to know more about the update news, please refer to the changelog.
## Installation
Please refer to the installation guide.
## Documentation
To give you a quick overview of the Rofunc pipeline, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.
Note: ✅ Achieved | 🔄 Reformatting | ⛔ TODO
| Data | | Learning | | P&C | | Tools | | Simulator | |
|:---:|---|:---:|---|:---:|---|:---:|---|:---:|---|
| `xsens.record` | ✅ | `DMP` | ⛔ | `LQT` | ✅ | `config` | ✅ | `Franka` | ✅ |
| `xsens.export` | ✅ | `GMR` | ✅ | `LQTBi` | ✅ | `logger` | ✅ | `CURI` | ✅ |
| `xsens.visual` | ✅ | `TPGMM` | ✅ | `LQTFb` | ✅ | `datalab` | ✅ | `CURIMini` | 🔄 |
| [`opti.reco
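As a concrete illustration of one entry in the Learning column, here is a minimal 1-D discrete DMP (dynamic movement primitive) fit-and-rollout sketch in plain NumPy. This is a textbook-style simplification under standard gain assumptions, not the rofunc implementation:

```python
import numpy as np

def dmp_fit_rollout(demo, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
    """Fit a 1-D discrete DMP forcing term to `demo`, then reproduce it."""
    demo = np.asarray(demo, dtype=float)
    T = len(demo)
    dt = 1.0 / (T - 1)
    x0, g = demo[0], demo[-1]
    vel = np.gradient(demo, dt)
    acc = np.gradient(vel, dt)
    # canonical system: phase s decays exponentially from 1 toward 0
    s = np.exp(-alpha_s * np.linspace(0.0, 1.0, T))
    # forcing term implied by the demo under the spring-damper system
    f_target = acc - alpha * (beta * (g - demo) - vel)
    # Gaussian basis functions spaced along the phase variable
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
    h = n_basis ** 1.5 / c
    def features(sv):
        psi = np.exp(-h * (sv - c) ** 2)
        return psi / (psi.sum() + 1e-10) * sv * (g - x0)
    Phi = np.stack([features(sv) for sv in s])
    w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)  # basis weights
    # rollout: Euler-integrate the system with the learned forcing term
    x, v, sv, traj = x0, 0.0, 1.0, [x0]
    for _ in range(T - 1):
        a_t = alpha * (beta * (g - x) - v) + features(sv) @ w
        v += a_t * dt
        x += v * dt
        sv += -alpha_s * sv * dt
        traj.append(x)
    return np.array(traj)
```

Because the forcing term vanishes with the phase variable, the rollout converges to the demonstrated goal even if the learned weights are imperfect; this goal-convergence property is the main appeal of the DMP formulation.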
