Hand-object Interaction Pretraining from Videos
This repo contains code for the paper Hand-object Interaction Pretraining from Videos.

<!-- Published in the International Conference on Computer Vision and Pattern Recognition (CVPR) 2019. -->
For a brief overview, check out the project webpage!

<img src='imgs/approach.png'>

For any questions, please contact Himanshu Gaurav Singh.
Setup
- Create the conda environment: `conda env create -f env.yml`.
- Install IsaacGym in this environment.
- Download the asset folder and put it in the root directory.
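As a quick sanity check after the steps above, the expected files can be verified with a small helper (an illustrative sketch, not part of the repo — `env.yml` comes from the instructions, while the folder name `assets` is an assumption about what the downloaded asset folder is called):

```shell
# Sanity-check sketch for the setup above. `env.yml` is from the
# instructions; the folder name `assets` is an assumption.
check_setup() {
  local root="${1:-.}"   # repo root, defaults to the current directory
  [ -f "$root/env.yml" ] || { echo "missing $root/env.yml"; return 1; }
  [ -d "$root/assets" ]  || { echo "missing $root/assets"; return 1; }
  echo "setup looks complete"
}
```

Running `check_setup` from the repo root prints the first missing item, or confirms the layout is in place.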
Running the code
Pretraining
- Download the hand-object interaction dataset from here. Extract it using `tar -xf hoi_pretraining_data.tar.xz` and put it under the root directory.
- Run `bash scripts/pretrain.sh <DATADIR>`.
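The two steps above can be wrapped in one helper. The archive name and `scripts/pretrain.sh` come from the instructions; that the archive unpacks into a directory named like the tarball is an assumption:

```shell
# Sketch: extract the dataset (if needed), then launch pretraining.
# Assumes the archive unpacks into a directory named like the tarball.
pretrain() {
  local datadir="${1:-hoi_pretraining_data}"
  if [ ! -d "$datadir" ] && [ -f "${datadir}.tar.xz" ]; then
    tar -xf "${datadir}.tar.xz"     # one-time extraction
  fi
  bash scripts/pretrain.sh "$datadir"
}
```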
Finetuning
- Download the pretrained checkpoint from here. You can also use your own trained checkpoint.
- For your choice of task, run `bash scripts/finetune/finetune_{task}.sh`.
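Since the valid `{task}` names depend on which scripts ship in `scripts/finetune/`, a small glob-based helper (an illustrative sketch, not part of the repo) can list them:

```shell
# Sketch: list available {task} names by globbing the finetune scripts.
# The scripts/finetune/finetune_{task}.sh layout is from the command above.
list_tasks() {
  local f
  for f in scripts/finetune/finetune_*.sh; do
    [ -e "$f" ] || continue          # glob matched nothing
    f="${f##*/finetune_}"            # drop the path prefix
    echo "${f%.sh}"                  # drop the .sh suffix
  done
}
```

Any printed name can then be substituted for `{task}` in the command above.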
Visualising trained policies
- Run `bash scripts/run_policy.sh <PATH_TO_POLICY>`.
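A tiny guard around the command above avoids launching the simulator with a bad checkpoint path; `scripts/run_policy.sh` is from the instructions, the rest is an illustrative sketch:

```shell
# Sketch: refuse to launch visualisation if the checkpoint is missing.
run_policy() {
  local ckpt="$1"
  [ -f "$ckpt" ] || { echo "no such checkpoint: $ckpt"; return 1; }
  bash scripts/run_policy.sh "$ckpt"
}
```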
Citation
Acknowledgment
This work was supported by the DARPA Machine Common Sense program, the DARPA Transfer from Imprecise and Abstract Models to Autonomous Technologies (TIAMAT) program, and by the ONR MURI award N00014-21-1-2801. This work was also funded by ONR MURI N00014-22-1-2773. We thank Adhithya Iyer for assistance with teleoperation systems, Phillip Wu for setting up the real robot, and Raven Huang, Jathushan Rajasegaran, and Yutong Bai for helpful discussions.
