[RSS 2025] SKIL: Semantic Keypoint Imitation Learning for Generalizable, Data‑efficient Robot Manipulation

[Project page] [arXiv]

Shengjie Wang<sup>1,2,3</sup>, Jiacheng You<sup>1,2,3</sup>, Yihang Hu<sup>1</sup>, Jiongye Li<sup>1</sup>, Yang Gao<sup>1,2,3</sup>

<sup>1</sup>Tsinghua University, <sup>2</sup>Shanghai Qi Zhi Institute, <sup>3</sup>Shanghai AI Laboratory

<img src="media/teaser.png" alt="drawing" width="100%"/>

🚀 Key Contributions

  • We propose the Semantic Keypoint Imitation Learning (SKIL) framework, which automatically obtains the semantic keypoints through a vision foundation model, and forms the descriptor of semantic keypoints for downstream policy learning.

    • The sparsity of semantic keypoint representations enables data-efficient learning.
    • The proposed descriptor of semantic keypoints enhances the policy’s robustness.
    • Such semantic representations enable effective learning from cross-embodiment human and robot videos.
  • SKIL delivers a marked improvement over previous methods on 6 real-world tasks, achieving an average success rate of 72.8% in testing, a 146% relative increase over the baselines. With as few as 30 demonstrations, SKIL can perform long-horizon tasks such as hanging a towel or a cloth on a rack, where previous methods fail completely.
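As a rough illustration of the descriptor idea (not the authors' implementation), semantic keypoints can be localized in a new frame by cosine similarity between reference keypoint descriptors and a dense feature map from a vision foundation model. The function below is a minimal NumPy sketch; the shapes and names are assumptions.

```python
import numpy as np

def match_keypoints(feature_map, ref_descriptors):
    """Locate semantic keypoints by cosine similarity.

    feature_map: (H, W, D) dense features from a vision foundation model.
    ref_descriptors: (K, D) descriptors of the K reference keypoints.
    Returns (K, 2) pixel (row, col) coordinates of the best matches.
    """
    H, W, D = feature_map.shape
    flat = feature_map.reshape(-1, D)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    ref = ref_descriptors / (np.linalg.norm(ref_descriptors, axis=1, keepdims=True) + 1e-8)
    sim = ref @ flat.T                 # (K, H*W) cosine similarities
    idx = sim.argmax(axis=1)           # best-matching pixel per keypoint
    return np.stack([idx // W, idx % W], axis=1)
```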

<img src="media/method.png" alt="drawing" width="100%"/>

🧩 Install

  1. Clone the repository

```bash
git clone https://github.com/SKIL-robotics/SKIL.git
cd SKIL
```
  2. Create and activate the conda environment

```bash
conda env create -f conda_environment.yml
conda activate skill
```
  3. Install MuJoCo into ~/.mujoco

```bash
mkdir -p ~/.mujoco
cd ~/.mujoco
wget https://github.com/deepmind/mujoco/releases/download/2.1.0/mujoco210-linux-x86_64.tar.gz -O mujoco210.tar.gz --no-check-certificate
tar -xvzf mujoco210.tar.gz
```

Then add the following lines to your shell startup file (usually ~/.bashrc), run `source ~/.bashrc`, and open a new terminal so the variables take effect.

```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HOME}/.mujoco/mujoco210/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export MUJOCO_GL=egl
```
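As a quick sanity check before building mujoco-py, a hypothetical helper (not part of the repo) can verify that the exported variables are visible to Python:

```python
import os

def mujoco_env_ok(env):
    """Return True when the MuJoCo variables from ~/.bashrc are visible."""
    lib_dirs = env.get("LD_LIBRARY_PATH", "").split(":")
    mujoco_bin = os.path.expanduser("~/.mujoco/mujoco210/bin")
    return mujoco_bin in lib_dirs and env.get("MUJOCO_GL") == "egl"

# Example: check the current shell environment
# print(mujoco_env_ok(os.environ))
```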

Then install mujoco-py (located in the third_party folder):

```bash
cd YOUR_PATH_TO_THIRD_PARTY/mujoco-py-2.1.2.14
pip install -e .
```
  4. Install third-party dependencies (Metaworld)

```bash
cd third_party/metaworld
pip install -e .
```

⚙️ Usage

We illustrate the simulation evaluation pipeline using the Metaworld Hammer task as an example. The full process involves four main steps:

1. Generate Expert Demonstrations

Run the following script to generate expert demonstrations for the Hammer task:

```bash
bash scripts/generate_data/generate_metaworld_data.sh
```

  • 💡 If you're using a different Metaworld environment, modify the task_lst variable inside the script accordingly.

2. One-time Selection of Semantic Keypoints

Navigate to the keypoint generation folder:

```bash
cd scripts/generate_kp
```

There are two ways to annotate keypoints:

Option A: Manual Keypoint Selection

Launch the interactive notebook:

```bash
jupyter notebook draw_kp_metaworld_skil.ipynb
```

Select 10 task-relevant keypoints by clicking on the object in the provided visualization interface.

Option B: Automatic Keypoint Extraction (SAM + KMeans)

If the Segment Anything Model (SAM) is installed, you can automatically generate keypoints via KMeans clustering on extracted object masks:

```bash
jupyter notebook draw_kp_metaworld_skil_kmeans.ipynb
```
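Notebook details aside, the SAM + KMeans idea can be sketched as clustering the pixel coordinates of an object mask into k centers. Below is a minimal pure-NumPy k-means for illustration only; the notebook may instead use scikit-learn and cluster in feature space:

```python
import numpy as np

def keypoints_from_mask(mask, k=10, iters=20, seed=0):
    """Cluster mask pixel coordinates into k keypoints (minimal k-means)."""
    pts = np.argwhere(mask).astype(float)   # (N, 2) row/col coords of mask pixels
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute the means
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)  # (N, k)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = pts[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers.round().astype(int)
```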

3. Preprocess Data into Zarr Format

Convert the raw demonstrations and keypoints into training data:

```bash
bash scripts/data2zarr/metaworld/metaworld_skil.sh
```
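The exact Zarr schema is defined by the script, but diffusion-policy-style datasets typically store per-step arrays concatenated across episodes plus an episode_ends index. This hypothetical NumPy sketch shows that layout; the field names "keypoints" and "action" are assumptions:

```python
import numpy as np

def pack_episodes(episodes):
    """Flatten a list of episodes into concatenated arrays plus episode_ends.

    episodes: list of dicts with per-step 'keypoints' (T, K, 3) and 'action' (T, A).
    """
    data = {
        "keypoints": np.concatenate([e["keypoints"] for e in episodes]),
        "action": np.concatenate([e["action"] for e in episodes]),
    }
    # cumulative step counts: episode i spans data[episode_ends[i-1]:episode_ends[i]]
    episode_ends = np.cumsum([len(e["action"]) for e in episodes])
    return data, episode_ends
```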

⚠️ Make sure to update the task_lst in this script if using tasks other than hammer.

4. Train SKIL Policy

Execute the training script:

```bash
bash scripts/train/train_skil.sh
```

You can modify key training parameters directly in the script:

  • seed=0 – Random seed
  • gpu_id=0,6,7 – GPU IDs to use
  • num_epochs=1000 – Number of training epochs
  • benchmark="metaworld" – Benchmark name
  • env="hammer" – Task/environment name
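For the seed parameter to give reproducible runs, every random-number generator in use must be seeded. A minimal sketch (add torch.manual_seed(seed) as well if PyTorch is in use):

```python
import random
import numpy as np

def set_seed(seed=0):
    """Seed the RNGs relevant to training for reproducibility."""
    random.seed(seed)
    np.random.seed(seed)

# Two runs with the same seed produce identical samples
set_seed(0)
a = np.random.rand(3)
set_seed(0)
b = np.random.rand(3)
```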

🤖 Real Robot

Our real-world robot experiments are built on top of the hardware setup provided by the DROID project. For data collection and policy evaluation, we closely follow the DROID codebase, adapting and extending its tooling where necessary.

In particular, our policy evaluation is implemented by modifying the policy_wrapper.py script from DROID to wrap around the policy classes defined in the Diffusion Policy framework. This integration enables seamless evaluation of our learned policies on real hardware.
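The wrapper's actual interface comes from DROID's policy_wrapper.py and the Diffusion Policy classes. Purely as an illustrative sketch, an adapter around a policy that predicts short action chunks might look like this (the class and method names are assumptions, not the real APIs):

```python
class PolicyWrapper:
    """Hypothetical adapter exposing step(obs) -> action on a model that
    predicts short action chunks, as Diffusion Policy-style models do."""

    def __init__(self, policy, chunk_size=8):
        self.policy = policy          # assumed to provide predict_action(obs)
        self.chunk_size = chunk_size
        self._queue = []              # actions left from the last predicted chunk

    def step(self, obs):
        if not self._queue:           # refill: query the policy for a new chunk
            chunk = self.policy.predict_action(obs)
            self._queue = list(chunk)[: self.chunk_size]
        return self._queue.pop(0)
```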

If you encounter any issues or have questions about replicating our evaluation setup, feel free to contact us.

<img src="media/real-world.png" alt="drawing" width="100%"/>

🧾 Acknowledgements

Our codebase builds upon several influential works in the imitation learning and robotic manipulation community; in particular, we reference and adapt components from the DROID and Diffusion Policy codebases described above.

We sincerely thank the authors of these projects for their contributions and open-sourcing their code. Their work has been instrumental in the development of this project.

Contact Shengjie Wang if you have any questions or suggestions.

📚 Citation

```bibtex
@article{wang2025skil,
  title   = {SKIL: Semantic Keypoint Imitation Learning for Generalizable Data-efficient Manipulation},
  author  = {Wang, Shengjie and You, Jiacheng and Hu, Yihang and Li, Jiongye and Gao, Yang},
  journal = {arXiv preprint arXiv:2501.14400},
  year    = {2025}
}
```
