
<!-- PROJECT LOGO --> <br /> <div align="center"> <a href="https://github.com/sheldonresearch/ProG"> <img height="150" src="Logo.jpg?sanitize=true" /> </a> </div> <h3 align="center">🌟ProG: A Unified Python Library for Graph Prompting🌟</h3> <div align="center">

| Quick Start | Paper | Media Coverage | Call For Contribution |


</div>

🌟ProG🌟 (Prompt Graph) is a library built upon PyTorch for easily conducting single- or multi-task prompting with pre-trained Graph Neural Networks (GNNs). You can use this library to run various graph workflows, such as supervised learning, pre-training then prompting, and pre-training then fine-tuning, for your node- and graph-level tasks. The starting point of this library is our KDD23 paper All in One (Best Research Paper Award, the first time this award has gone to institutions in Hong Kong and Mainland China).

  • The ori branch of this repository is the source code of All in One, in which you can conduct even more kinds of tasks with more flexible graph prompts.

  • The main branch of this library is the source code of ProG: A Graph Prompt Learning Benchmark. It supports more than 5 graph prompt models (e.g. All-in-One, GPPT, GPF-plus, GPF, GraphPrompt) and more than 6 pre-training strategies (e.g. DGI, GraphMAE, EdgePreGPPT, EdgePreGprompt, GraphCL, SimGRACE), and has been tested on more than 15 graph datasets, covering both homophilic and heterophilic graphs from various domains at different scales. Click here to see the full and latest supported list (backbones, pre-training strategies, graph prompts, and datasets).

<div align="center">

Click to See A Full List of Our Works in Graph Prompts

</div> <h3 align="left">🌟Acknowledgement</h3> <div align="left"> </div>

Development Progress:

  • ori branch started [Jul 2023]
  • main branch started [Jun 2024]
  • wide testing, debugging, and updating [now]
  • stable branch started [around 20% complete]
<br> <div align="left">

</div>
  • 2024/10/24: BIG NEWS! A Detailed Hands-on Blog Coming Soon

    We are now trying our best to prepare a detailed, hands-on blog with deeper insights, troubleshooting, training tricks, and an entirely new perspective for graph prompting (and our ProG project). We just started recently and we plan to finish this hard work by the end of next month. Please wait for a while!

  • 2024/10/15: We released a new work with graph prompts on cross-domain recommendation:

    Hengyu Zhang, Chunxu Shen, Xiangguo Sun, Jie Tan, Yu Rong, Chengzhi Piao, Hong Cheng, Lingling Yi. Adaptive Coordinators and Prompts on Heterogeneous Graphs for Cross-Domain Recommendations. https://arxiv.org/abs/2410.11719

  • 2024/10/03: We present a comprehensive theoretical analysis of graph prompt and release our theory analysis as follows:

    Qunzhong Wang and Xiangguo Sun and Hong Cheng. Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis. https://arxiv.org/abs/2410.01635

  • 2024/09/26: Our Benchmark Paper was accepted by NeurIPS 2024:

    Chenyi Zi, Haihong Zhao, Xiangguo Sun, Yiqing Lin, Hong Cheng, Jia Li. ProG: A Graph Prompt Learning Benchmark. https://arxiv.org/abs/2406.05346

    • (prior news) 2024/06/08: We use our developed ProG to extensively evaluate various graph prompts, and released our analysis report as follows: Chenyi Zi, Haihong Zhao, Xiangguo Sun, Yiqing Lin, Hong Cheng, Jia Li. ProG: A Graph Prompt Learning Benchmark. https://arxiv.org/abs/2406.05346
  • 2024/01/01: A big updated version released!
  • 2023/11/28: We released a comprehensive survey on graph prompt!

    Xiangguo Sun, Jiawen Zhang, Xixi Wu, Hong Cheng, Yun Xiong, Jia Li. Graph Prompt Learning: A Comprehensive Survey and Beyond https://arxiv.org/abs/2311.16534

  • 2023/11/15: We released a 🦀repository🦀 for a comprehensive collection of research papers, datasets, and readily accessible code implementations.
<br>

Installation

PyPI

From ProG 1.0 onwards, ProG is available on PyPI. To install it, simply run

pip install prompt-graph

Or you can git clone our repository directly.

Environment Setup

Before you begin, please make sure that you have Anaconda or Miniconda installed on your system. This guide assumes that you have a CUDA-enabled GPU.

# Create and activate a new Conda environment named 'ProG'
conda create -n ProG
conda activate ProG

# Install PyTorch with CUDA 11.7 support
# If you use a different CUDA version, please refer to the PyTorch website for the appropriate package versions.
conda install numpy
conda install pytorch==2.0.1 pytorch-cuda=11.7 -c pytorch -c nvidia

# Install additional dependencies
pip install torch_geometric pandas torchmetrics Deprecated 

# If you run into binary-compatibility issues with torch-geometric's compiled extensions, install them via conda instead:

conda install pytorch-sparse -c pyg

In addition, you can use our pre-trained GNNs directly, or pre-train the GNN you want with our pretrain module. For the latter, install torch_cluster:

pip install torch_cluster  -f https://data.pyg.org/whl/torch-2.3.0+cu121.html

Wheels for other torch and CUDA versions are listed at https://data.pyg.org/whl/
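The wheel index URL must match your installed torch and CUDA versions. As a minimal sketch (the helper function below is illustrative, not part of ProG), the URL can be assembled like this:

```python
from typing import Optional

def pyg_wheel_index(torch_version: str, cuda: Optional[str] = None) -> str:
    """Build the PyG extension wheel index URL for a torch/CUDA combination.

    `cuda` is a short version tag such as "121" (CUDA 12.1); None selects
    the CPU-only wheels. The URL pattern follows https://data.pyg.org/whl/.
    """
    suffix = f"cu{cuda}" if cuda else "cpu"
    return f"https://data.pyg.org/whl/torch-{torch_version}+{suffix}.html"

print(pyg_wheel_index("2.3.0", "121"))
# https://data.pyg.org/whl/torch-2.3.0+cu121.html
```

Pass the resulting URL to `pip install torch_cluster -f <url>` as shown above.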

Quick Start

The Architecture of ProG is shown as follows:

<img height="350" src="/ProG_pipeline.jpg?sanitize=true" />

Firstly, download Experiment.zip (126 MB) from OneDrive: https://1drv.ms/u/s!ArZGDth_ySjPjkW2n-zsF3_GGvC1?e=rEnBA7. Unzip it to obtain our datasets, the already pre-trained models, the induced graphs, and the sample data for the few-shot setting. (Please make sure the unzipped folder is named /Experiment. If the download link is unavailable, please drop us an email at barristanzi666@gmail.com.)

Warning! Dataset providers may update the datasets themselves, causing compatibility issues with the pre-trained models we provide. Such issues have been reported for the ENZYMES and BZR datasets.

It is recommended to pretrain your model by yourself.

unzip Experiment.zip

We have provided scripts with hyper-parameter settings to reproduce the experimental results.

With Customized Hyperparameters

For a downstream task, you can obtain experimental results by running the script with the parameters you want, for example:

python downstream_task.py --pre_train_model_path './Experiment/pre_trained_model/Cora/Edgepred_Gprompt.GCN.128hidden_dim.pth' --task NodeTask --dataset_name 'Cora' --gnn_type 'GCN' --prompt_type 'GPF-plus' --shot_num 1 --hid_dim 128 --num_layer 2  --lr 0.02 --decay 2e-6 --seed 42 --device 0
python downstream_task.py --pre_train_model_path './Experiment/pre_trained_model/BZR/DGI.GCN.128hidden_dim.pth' --task GraphTask --dataset_name 'BZR' --gnn_type 'GCN' --prompt_type 'All-in-one' --shot_num 1 --hid_dim 128 --num_layer 2  --lr 0.02 --decay 2e-6 --seed 42 --device 1
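To sweep several seeds or shot counts without retyping the full command, the flags above can be assembled programmatically. A minimal sketch (the script name and flags mirror the example commands above; the helper itself is illustrative):

```python
import subprocess

def build_cmd(model_path, task, dataset, prompt_type, seed=42, shot_num=1):
    # Mirrors the downstream_task.py invocation shown above; only flags
    # that appear in the example commands are used here.
    return [
        "python", "downstream_task.py",
        "--pre_train_model_path", model_path,
        "--task", task,
        "--dataset_name", dataset,
        "--gnn_type", "GCN",
        "--prompt_type", prompt_type,
        "--shot_num", str(shot_num),
        "--hid_dim", "128",
        "--num_layer", "2",
        "--lr", "0.02",
        "--decay", "2e-6",
        "--seed", str(seed),
        "--device", "0",
    ]

for seed in (42, 43, 44):
    cmd = build_cmd("./Experiment/pre_trained_model/Cora/Edgepred_Gprompt.GCN.128hidden_dim.pth",
                    "NodeTask", "Cora", "GPF-plus", seed=seed)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch each run
```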

With Optimal Hyperparameters through Random Search

Perform a random search of hyperparameters for the GCN model on the Cora dataset. (NodeTask)

python bench.py --pre_train_model_path './Experiment/pre_trained_model/Cora/GraphCL.GCN.128hidden_dim.pth' --task NodeTask --dataset_name 'Cora' --gnn_type 'GCN' --prompt_type 'GPF-plus' --shot_num 1 --hid_dim 128 --num_layer 2 --seed 42 --device 0
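bench.py performs the random search internally; the core idea can be sketched as follows (the search space below is illustrative, not bench.py's actual ranges):

```python
import random

# Hypothetical search space; bench.py defines its own candidate ranges.
SPACE = {
    "lr": [0.1, 0.05, 0.02, 0.01, 0.005, 0.001],
    "decay": [0.0, 1e-6, 2e-6, 1e-5],
    "hid_dim": [64, 128, 256],
}

def sample_config(rng: random.Random) -> dict:
    # Draw one value uniformly from each hyperparameter's candidate list.
    return {name: rng.choice(values) for name, values in SPACE.items()}

rng = random.Random(42)
for _ in range(5):
    # Each trial would train and evaluate the model with this configuration,
    # keeping the configuration with the best validation score.
    print(sample_config(rng))
```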
<details> <summary ><strong>Table of The Following Contents</strong></summary> <ol> <li> <a href="#supportive-list">Supportive List</a> </li> <li> <a href="#pre-train-your-gnn-model">Pre-train your GNN model</a> </li> <li> <a href="#downstream-tasks">Downstream Tasks</a> </li> <li><a href="#datasets">Datasets</a></li> <li><a href="#prompt-class">Prompt Class</a></li> <li><a href="#environment-setup">Environment Setup</a></li> <li><a href="#todo-list">TODO List</a></li> </ol> </details>

With the Default Few-Shot Samples

To reproduce the train/test splits used in the benchmark, run unzip node.zip -d './Experiment/sample_data'; alternatively, skip unzipping and let the code split the dataset automatically.
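The automatic split follows the usual k-shot recipe: sample shot_num labeled examples per class for training and use the rest for testing. A generic sketch (not ProG's exact splitting code):

```python
import random
from collections import defaultdict

def few_shot_split(labels, shot_num, seed=42):
    """Pick `shot_num` training indices per class; the rest form the test set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train = []
    for y, idxs in sorted(by_class.items()):
        # Sample without replacement so the k shots per class are distinct.
        train.extend(rng.sample(idxs, shot_num))
    held_out = set(train)
    test = [i for i in range(len(labels)) if i not in held_out]
    return train, test

labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
train, test = few_shot_split(labels, shot_num=1)
print(train, test)
```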

Supportive List

**Supportive graph prompt approaches currently:**
