

<h2 align="center"> <img src="resources/logo.jpg" width="800"> </h2> <h3 align="center"> A Comprehensive Benchmark Suite for Topological Deep Learning </h3> <p align="center"> Assess how your model compares against state-of-the-art topological neural networks. </p> <div align="center">


</div> <p align="center"> <a href="#pushpin-overview">Overview</a> • <a href="#jigsaw-get-started">Get Started</a> • <a href="#anchor-tutorials">Tutorials</a> • <a href="#gear-neural-networks">Neural Networks</a> • <a href="#rocket-liftings-and-transforms">Liftings and Transforms</a> • <a href="#books-datasets">Datasets</a> • <a href="#mag-references">References</a> </p>

🏆 The TAG-DS Topological Deep Learning Challenge 2025 has concluded! A huge shoutout to all participants. Check out the winners and honorable mentions on the challenge website.


:pushpin: Overview

TopoBench (TB) is a modular Python library designed to standardize benchmarking and accelerate research in Topological Deep Learning (TDL). In particular, TB makes it possible to train and compare the performance of a wide range of Topological Neural Networks (TNNs) across different topological domains, where by topological domain we refer to a graph, a simplicial complex, a cellular complex, or a hypergraph. For detailed information, please refer to the TopoBench: A Framework for Benchmarking Topological Deep Learning paper.

<p align="center"> <img src="resources/workflow.jpg" width="700"> </p>

The main pipeline trains and evaluates a wide range of state-of-the-art TNNs and Graph Neural Networks (GNNs) (see <a href="#gear-neural-networks">:gear: Neural Networks</a>) on numerous and varied datasets and benchmark tasks (see <a href="#books-datasets">:books: Datasets</a> ). Additionally, the library offers the ability to transform, i.e. lift, each dataset from one topological domain to another (see <a href="#rocket-liftings-and-transforms">:rocket: Liftings and Transforms</a>), enabling for the first time an exhaustive inter-domain comparison of TNNs.
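To make the notion of a lifting concrete, here is a minimal plain-Python sketch (illustrative only, not the TopoBench API) that lifts a graph to a simplicial complex by promoting every 3-clique to a 2-simplex:

```python
from itertools import combinations

def lift_to_simplicial(nodes, edges):
    # Toy graph-to-simplicial lifting: nodes become 0-simplices, edges
    # 1-simplices, and every triangle (3-clique) is promoted to a 2-simplex.
    edge_set = {frozenset(e) for e in edges}
    triangles = [
        set(t) for t in combinations(sorted(nodes), 3)
        if all(frozenset(p) in edge_set for p in combinations(t, 2))
    ]
    return {
        "0-simplices": list(nodes),
        "1-simplices": [set(e) for e in edge_set],
        "2-simplices": triangles,
    }

# A 4-node graph with one triangle (0-1-2) and a dangling edge (2-3).
complex_ = lift_to_simplicial([0, 1, 2, 3], [(0, 1), (1, 2), (0, 2), (2, 3)])
print(complex_["2-simplices"])  # → [{0, 1, 2}]
```

Real liftings in TopoBench are richer (they also carry features and neighborhood structures across domains), but the core idea is the same: infer higher-order cells from the input graph.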

:jigsaw: Get Started

🚀 Quick Install (Recommended)

TopoBench now uses uv, an extremely fast Python package manager and resolver. This allows for nearly instantaneous environment setup and reproducible builds.

  1. Install uv

  2. Clone and Navigate:

    git clone git@github.com:geometric-intelligence/topobench.git
    cd TopoBench
    
  3. Initialize Environment: Use our centralized setup script to create a Python 3.11 virtual environment and select the correct wheels for your hardware (CPU or a specific CUDA version).

    # Usage: source uv_env_setup.sh [cpu|cu118|cu121]
    source uv_env_setup.sh cpu
    

    This script performs the following:

    • Creates a .venv using Python 3.11.
    • Dynamically configures pyproject.toml to point to the correct PyTorch and PyG (PyTorch Geometric) wheels for your platform.
    • Generates a precise uv.lock file and syncs all dependencies.

🛠️ Manual Environment Setup

If you prefer to manage the environment manually or are integrating into an existing workflow:

# Create a virtual environment with strict versioning
uv venv --python 3.11
source .venv/bin/activate

# Sync dependencies including all extras (dev, test, and doc)
uv sync --all-extras

🚄 Run Training Pipeline

Once the environment is active, you can launch the TopoBench pipeline:

# Using the activated virtual environment
python -m topobench 

# Or execute directly via uv without manual activation
uv run python -m topobench

✅ Verify Installation

You can verify that the correct versions of Torch and CUDA are detected by running:

python -c "import torch; print(f'Torch: {torch.__version__} | CUDA: {torch.version.cuda}')"

Customizing Experiment Configuration

Thanks to its Hydra-based configuration, you can easily override the default experiment configuration through the command line. For instance, the model and dataset can be selected as:

python -m topobench model=cell/cwn dataset=graph/MUTAG

Remark: By default, our pipeline identifies the source and destination topological domains, and applies a default lifting between them if required.

Transforms allow you to modify your data before processing. There are two main ways to configure transforms: individual transforms and transform groups.

<details> <summary><strong>Configuring Individual Transforms</strong></summary>

When configuring a single transform, follow these steps:

  1. Choose a desired transform (e.g., a lifting transform).
  2. Identify the relative path to the transform configuration.

The folder structure for transforms is as follows:

├── configs
│   ├── data_manipulations
│   ├── transforms
│   │   └── liftings
│   │       ├── graph2cell
│   │       ├── graph2hypergraph
│   │       └── graph2simplicial

To override the default transform, use the following command structure:

python -m topobench model=<model_type>/<model_name> dataset=<data_type>/<dataset_name> transforms=[<transform_path>/<transform_name>]

For example, to use the discrete_configuration_complex lifting with the cell/cwn model:

python -m topobench model=cell/cwn dataset=graph/MUTAG transforms=[liftings/graph2cell/discrete_configuration_complex]
</details> <details> <summary><strong>Configuring Transform Groups</strong></summary>

For more complex scenarios, such as combining multiple data manipulations, use transform groups:

  1. Create a new configuration file in the configs/transforms directory (e.g., custom_example.yaml).
  2. Define the transform group in the YAML file:

    defaults:
      - data_manipulations@data_transform_1: identity
      - data_manipulations@data_transform_2: node_degrees
      - data_manipulations@data_transform_3: one_hot_node_degree_features
      - liftings/graph2cell@graph2cell_lifting: cycle

Important: When composing multiple data manipulations, use the @ operator to assign unique names to each transform.

  3. Run the experiment with the custom transform group:

    python -m topobench model=cell/cwn dataset=graph/ZINC transforms=custom_example

This approach allows you to create complex transform pipelines, including multiple data manipulations and liftings, in a single configuration file.
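To see why the unique names matter, here is a plain-Python sketch (hypothetical transform functions, not the TopoBench API) of a transform group as an ordered mapping applied in sequence; without `@`-assigned unique keys, a second `data_manipulations` entry would silently overwrite the first:

```python
def identity(x):
    # Pass-through transform, standing in for the 'identity' manipulation.
    return x

def node_degrees(x):
    # Hypothetical stand-in: annotate the data with a "degrees" field
    # computed from an adjacency-list representation.
    return {**x, "degrees": [len(nbrs) for nbrs in x["adj"]]}

# Unique keys per transform, mirroring Hydra's '@' package operator.
transform_group = {
    "data_transform_1": identity,
    "data_transform_2": node_degrees,
}

def apply_group(data, group):
    # Transforms are applied in declaration order.
    for _name, fn in group.items():
        data = fn(data)
    return data

data = {"adj": [[1], [0, 2], [1]]}  # path graph 0-1-2
print(apply_group(data, transform_group)["degrees"])  # → [1, 2, 1]
```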

</details>

By mastering these configuration options, you can easily customize your experiments to suit your specific needs, from simple model and dataset selections to complex data transformation pipelines.

---

Additional Notes

  • Fine-Grained Configuration: The same CLI override mechanism applies when modifying finer configurations within a CONFIG GROUP.
    Please refer to the official hydra documentation for further details.
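As a rough illustration of how such an override works, the following sketch applies a Hydra-style dotted `a.b.c=value` string to a nested dict; the config keys shown are hypothetical, not actual TopoBench fields:

```python
def apply_override(config, override):
    # Parse a Hydra-style "a.b.c=value" override and set the value
    # at the corresponding path in a nested dict.
    path, _, value = override.partition("=")
    *parents, leaf = path.split(".")
    node = config
    for key in parents:
        node = node[key]
    node[leaf] = value
    return config

cfg = {"model": {"backbone": {"n_layers": "2"}}}  # hypothetical config group
apply_override(cfg, "model.backbone.n_layers=4")
print(cfg["model"]["backbone"]["n_layers"])  # → 4
```

Hydra additionally handles type conversion, interpolation, and validation; this sketch only shows the path-resolution idea.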

:bike: Experiments Reproducibility

To reproduce Table 1 from the TopoBench: A Framework for Benchmarking Topological Deep Learning paper, please run the following command:

bash scripts/reproduce.sh

Remark: We have additionally provided a public W&B (Weights & Biases) project with logs for the corresponding runs (updated on June 11, 2024).

:anchor: Tutorials

Explore our tutorials for further details on how to add new datasets, transforms/liftings, and benchmark tasks.

:gear: Neural Networks

We list the neural networks trained and evaluated by TopoBench, organized by the topological domain over which they operate: graph, simplicial complex, cellular complex or hypergraph. Many of these neural networks were originally implemented in TopoModelX.

Pointclouds

| Model | Reference |
| --- | --- |
| DeepSets | Deep Sets |
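For context, DeepSets models a permutation-invariant set function f(X) = ρ(Σₓ φ(x)): embed each element with φ, sum-pool, then transform with ρ. A toy sketch with arbitrary (non-learned) choices of φ and ρ:

```python
def deepsets(xs, phi=lambda x: x * x, rho=lambda s: s + 1.0):
    # Permutation-invariant by construction: sum-pooling ignores order.
    return rho(sum(phi(x) for x in xs))

assert deepsets([1.0, 2.0]) == deepsets([2.0, 1.0])  # order does not matter
print(deepsets([1.0, 2.0]))  # → 6.0
```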

Graphs

| Model | Reference |
| --- | --- |
| GAT | Graph Attention Networks |
| GIN | How Powerful are Graph Neural Networks? |
| GCN | Semi-Supervised Classification with Graph Convolutional Networks |
| GraphMLP | Graph-MLP: Node Classification without Message Passing in Graph |
| GPS | Recipe for a General, Powerful, Scalable Graph Transformer |
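As a refresher on what these graph models compute, a single GCN layer applies H' = ReLU(Â H W), where Â is the symmetrically normalized adjacency with self-loops. A dependency-free sketch on a two-node graph (plain lists instead of tensors, for illustration only):

```python
import math

def gcn_layer(adj, feats, weight):
    # H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), written with plain lists.
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    h = matmul(matmul(norm, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]

adj = [[0, 1], [1, 0]]   # single edge between two nodes
feats = [[1.0], [0.0]]   # one scalar feature per node
weight = [[1.0]]         # identity-like 1x1 weight
out = gcn_layer(adj, feats, weight)
print(out)  # → [[0.5], [0.5]]
```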

Simplicial Complexes

| Model | Reference |
| --- | --- |
