
LagrangeBench: A Lagrangian Fluid Mechanics Benchmarking Suite


<div align="center">
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="docs/lagrangebench_logo.svg">
  <source media="(prefers-color-scheme: light)" srcset="docs/lagrangebench_logo.svg">
  <img alt="LagrangeBench Logo: Lagrangian Fluid Mechanics Benchmarking Suite" src="docs/lagrangebench_logo.svg" width=550pt>
</picture>

Paper · Docs · PyPI · Open In Colab · Discord

Tests · CodeCov · License

</div>

NeurIPS page with video and slides here.

Table of Contents

  1. Installation
  2. Usage
  3. Datasets
  4. Pretrained Models
  5. Directory Structure
  6. Contributing
  7. Citation

Installation

Standalone library

Install the core lagrangebench library from PyPI with

python3.10 -m venv venv
source venv/bin/activate
pip install lagrangebench --extra-index-url=https://download.pytorch.org/whl/cpu

Note that by default lagrangebench is installed without JAX GPU support. For GPU support, follow the instructions in the GPU support section below.

Clone

Clone this GitHub repository

git clone https://github.com/tumaer/lagrangebench.git
cd lagrangebench

Install the dependencies with Poetry (>=1.6.0)

poetry install --only main

Alternatively, a requirements file is provided. It directly installs the CUDA version of JAX.

pip install -r requirements_cuda.txt

For a CPU-only version of the requirements file, use docs/requirements.txt.

GPU support

To run JAX on GPU, follow Installing JAX, or in general run

pip install -U "jax[cuda12]==0.4.29"

Note: as of 27.06.2024, to make our GNN models deterministic on GPUs, you need to set os.environ["XLA_FLAGS"] = "--xla_gpu_deterministic_ops=true". However, all current models rely on scatter_sum, and in deterministic mode this operation seems to be slower than a plain Python for-loop, see #17844 and #10674.
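As a minimal sketch of the note above: the flag must be placed in the environment before JAX is first imported, otherwise XLA never sees it.

```python
import os

# Set the XLA flag BEFORE the first `import jax`; setting it after
# import has no effect, because XLA reads it at initialization.
os.environ["XLA_FLAGS"] = "--xla_gpu_deterministic_ops=true"

# import jax  # safe to import JAX from this point on
```

On CPU-only installs the flag is harmless; it only changes behavior on GPU backends.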

macOS

Currently, only the CPU installation works. You will need to change a few small things to get it going:

  • Clone installation: in pyproject.toml change the torch version from 2.1.0+cpu to 2.1.0. Then, remove the poetry.lock file and run poetry install --only main.
  • Configs: You will need to set dtype=float32 and train.num_workers=0.

Although the current jax-metal==0.0.5 library supports JAX in general, a padding-related feature used by jax-md appears to be missing; see this issue.

Usage

Standalone benchmark library

A general tutorial is provided in the example notebook "Training GNS on the 2D Taylor Green Vortex" under ./notebooks/tutorial.ipynb on the LagrangeBench repository. The notebook covers the basics of LagrangeBench, such as loading a dataset, setting up a case, training a model from scratch and evaluating its performance.

Running in a local clone (main.py)

Alternatively, experiments can be set up with main.py, driven by extensive YAML config files and CLI arguments (see configs/). By default, arguments take priority in the order: 1) passed CLI arguments, 2) YAML config, and 3) defaults.py (lagrangebench defaults).
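The precedence rule can be pictured as a dictionary merge where later sources win; the keys below are illustrative, not the actual lagrangebench config schema.

```python
# Illustrative sketch of the override order (keys are hypothetical).
defaults = {"mode": "train", "dtype": "float64"}  # defaults.py
yaml_config = {"dtype": "float32"}                # e.g. configs/rpf_2d/gns.yaml
cli_overrides = {"mode": "infer"}                 # passed on the command line

# Later dicts override earlier ones: CLI > YAML > defaults.
config = {**defaults, **yaml_config, **cli_overrides}
print(config)  # {'mode': 'infer', 'dtype': 'float32'}
```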

When loading a saved model with load_ckp the config from the checkpoint is automatically loaded and training is restarted. For more details check the runner.py file.

Train

For example, to start a GNS run from scratch on the RPF 2D dataset use

python main.py config=configs/rpf_2d/gns.yaml

Some model presets can be found in ./configs/.

If mode=all is provided, then training (mode=train) and subsequent inference (mode=infer) on the test split will be run in one go.

Restart training

To restart training from the last checkpoint in load_ckp use

python main.py load_ckp=ckp/gns_rpf2d_yyyymmdd-hhmmss

Inference

To evaluate a trained model from load_ckp on the test split (test=True) use

python main.py load_ckp=ckp/gns_rpf2d_yyyymmdd-hhmmss/best rollout_dir=rollout/gns_rpf2d_yyyymmdd-hhmmss/best mode=infer test=True

If the default eval.infer.out_type=pkl is active, then the generated trajectories and a metricsYYYY_MM_DD_HH_MM_SS.pkl file will be written to eval.rollout_dir. The metrics file contains all eval.infer.metrics properties for each generated rollout.
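Since the metrics file is a plain pickle, it can be inspected with the standard library; the per-rollout layout below is an assumption for illustration only, so check the keys of your own file.

```python
import pickle

# Hypothetical metrics dict, mimicking one entry per generated rollout.
metrics = {"rollout_0": {"mse": 4.0e-6, "sinkhorn": 2.5e-7}}
with open("metrics_example.pkl", "wb") as f:
    pickle.dump(metrics, f)

# Reading a metrics .pkl back from disk (e.g. from eval.rollout_dir).
with open("metrics_example.pkl", "rb") as f:
    loaded = pickle.load(f)
for name, m in loaded.items():
    print(name, m)
```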

Notebooks

We provide three notebooks that showcase LagrangeBench functionalities in the ./notebooks/ directory.

Datasets

The datasets are hosted on Zenodo under the DOI: 10.5281/zenodo.10021925. If a dataset is not found in dataset.src, the data is downloaded automatically. Alternatively, to manually download the datasets, use the download_data.sh shell script with either a specific dataset name or "all". Namely

  • Taylor Green Vortex 2D: bash download_data.sh tgv_2d datasets/
  • Reverse Poiseuille Flow 2D: bash download_data.sh rpf_2d datasets/
  • Lid Driven Cavity 2D: bash download_data.sh ldc_2d datasets/
  • Dam break 2D: bash download_data.sh dam_2d datasets/
  • Taylor Green Vortex 3D: bash download_data.sh tgv_3d datasets/
  • Reverse Poiseuille Flow 3D: bash download_data.sh rpf_3d datasets/
  • Lid Driven Cavity 3D: bash download_data.sh ldc_3d datasets/
  • All: bash download_data.sh all datasets/

Pretrained Models

We provide pretrained model weights of our default GNS and SEGNN models on each of the 7 LagrangeBench datasets. You can download and run the checkpoints given below. In the table, we also provide the 20-step error measures on the full test split.

| Dataset | Model | MSE<sub>20</sub> | Sinkhorn | MSE<sub>E<sub>kin</sub></sub> |
| ------- | ----------- | ------ | ------ | ------ |
| 2D TGV  | GNS-10-128  | 5.9e-6 | 3.2e-7 | 4.9e-7 |
|         | SEGNN-10-64 | 4.4e-6 | 2.1e-7 | 5.0e-7 |
| 2D RPF  | GNS-10-128  | 4.0e-6 | 2.5e-7 | 2.7e-5 |
|         | SEGNN-10-64 | 3.4e-6 | 2.5e-7 | 1.4e-5 |
| 2D LDC  | GNS-10-128  | 1.5e-5 | 1.1e-6 | 6.1e-7 |
|         | SEGNN-10-64 | 2.1e-5 | 3.7e-6 | 1.6e-5 |
| 2D DAM  | GNS-10-128  | 3.1e-5 | 1.4e-5 | 1.1e-4 |
|         | SEGNN-10-64 |        |        |        |

(The remaining rows of this table were truncated in the source.)
