
FluidGym

Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control


<p align="center"> <a href="./docs/images/logo_lm.png#gh-light-mode-only"> <img src="./docs/source/_static/img/logo_lm.png#gh-light-mode-only" alt="FluidGym Logo" width="50%"/> </a> <a href="./docs/images/logo_dm.png#gh-dark-mode-only"> <img src="./docs/source/_static/img/logo_dm.png#gh-dark-mode-only" alt="FluidGym Logo" width="50%"/> </a> </p> <table style="border-collapse: collapse; border: none;"> <tr> <td style="border: none; padding: 0;"> <img src="docs/build/html/_static/img/gifs/cylinder.gif" style="max-width: 100%; height: auto;" /> </td> <td style="border: none; padding: 0;"> <img src="docs/build/html/_static/img/gifs/rbc.gif" style="max-width: 100%; height: auto;" /> </td> <td style="border: none; padding: 0;"> <img src="docs/build/html/_static/img/gifs/airfoil.gif" style="max-width: 100%; height: auto;" /> </td> <td style="border: none; padding: 0;"> <img src="docs/build/html/_static/img/gifs/tcf.gif" style="max-width: 100%; height: auto;" /> </td> </tr> </table> <div align="center">


</div> <div align="center"> <h3> <a href="#-installation">Installation</a> | <a href="#-getting-started">Getting Started</a> | <a href="https://safe-autonomous-systems.github.io/fluidgym">Documentation</a> | <a href="https://arxiv.org/abs/2601.15015">Paper</a> | <a href="#-license-&-citation">License & Citation</a> </h3> </div>

Key Features

  • Standalone, GPU-accelerated fluid dynamics implemented fully in PyTorch — no external CFD solvers required.
  • Fully differentiable environments, enabling both reinforcement learning and gradient-based control methods.
  • Gymnasium-like API with seamless integration into common RL frameworks.
  • Standardized benchmarks with fixed train/validation/test splits for fair and reproducible evaluation.
  • Diverse AFC environments (2D & 3D) with multiple difficulty levels, covering different regimes.
  • Single-agent and multi-agent support for centralized and decentralized control.
  • Reference baselines and experiments provided for the widely used RL algorithms PPO and SAC.
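Because the environments are differentiable, control inputs can be optimized directly by gradient descent instead of (or alongside) RL. The toy example below is not FluidGym code; it just illustrates the idea on a made-up quadratic drag-vs-actuation trade-off, with the gradient derived by hand standing in for autograd:

```python
# Toy illustration of gradient-based control (NOT FluidGym code).
# Pretend drag responds linearly to a jet amplitude u, and actuation costs energy:
#   J(u) = (d0 - k*u)**2 + c*u**2
d0, k, c = 1.0, 0.8, 0.1        # made-up baseline drag, actuation gain, energy cost

def cost(u):
    return (d0 - k * u) ** 2 + c * u ** 2

def grad(u):
    # Hand-derived dJ/du -- the role autograd plays in a differentiable environment.
    return -2 * k * (d0 - k * u) + 2 * c * u

u, lr = 0.0, 0.1
for _ in range(200):            # plain gradient descent on the control input
    u -= lr * grad(u)

u_opt = k * d0 / (k ** 2 + c)   # closed-form optimum, for comparison
print(round(u, 4), round(u_opt, 4))
```

With a real differentiable environment, `grad` would be obtained by backpropagating the episode cost through the simulator rollout rather than written by hand.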

Models & Data

  • All trained models are publicly available on HuggingFace.
  • Complete training and test datasets with results for all experimental runs are released for transparent comparison and reproducibility via our HuggingFace dataset.

Introducing FluidGym v0.1

We are happy to announce that FluidGym v0.1 comes with many updates and improvements, mainly focusing on more convenient usage and integration with RL frameworks:

  • Unified SARL and MARL interface: Previously, MARL environments exposed public reset_marl() and step_marl() functions. These have been removed and integrated directly into reset() and step(). When creating an environment via fluidgym.make(), you can now pass a use_marl=True flag to enable MARL and use reset() and step() as before; the only difference is that they now return a batch of observations and rewards. The PettingZoo and SB3 integrations have been updated accordingly.
  • Gymnasium spaces: FluidEnv now has action_space and observation_space attributes consistent with Gymnasium. Additionally, the previous flattened observations have been replaced by a Dict observation space containing individual fields, such as the velocity and pressure fields, as individual keys. Furthermore, the individual observations are now shaped according to the spatial structure of the sensors, enabling methods that exploit the spatial structure of the domain, e.g. CNNs, equivariant networks, etc.
  • Environment wrappers: Following the new observation spaces, we introduce FluidWrappers, namely FlattenObservation, ObsExtraction, ActionNoise, and SensorNoise. The general wrapper interface enables easy integration of new wrappers as needed.
  • Parallelization: Using the new FluidEnvLike protocol, the ParallelFluidEnv can now seamlessly be used with all FluidGym wrappers and integration wrappers. We updated the example to show how you can use FluidGym across multiple GPUs.
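As a sketch of what a protocol-based parallel wrapper looks like, the stand-in below steps several toy environments in lockstep and batches their outputs. The class and method names are illustrative only, not the actual ParallelFluidEnv or FluidEnvLike API:

```python
# Minimal stand-in for a parallel-env wrapper (names are illustrative,
# not the real FluidGym API).
class ToyEnv:
    """Tiny counter 'environment' following the reset()/step() contract."""
    def reset(self, seed=None):
        self.t = 0
        return 0.0, {}                      # obs, info

    def step(self, action):
        self.t += 1
        obs, reward = float(self.t), -abs(action)
        term, trunc = self.t >= 5, False    # terminate after 5 steps
        return obs, reward, term, trunc, {}

class ParallelEnv:
    """Steps several envs in lockstep and batches their outputs."""
    def __init__(self, envs):
        self.envs = envs

    def reset(self, seed=None):
        results = [env.reset(seed) for env in self.envs]
        return [obs for obs, _ in results], [info for _, info in results]

    def step(self, actions):
        out = [env.step(a) for env, a in zip(self.envs, actions)]
        # Transpose per-env 5-tuples into batched obs, rewards, terms, truncs, infos.
        return tuple(list(col) for col in zip(*out))

penv = ParallelEnv([ToyEnv() for _ in range(4)])
obs, infos = penv.reset(seed=0)
obs, rewards, terms, truncs, infos = penv.step([0.0, 0.1, -0.2, 0.3])
print(obs)   # one observation per env
```

Because any object satisfying the reset()/step() protocol can be wrapped, the same pattern composes with observation and action wrappers.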

Important: The FlattenObservation wrapper ensures direct compatibility with our models on HuggingFace (trained with FluidGym v0.0.2). If you want to use these models, make sure to install FluidGym v0.0.2 or apply the FlattenObservation wrapper. If you encounter any issues, please report them via an Issue on GitHub. Thank you!
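To make concrete what flattening a Dict observation means, here is a toy sketch: each field is an array-like structure, and the wrapper concatenates all fields into one flat vector. The field shapes and the sorted-key ordering below are made up for illustration; the real wrapper's field order and shapes may differ:

```python
# Sketch of what a FlattenObservation-style wrapper does to a Dict observation
# (nested lists stand in for tensors; NOT the real wrapper implementation).
def flatten_field(field, out):
    """Recursively append scalars from an arbitrarily nested list."""
    if isinstance(field, (int, float)):
        out.append(float(field))
    else:
        for item in field:
            flatten_field(item, out)

def flatten_observation(obs):
    """Concatenate all fields (here: in sorted key order) into one flat vector."""
    flat = []
    for key in sorted(obs):
        flatten_field(obs[key], flat)
    return flat

obs = {
    "velocity": [[[0.1, 0.2], [0.3, 0.4]],   # 2 components on a 2x2 sensor grid
                 [[0.5, 0.6], [0.7, 0.8]]],
    "pressure": [[1.0, 2.0], [3.0, 4.0]],    # scalar field on the same grid
}
flat = flatten_observation(obs)
print(len(flat))   # 2*2*2 velocity values + 2*2 pressure values
```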


Installation

📦 Installation from PyPI

  1. Ensure the correct PyTorch version is installed (compatible with CUDA 12.8):
pip install torch --index-url https://download.pytorch.org/whl/cu128
  2. Install FluidGym:
pip install fluidgym

🐳 Using Docker

Instead of installing FluidGym, you can use one of our Docker containers:

Both containers come with the following Miniconda environments:

  • py310: Python 3.10
  • py311: Python 3.11
  • py312: Python 3.12
  • py313: Python 3.13

Start the containers with:

docker run -it --gpus all fluidgym-runtime bash
docker run -it --gpus all fluidgym-devel bash

🧱 Build from Source

  1. Create a new conda environment and activate it:
conda create -n fluidgym python=3.10
conda activate fluidgym
  2. Install gcc:
conda install pip "gcc_linux-64>=6.0,<=11.5" "gxx_linux-64>=6.0,<=11.5"
  3. Install the latest PyTorch for CUDA 12.8 via pip:
pip install torch --index-url https://download.pytorch.org/whl/cu128
  4. Install the matching CUDA toolkit via conda:
conda install cuda-toolkit=12.8 -c nvidia/label/cuda-12.8.1
  5. Clone the repository and enter the directory, then compile the custom CUDA kernels and install the package (this might take several minutes):
make install

Getting Started

For an easy start, refer to our documentation and the examples directory. FluidGym provides a gymnasium-like interface that can be used as follows:

import fluidgym

env = fluidgym.make(
    "CylinderJet2D-easy-v0",
)
obs, info = env.reset(seed=42)

for _ in range(50):
    action = env.sample_action()
    obs, reward, term, trunc, info = env.step(action)
    env.render()

    if term or trunc:
        break
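With use_marl=True (see the v0.1 notes above), reset() and step() return one observation and one reward per agent instead of single values. The stub environment below mimics that contract so the loop shape is concrete; it is a stand-in, not FluidGym itself, and its names (StubMarlEnv, n_agents) are made up:

```python
# Stub mimicking the MARL reset()/step() contract (NOT FluidGym itself):
# observations and rewards come back as one entry per agent.
import random

class StubMarlEnv:
    n_agents = 3

    def reset(self, seed=None):
        self.rng = random.Random(seed)
        self.t = 0
        return [self.rng.random() for _ in range(self.n_agents)], {}

    def sample_action(self):
        return [self.rng.uniform(-1, 1) for _ in range(self.n_agents)]

    def step(self, actions):
        self.t += 1
        obs = [self.rng.random() for _ in range(self.n_agents)]
        rewards = [-abs(a) for a in actions]     # one reward per agent
        term, trunc = self.t >= 10, False
        return obs, rewards, term, trunc, {}

env = StubMarlEnv()
obs, info = env.reset(seed=42)
returns = [0.0] * env.n_agents                   # per-agent episode returns
for _ in range(50):
    obs, rewards, term, trunc, info = env.step(env.sample_action())
    returns = [r + x for r, x in zip(returns, rewards)]
    if term or trunc:
        break
print(len(obs), len(returns))   # → 3 3
```

The single-agent loop above carries over unchanged; only the shapes of the returned observations and rewards differ.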

License & Citation

This repository is published under the MIT license. If you use FluidGym in your work, please cite us:

@misc{becktepe-fluidgym26,
      title={Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control}, 
      author={Jannis Becktepe and Aleksandra Franz and Nils Thuerey and Sebastian Peitz},
      year={2026},
      eprint={2601.15015},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2601.15015}, 
      note={GitHub: https://github.com/safe-autonomous-systems/fluidgym}, 
}