
astronomix - differentiable mhd for astrophysics in JAX

Project Status: Active – The project has reached a stable, usable state and is being actively developed.

astronomix (formerly jf1uids) is a differentiable hydrodynamics and magnetohydrodynamics code written in JAX with a focus on astrophysical applications. astronomix is easy to use and well-suited for rapid method development; it scales to multiple GPUs, and its differentiability opens the door to gradient-based inverse modeling and sampling as well as surrogate and solver-in-the-loop training.

Features

- [x] 1D, 2D and 3D hydrodynamics and magnetohydrodynamics simulations scaling to multiple GPUs
- [x] a 5th order finite difference constrained transport WENO MHD scheme following HOW-MHD by Seo & Ryu (2023), as well as the provably divergence-free and provably positivity-preserving finite volume approach of Pang & Wu (2024); the WENO scheme is also available standalone for hydrodynamics
- [x] for finite volume simulations, the basic Lax-Friedrichs, HLL and HLLC Riemann solvers as well as the HLLC-LM (Fleischmann et al., 2020) and the HYBRID-HLLC and AM-HLLC (Hu et al., 2025) variants, successors to HLLC-LM
- [x] a novel (possibly) conservative self-gravity scheme with improved stability at strong discontinuities
- [x] mass- and energy-conserving spherically symmetric simulations based on the scheme of Crittenden & Balachandar (2018)
- [x] forward- and reverse-mode differentiable, with adaptive timestepping
- [x] modules for turbulent driving, a simple stellar wind, and simple radiative cooling
- [x] easily extensible; all code is open source
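The sketch below is not astronomix's API; it is a toy pure-JAX illustration of what "differentiable" buys you in practice. A first-order upwind advection step is rolled out in time, and `jax.grad` propagates through the whole rollout to give the sensitivity of a mismatch loss with respect to the wave speed, exactly the kind of gradient used for inverse modeling:

```python
import jax
import jax.numpy as jnp

def step(u, c):
    # one first-order upwind step of 1D linear advection
    # du/dt + c du/dx = 0 with periodic boundaries
    dx, dt = 0.01, 0.004
    return u - c * dt / dx * (u - jnp.roll(u, 1))

def loss(c, u0):
    # roll out 50 steps and compare against a shifted target profile
    u = u0
    for _ in range(50):
        u = step(u, c)
    return jnp.sum((u - jnp.roll(u0, 20)) ** 2)

x = jnp.linspace(0.0, 1.0, 100, endpoint=False)
u0 = jnp.exp(-100.0 * (x - 0.3) ** 2)

# gradient of the loss with respect to the wave speed,
# differentiating through the entire time integration
g = jax.grad(loss)(1.0, u0)
```

In astronomix the same idea applies with the full solver in place of the toy `step`, enabling gradient-based parameter optimization and solver-in-the-loop training.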

Installation

astronomix can be installed via pip:

pip install astronomix

Note that if JAX is not yet installed, only the CPU version of JAX will be installed as a dependency. For a GPU-compatible installation of JAX, please refer to the JAX installation guide.
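For example, on a machine with a CUDA 12 capable GPU, a typical sequence (the `jax[cuda12]` extra follows the current JAX installation guide; wheel names can change between JAX releases, so check the guide first) would be:

```shell
pip install astronomix
pip install -U "jax[cuda12]"
```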

Hello World! Your first astronomix simulation

Below is a minimal example of a 1D hydrodynamics shock tube simulation using astronomix.

import jax.numpy as jnp
from astronomix import (
    SimulationConfig, SimulationParams,
    get_helper_data, finalize_config,
    get_registered_variables, construct_primitive_state,
    time_integration
)

# the SimulationConfig holds static 
# configuration parameters
config = SimulationConfig(
    box_size = 1.0,
    num_cells = 101,
    progress_bar = True
)

# the SimulationParams can be changed
# without causing re-compilation
params = SimulationParams(
    t_end = 0.2,
)

# the variable registry allows for the principled
# access of simulation variables from the state array
registered_variables = get_registered_variables(config)

# next we set up the initial state using the helper data
helper_data = get_helper_data(config)
shock_pos = 0.5
r = helper_data.geometric_centers
rho = jnp.where(r < shock_pos, 1.0, 0.125)
u = jnp.zeros_like(r)
p = jnp.where(r < shock_pos, 1.0, 0.1)

# get initial state
initial_state = construct_primitive_state(
    config = config,
    registered_variables = registered_variables,
    density = rho,
    velocity_x = u,
    gas_pressure = p,
)

# finalize and check the config
config = finalize_config(config, initial_state.shape)

# now we run the simulation
final_state = time_integration(initial_state, config, params, registered_variables)

# the final_state holds the final primitive state, the 
# variables can be accessed via the registered_variables
rho_final = final_state[registered_variables.density_index]
u_final = final_state[registered_variables.velocity_index]
p_final = final_state[registered_variables.pressure_index]

You've just run your first astronomix simulation! You can continue with the notebooks below; we have also prepared a more advanced use case (a stellar wind in driven MHD turbulence), which you can open in Colab.
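The comments in the example above hint at why the configuration is split in two: in JAX, changing a static jit argument forces a retrace (and recompilation), while changing a traced argument does not. A minimal pure-JAX illustration of that mechanism (not astronomix code; the counter idiom works because the Python body only executes while tracing):

```python
import jax
import jax.numpy as jnp
from functools import partial

traces = []

@partial(jax.jit, static_argnames="num_cells")
def run(t_end, num_cells):
    # this append only happens when JAX (re)traces the function,
    # so `traces` records one entry per compilation
    traces.append(num_cells)
    x = jnp.linspace(0.0, 1.0, num_cells)
    return t_end * x

run(0.2, 101)   # first call: traces and compiles
run(0.4, 101)   # new t_end (traced argument): cache hit, no retrace
run(0.2, 201)   # new num_cells (static argument): retraces
```

After these three calls, `traces` holds two entries, one per compilation. This is why changing SimulationParams between runs is cheap, while changing SimulationConfig triggers a recompile.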

Notebooks for Getting Started

Showcase

(Figure captions; the animations are available in the GitHub repository.)

- Wind in driven turbulence: magnetohydrodynamics simulation with driven turbulence at a resolution of 512³ cells in a fifth order CT MHD scheme, run on 4 H200 GPUs.
- Wind in driven turbulence: magnetohydrodynamics simulation with driven turbulence and a stellar wind at a resolution of 512³ cells in a fifth order CT MHD scheme, run on 4 H200 GPUs.
- Orszag-Tang vortex and 3D collapse.
- Gradients through a stellar wind simulation.
- Novel (possibly) conservative self-gravity scheme, stable at strong discontinuities.
- Wind parameter optimization.

Scaling tests

5th order finite difference vs 2nd order finite volume MHD schemes

Our first scaling tests cover the two MHD schemes implemented in astronomix: the 2nd order finite volume (fv_mhd) scheme and the 5th order finite difference (fd_mhd) scheme.

The following results were obtained on a single NVIDIA H200 GPU; the test, run at different resolutions, was an MHD blast wave test (see the code).

(Figure: runtime benchmarking of the fv_mhd and fd_mhd schemes on a single NVIDIA H200 GPU.)

The finite volume scheme is roughly an order of magnitude faster at the same resolution.

Considering accuracy per computational cost, however, taking the 512³ fd_mhd simulation as the reference solution, the 5th order finite difference scheme is more efficient.

(Figure: accuracy versus computational cost for the fv_mhd and fd_mhd schemes on a single NVIDIA H200 GPU.)

The finite difference scheme achieves higher accuracy in less computation time.
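As a sketch of how such an accuracy measurement can be set up (our illustration, not astronomix's actual benchmarking code), one can block-average the high-resolution reference onto each coarse grid and take a relative L1 norm, shown here in 1D:

```python
import jax.numpy as jnp

def relative_l1_error(coarse, reference):
    # downsample the high-resolution reference onto the coarse grid
    # by block-averaging, then take a relative L1 norm
    f = reference.shape[0] // coarse.shape[0]
    ref_ds = reference.reshape(coarse.shape[0], f).mean(axis=1)
    return jnp.sum(jnp.abs(coarse - ref_ds)) / jnp.sum(jnp.abs(ref_ds))

# mock data: a 512-cell "reference" and a perturbed 128-cell "coarse run"
x_hi = jnp.linspace(0.0, 1.0, 512, endpoint=False)
ref = jnp.sin(2.0 * jnp.pi * x_hi)
coarse = ref.reshape(128, 4).mean(axis=1) + 0.01
err = relative_l1_error(coarse, ref)
```

In 3D the same idea applies with block-averaging over all three axes; the error is then plotted against the measured runtime of each scheme.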

Multi-GPU scaling

We have tested the multi-GPU scaling of the 5th order finite difference MHD scheme, comparing the runtime of the same simulation on 1 and 4 NVIDIA H200 GPUs (strong scaling).

(Figure: multi-GPU scaling of the 5th order finite difference MHD scheme on 4 NVIDIA H200 GPUs.)

We reach speedups of up to ~3.5x at 512³ resolution on 4 GPUs compared to a single-GPU run. At higher resolutions we would expect to eventually approach perfect scaling; the speedup at 600³ cells in our test was somewhat lower than at 512³.
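Multi-device execution in JAX codes like this typically works by placing arrays with a sharding and letting jit/XLA partition the single compiled program across the device mesh. A toy sketch of that pattern (not astronomix internals; it runs on however many devices are present, including a single CPU):

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# build a 1D device mesh over all available devices
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("x",))
sharding = NamedSharding(mesh, P("x"))

# a field sharded along the mesh axis (size divisible by device count)
n = 8 * len(devices)
u = jax.device_put(jnp.arange(n, dtype=jnp.float32), sharding)

@jax.jit
def laplacian_1d(u):
    # jit compiles one program; XLA partitions it across the mesh,
    # inserting the halo communication that the rolls require
    return jnp.roll(u, 1) - 2.0 * u + jnp.roll(u, -1)

out = laplacian_1d(u)
```

For a linear input the interior of the periodic Laplacian is zero, with wrap-around contributions only at the two boundary cells; the same code runs unchanged on 1 or many GPUs, which is what makes strong-scaling comparisons like the one above straightforward.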
