GPUDrive


An extremely fast, data-driven driving simulator written in C++.

Highlights

  • ⚡️ Fast simulation for agent development and evaluation at 1 million FPS through the Madrona engine.
  • 🐍 Provides Python bindings and gymnasium wrappers in torch and jax.
  • 🏃‍➡️ Compatible with the Waymo Open Motion Dataset, featuring over 100K scenarios with human demonstrations.
  • 📜 Readily available PPO implementations via SB3 and CleanRL / Pufferlib.
  • 👀 Easily configure the simulator and agent views.
  • 🎨 Diverse agent types: Vehicles, cyclists and pedestrians.
<div align="center">

| Simulator state | Agent observation |
| ---------------------------------------------------------------- | ---------------------------------------------------------------- |
| <img src="assets/sim_video_7.gif" width="320px"> | <img src="assets/obs_video_7.gif" width="320px"> |
| <img src="assets/sim_video_0_10.gif" width="320px"> | <img src="assets/obs_video_0_10.gif" width="320px"> |

</div>

For details, see our paper and the introductory tutorials, which guide you through basic usage.

Installation

To build GPUDrive, ensure you have all the required dependencies installed, including CMake, Python, and the CUDA Toolkit. See the details below.

<details> <summary>Dependencies</summary>
  • CMake >= 3.24
  • Python >= 3.11
  • CUDA Toolkit >= 12.2 and <= 12.4 (We do not support CUDA versions 12.5+ at this time. Verify your CUDA version using nvcc --version.)
  • On macOS and Windows, install the required dependencies for XCode and Visual Studio C++ tools, respectively.
</details>

After installing the necessary dependencies, clone the repository (don't forget the --recursive flag!):

```bash
git clone --recursive https://github.com/Emerge-Lab/gpudrive.git
cd gpudrive
```

Then, there are two options for building the simulator:


<details> <summary>🔧 Option 1. Manual install </summary>

For Linux and macOS, use the following commands:

```bash
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j  # optionally set the number of cores to build with, e.g. make -j 32
cd ..
```

For Windows, open the cloned repository in Visual Studio and build the project using the integrated CMake support.

Next, set up a Python environment.

With uv (Recommended)

Create a virtual environment and install the Python components of the repository:

```bash
uv sync --frozen
```

With pyenv

Create a virtual environment:

```bash
pyenv virtualenv 3.11 gpudrive
pyenv activate gpudrive
```

Set it for the current project directory (optional):

```bash
pyenv local gpudrive
```

With conda

```bash
conda env create -f ./environment.yml
conda activate gpudrive
```

Install Python package

Finally, install the Python components of the repository using pip (this step is not required for the uv installation):

```bash
# macOS and Linux
pip install -e .

# Windows
pip install -e . -Cpackages.madrona_escape_room.ext-out-dir=<PATH_TO_YOUR_BUILD_DIR on Windows>
```

Dependency groups include pufferlib, sb3, vbd, and tests.
</details>

<details> <summary> 🐳 Option 2. Docker </summary>

To get started quickly, we provide a Dockerfile in the root directory.

Prerequisites

Ensure you have Docker installed, plus the NVIDIA Container Toolkit if you want GPU support.

Building the Docker Image

Once installed, you can build the container with:

```bash
DOCKER_BUILDKIT=1 docker build --build-arg USE_CUDA=true --tag gpudrive:latest --progress=plain .
```

Running the Container

To run the container with GPU support and shared memory:

```bash
docker run --gpus all -it --rm --shm-size=20G -v ${PWD}:/workspace gpudrive:latest /bin/bash
```
</details>

Test whether the installation was successful by importing the simulator:

```python
import madrona_gpudrive
```

To avoid recompiling in GPU mode every time, you can set the following environment variable to any custom path. For example, you can store the compiled program in a cache directory called gpudrive_cache:

```bash
export MADRONA_MWGPU_KERNEL_CACHE=./gpudrive_cache
```

Please remember that if you make any changes to the C++ code, you need to delete the cache and recompile.
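The same variable can also be set from Python, as long as it is set before the simulator module is imported. A minimal sketch (the helper name below is ours for illustration, not part of GPUDrive):

```python
import os

# Hypothetical convenience helper (not part of GPUDrive): point Madrona's
# kernel cache at a local directory *before* importing madrona_gpudrive,
# so repeated GPU runs can reuse the previously compiled kernels.
def enable_kernel_cache(path="./gpudrive_cache"):
    # setdefault keeps an existing value if the user already exported one.
    os.environ.setdefault("MADRONA_MWGPU_KERNEL_CACHE", path)
    return os.environ["MADRONA_MWGPU_KERNEL_CACHE"]

cache_dir = enable_kernel_cache()
# import madrona_gpudrive  # import only after the variable is set
```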


<details> <summary>Optional: If you want to use the Madrona viewer in C++</summary>

Extra dependencies to use Madrona viewer

To build the simulator with visualization support on Linux (build/viewer), you will need to install X11 and OpenGL development libraries. Equivalent dependencies are already installed by Xcode on macOS. For example, on Ubuntu:

```bash
sudo apt install libx11-dev libxrandr-dev libxinerama-dev libxcursor-dev libxi-dev mesa-common-dev libc++1
```
</details>

Integrations

| What | Info | Run | Training SPS |
| ------------------------- | ------------------------------- | ------------------------------------- | ---------- |
| IPPO implementation SB3 | IPPO, PufferLib, Implementation | `python baselines/ppo/ppo_sb3.py` | 25 - 50K |
| IPPO implementation PufferLib 🐡 | PPO | `python baselines/ppo/ppo_pufferlib.py` | 100 - 300K |

Getting started

To get started, see these entry points:

  • Our intro tutorials. These tutorials take approximately 30-60 minutes to complete and will guide you through the dataset, simulator, and how to populate the simulator with different types of actors.
  • The environment docs provide detailed info on environment settings and supported features.
<!-- <p align="center"> <img src="assets/GPUDrive_docs_flow.png" width="1300" title="Getting started"> </p> --> <!-- ## 📈 Tests To further test the setup, you can run the pytests in the root directory: ```bash pytest ``` To test if the simulator compiled correctly (and python lib did not), try running the headless program from the build directory. ```bash cd build ./headless CPU 1 # Run on CPU, 1 step ``` -->

Pre-trained policies

Several pre-trained policies are available via the PyTorchModelHubMixin class on 🤗 huggingface_hub.


Note: These models were trained with the environment configurations defined in examples/experimental/config/reliable_agents_params.yaml; changing environment/observation configurations will affect performance.


Usage

To load a pre-trained policy, use the following:

```python
from gpudrive.networks.late_fusion import NeuralNet

# Load pre-trained model via huggingface_hub
agent = NeuralNet.from_pretrained("daphne-cornelisse/policy_S10_000_02_27")
```

See tutorial 04 for all the details.
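As the module name suggests, the policy follows a late-fusion design: each observation modality is encoded separately and the embeddings are merged only before the policy head. A rough, self-contained illustration of that pattern (all layer sizes, observation dimensions, and the action count below are made up for the sketch and are not GPUDrive's actual values):

```python
import torch
import torch.nn as nn

class LateFusionSketch(nn.Module):
    """Illustrative only: the real network lives in gpudrive.networks.late_fusion."""

    def __init__(self, ego_dim=6, partner_dim=10, road_dim=13,
                 embed=64, n_actions=91):
        super().__init__()
        # One encoder per modality ("late" fusion: modalities are merged
        # only after being encoded independently).
        self.ego_enc = nn.Sequential(nn.Linear(ego_dim, embed), nn.ReLU())
        self.partner_enc = nn.Sequential(nn.Linear(partner_dim, embed), nn.ReLU())
        self.road_enc = nn.Sequential(nn.Linear(road_dim, embed), nn.ReLU())
        self.head = nn.Linear(3 * embed, n_actions)

    def forward(self, ego, partners, road):
        # Pool the variable-length sets of partners / road points with max.
        z = torch.cat(
            [
                self.ego_enc(ego),
                self.partner_enc(partners).max(dim=1).values,
                self.road_enc(road).max(dim=1).values,
            ],
            dim=-1,
        )
        return self.head(z)  # action logits

net = LateFusionSketch()
# Batch of 2 agents, each seeing 8 partners and 20 road points.
logits = net(torch.zeros(2, 6), torch.zeros(2, 8, 10), torch.zeros(2, 20, 13))
```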

Dataset

Download the dataset

  • Two versions of the dataset are available: a mini version with 1,000 training files and 300 test/validation files, and a full dataset with 100K unique scenes.
  • Replace 'GPUDrive_mini' with 'GPUDrive' below if you wish to download the full dataset.
<details> <summary>Download the dataset</summary>

To download the dataset, you need the huggingface_hub library:

```bash
pip install huggingface_hub
```

Then you can download the dataset using Python or the huggingface-cli.

  • Option 1: Using Python
>>> from hug