# FLAME GPU 2
FLAME GPU is a GPU accelerated agent-based simulation library for domain independent complex systems simulations. Version 2 is a complete re-write of the existing library offering greater flexibility, an improved interface for agent scripting and better research software engineering, with CUDA/C++ and Python interfaces.
FLAME GPU provides a mapping between formal agent specifications, with C++ based scripting, and optimised CUDA code. This includes a number of key Agent-based Modelling (ABM) building blocks, such as multiple agent types, agent communication, and birth and death allocation.
- Agent-based (AB) modellers are able to focus on specifying agent behaviour and run simulations without explicit understanding of CUDA programming or GPU optimisation strategies.
- Simulation performance is significantly increased in comparison with CPU alternatives. This allows simulation of far larger model sizes with high performance at a fraction of the cost of grid based alternatives.
- Massive agent populations can be visualised in real time as agent data is already located on the GPU hardware.
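As an illustrative sketch of the agent scripting interface (not taken from this repository's examples; the variable and message names below are placeholders), agent behaviour in the CUDA C++ interface is expressed as an agent function:

```cuda
#include "flamegpu/flamegpu.h"

// A minimal agent function sketch: each agent reads other agents' positions
// from a brute-force message list and stores a count of nearby agents.
// The variable names ("x", "neighbours") are illustrative placeholders.
FLAMEGPU_AGENT_FUNCTION(count_neighbours, flamegpu::MessageBruteForce, flamegpu::MessageNone) {
    const float x = FLAMEGPU->getVariable<float>("x");
    unsigned int count = 0;
    // Iterate all messages output by other agents in the previous layer
    for (const auto &message : FLAMEGPU->message_in) {
        if (fabsf(message.getVariable<float>("x") - x) < 1.0f)
            ++count;
    }
    FLAMEGPU->setVariable<unsigned int>("neighbours", count);
    return flamegpu::ALIVE;  // The agent survives this simulation step
}
```

The framework compiles functions like this to optimised CUDA, so the modeller specifies behaviour per-agent without writing GPU kernels directly.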
## Project Status
<!-- Remove this section once it is no longer in pre-release / alpha state -->FLAME GPU 2 is currently in a pre-release (release candidate) state. Although we hope there will not be significant changes to the API prior to a stable release, there may be breaking changes as we fix issues, adjust the API and improve performance. The use of native Python agent functions (agent functions expressed in Python syntax which are transpiled to C++) is currently supported (see examples) but is classed as an experimental feature.
If you encounter issues while using FLAME GPU, please report bugs, provide feedback or ask questions via GitHub Issues and Discussions.
## Documentation and Support

## Installation
Pre-compiled Python wheels are available for installation from Releases, and can also be installed via pip from whl.flamegpu.com. Wheels are not currently manylinux compliant. Please see the latest release for more information on the available wheels and installation instructions.
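As a sketch of the pip route (the exact index URL layout is an assumption based on whl.flamegpu.com; consult the latest release notes for the authoritative command):

```shell
# Install the pyflamegpu wheel, using whl.flamegpu.com as an extra package index.
# Check the latest release for the exact index URL and supported CUDA/Python combinations.
python3 -m pip install --extra-index-url https://whl.flamegpu.com pyflamegpu
```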
A C++/CUDA binary installation is not currently available; please refer to the section on Building FLAME GPU below.
## Creating your own FLAME GPU Model
Template repositories are provided as a simple starting point for your own FLAME GPU models, with separate template repositories for the CUDA C++ and Python interfaces. See the template repositories for further information on their use.
- CUDA C++: FLAME GPU 2 example template project
- Python3: FLAME GPU 2 python example template project
## Building FLAME GPU
FLAME GPU 2 uses CMake as a cross-platform build process, to configure and generate build files, e.g. Makefiles or `.vcxproj` projects. This is used to build the FLAMEGPU2 library, examples, tests and documentation.
### Requirements
Building FLAME GPU has the following requirements. There are also optional dependencies which are required for some components, such as Documentation or Python bindings.
- CMake `>= 3.25.2`
- CUDA `>= 12.0` (Linux) or `>= 12.4` (Windows)
  - FLAME GPU aims to support the 2 most recent major CUDA versions, currently `12` and `13`.
  - For native Windows builds, CUDA `12.0`-`12.3` may work for some but not all parts of FLAME GPU, due to C++20 compilation issues and MSVC support.
  - A Compute Capability `>= 5.0` (CUDA 12.x) or `>= 7.5` (CUDA 13.x) NVIDIA GPU is required for execution.
- C++20 capable C++ compiler (host), compatible with the installed CUDA version
  - Microsoft Visual Studio 2022 (Windows)
    - Note: Visual Studio must be installed before the CUDA toolkit is installed. See the CUDA installation guide for Windows for more information.
    - Note: the Windows 11 SDK (10.0.22000.0) component is required within Visual Studio (in recent versions this is included by default in the C++ Desktop Development workload, even on Windows 10). Windows 10 must be updated to build 19045 (22H2) or later to support this at runtime.
  - make and GCC `>= 10` (Linux)
- git
Optionally:

- cpplint for linting code
- Doxygen to build the documentation
- Python `>= 3.10` for Python integration
  - With the `setuptools`, `wheel`, `build` and optionally `venv` Python packages installed
  - On Windows, CUDA `>= 12.4` is required for Python integration
- swig `>= 4.1.0` for Python integration (with C++20 support)
  - SWIG `>= 4.1.0` will be automatically downloaded by CMake if not provided (where possible).
  - SWIG `4.2.0` and `4.2.1` are known to encounter issues in some cases; consider using an alternative SWIG version.
- MPI (e.g. MPICH, OpenMPI) for distributed ensemble support
  - MPI 3.0+ is tested; older MPI implementations may work but are untested.
- FLAMEGPU2-visualiser dependencies
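A quick way to sanity-check the core toolchain against the minimum versions above (assuming the tools are already on your `PATH`):

```shell
# Print versions of the core build requirements to compare against the minimums
# listed above (CMake >= 3.25.2, CUDA >= 12.0, GCC >= 10, SWIG >= 4.1.0)
cmake --version
nvcc --version
g++ --version        # Linux host compiler
python3 --version    # only needed for the Python bindings
swig -version        # only needed for the Python bindings
```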
### Building with CMake
Building via CMake is a three-step process, with slight differences depending on your platform:

1. Create a build directory for an out-of-tree build.
2. Configure CMake into the build directory, using the CMake GUI or CLI tools.
   - Specify build options such as the CUDA Compute Capabilities to target, the inclusion of visualisation or Python components, or performance-impacting features such as `FLAMEGPU_SEATBELTS`. See CMake Configuration Options for details of the available configuration options.
   - CMake will automatically find and select compilers, libraries and Python interpreters based on current environment variables and default locations. See Mastering CMake for more information.
     - Python dependencies must be installed in the selected Python environment. If needed, you can instruct CMake to use a specific Python installation using the `Python_ROOT_DIR` and `Python_EXECUTABLE` CMake options at configure time.
3. Build compilation targets using the configured build system.
   - See Available Targets for a list of available targets.
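For example, to pin the build to a particular Python interpreter at configure time (the interpreter path below is a placeholder; substitute your own, e.g. from a venv):

```shell
# Hypothetical example: configure with Python bindings against a specific interpreter.
# /path/to/venv/bin/python is a placeholder path.
cmake .. -DFLAMEGPU_BUILD_PYTHON=ON -DPython_EXECUTABLE=/path/to/venv/bin/python
```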
#### Linux
To build under Linux using the command line, you can perform the following steps. For example, to configure CMake for Release builds targeting consumer Pascal GPUs (Compute Capability 61), with Python bindings enabled, and to build the static library and the `boids_bruteforce` example binary:
```bash
# Create the build directory and change into it
mkdir -p build && cd build
# Configure CMake from the command line, passing configure-time options
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_ARCHITECTURES=61 -DFLAMEGPU_BUILD_PYTHON=ON
# Build the required targets, in this case the flamegpu library and one example
cmake --build . --target flamegpu boids_bruteforce -j 8
# Alternatively make can be invoked directly
make flamegpu boids_bruteforce -j8
```
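Once built, the example can be run from the build directory. The output path and CLI flag below are assumptions (built binaries are typically placed under a per-configuration `bin/` subdirectory, and simulation binaries accept runtime options; run with `--help` to check):

```shell
# Run the built example for 100 simulation steps (path and -s flag assumed; see --help)
./bin/Release/boids_bruteforce -s 100
```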
#### Windows
Under Windows, you must instruct CMake which Visual Studio generator and architecture to build for, using the CMake `-G` and `-A` options. This can be done through the GUI or the CLI. For example, to configure CMake for consumer Pascal GPUs (Compute Capability 61), with Python bindings enabled, and to build the static library and the `boids_bruteforce` example binary in the Release configuration:
```bat
REM Create the build directory and change into it
mkdir build
cd build
REM Configure CMake from the command line, specifying the -A and -G options. Alternatively use the GUI
cmake .. -A x64 -G "Visual Studio 17 2022" -DCMAKE_CUDA_ARCHITECTURES=61 -DFLAMEGPU_BUILD_PYTHON=ON
REM You can then open Visual Studio manually from the .sln file, or via:
cmake --open .
REM Alternatively, build from the command line specifying the build configuration
cmake --build . --config Release --target flamegpu boids_bruteforce -j 8
```
