# TileGym

Helpful kernel tutorials and examples for tile-based GPU programming.
TileGym is a CUDA Tile kernel library that provides a rich collection of kernel tutorials and examples for tile-based GPU programming.
Overview | Features | Installation | Quick Start | Contributing | License
## Overview
This repository aims to provide helpful kernel tutorials and examples for tile-based GPU programming. TileGym is a playground for experimenting with CUDA Tile, where you can learn how to build efficient GPU kernels and explore their integration into real-world large language models such as Llama 3.1 and DeepSeek V2. Whether you're learning tile-based GPU programming or looking to optimize your LLM implementations, TileGym offers practical examples and comprehensive guidance.

<img width="95%" alt="tilegym_1_newyear" src="https://github.com/user-attachments/assets/f37010f5-14bc-44cd-bddf-f517dc9922b8" />
## Features
- Rich collection of CUDA Tile kernel examples
- Practical kernel implementations for common deep learning operators
- Performance benchmarking to evaluate kernel efficiency
- End-to-end integration examples with popular LLMs (Llama 3.1, DeepSeek V2)
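The tile-based style these examples teach can be illustrated in plain Python: a matrix multiply is decomposed into fixed-size output tiles, and each "tile program" accumulates partial products over tiles of the shared dimension, mirroring how a GPU block owns one output tile. This is a conceptual, library-free sketch (the `TILE` size and function name are illustrative, not TileGym code):

```python
# Conceptual sketch of tile-based decomposition (plain Python, not TileGym).
# C = A @ B is split into TILE x TILE output tiles; each "tile program"
# accumulates partial products over tiles of the shared K dimension.

TILE = 2  # illustrative tile size

def matmul_tiled(A, B):
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i0 in range(0, m, TILE):          # output tile row
        for j0 in range(0, n, TILE):      # output tile column
            for k0 in range(0, k, TILE):  # reduction over K tiles
                for i in range(i0, min(i0 + TILE, m)):
                    for j in range(j0, min(j0 + TILE, n)):
                        for kk in range(k0, min(k0 + TILE, k)):
                            C[i][j] += A[i][kk] * B[kk][j]
    return C
```

On a GPU, each (i0, j0) tile would map to one block and the K loop would stream tiles through fast on-chip memory; the arithmetic is identical to the naive triple loop, only the iteration order changes.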
## Installation

### Prerequisites
⚠️ Important: TileGym requires CUDA 13.1 and NVIDIA Blackwell architecture GPUs (e.g., B200, RTX 5080, RTX 5090). We will support other GPU architectures in the future. Download CUDA from NVIDIA CUDA Downloads.
- PyTorch (version 2.9.1 or compatible)
- CUDA 13.1 (Required - TileGym is built and tested exclusively on CUDA 13.1)
- Triton (included with PyTorch installation)
### Setup Steps

#### 1. Prepare the torch and triton environment

If you already have torch and triton, skip this step.
```shell
pip install --pre torch --index-url https://download.pytorch.org/whl/cu130
```
We have verified that torch==2.9.1 works; installing torch this way also pulls in a compatible triton package.
#### 2. Install TileGym
```shell
git clone https://github.com/NVIDIA/TileGym.git
cd TileGym
pip install -r requirements.txt
pip install .
```
All runtime dependencies are declared in requirements.txt. Running pip install . also installs them automatically, but you can pre-install with pip install -r requirements.txt if you prefer an explicit step.
Installing TileGym also installs cuda-tile automatically; see https://github.com/nvidia/cutile-python.

If you want an editable install of TileGym, run `pip install -e .` instead.

We also provide a Dockerfile; see modeling/transformers/README.md.
## Quick Start
There are three main ways to use TileGym:
### 1. Explore Kernel Examples
All kernel implementations are located in the src/tilegym/ops/ directory, and you can test individual operations with minimal scripts. Function-level usage and the minimal scripts for individual ops are documented in tests/ops/README.md.
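The minimal scripts referenced above typically follow one pattern: run the kernel under test, run a trusted reference, and compare the outputs within a tolerance. A generic, library-free sketch of that pattern (using a numerically stable softmax as the stand-in "kernel" and a naive softmax as the reference; none of these names are TileGym APIs):

```python
# Generic kernel-vs-reference check pattern (library-free sketch; the
# actual TileGym test scripts are documented in tests/ops/README.md).
import math

def softmax_ref(xs):
    # Naive reference: exponentiate, then normalize.
    es = [math.exp(x) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def softmax_stable(xs):
    # "Kernel under test": max-subtracted, numerically stable softmax.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def allclose(a, b, atol=1e-6):
    # Element-wise comparison within an absolute tolerance.
    return all(abs(x - y) <= atol for x, y in zip(a, b))

out = softmax_stable([1.0, 2.0, 3.0])
ref = softmax_ref([1.0, 2.0, 3.0])
print("match:", allclose(out, ref))  # → match: True
```

With real GPU kernels the reference is usually the corresponding PyTorch op, and the tolerance is loosened for reduced-precision dtypes.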
### 2. Run Benchmarks

Evaluate kernel performance with micro-benchmarks:

```shell
cd tests/benchmark
bash run_all.sh
```
The complete benchmark guide is available in tests/benchmark/README.md.
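run_all.sh drives the full suite; at its core, any micro-benchmark is a warmup-then-time loop that reports a robust statistic. A stdlib-only sketch of that pattern (the `bench` helper is illustrative, not the TileGym harness):

```python
# Minimal micro-benchmark pattern: warm up, then report the median of
# several timed repeats (stdlib-only sketch, not the TileGym harness).
import statistics
import time

def bench(fn, *args, warmup=3, repeats=10):
    for _ in range(warmup):          # warmup: exclude one-time setup costs
        fn(*args)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)  # median is robust to outlier runs

# Example: time a small CPU reduction.
ms = bench(sum, range(100_000)) * 1e3
print(f"median: {ms:.3f} ms")
```

For GPU kernels, remember that launches are asynchronous: synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock, or you time only the kernel launch rather than its execution.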
### 3. Run LLM Transformer Examples
Use TileGym kernels in end-to-end inference scenarios. We provide runnable scripts and instructions for transformer language models (e.g., Llama 3.1-8B) accelerated using TileGym kernels.
First, install the additional dependency:
```shell
pip install accelerate==1.13.0 --no-deps
```
**Containerized Setup (Docker):**

```shell
docker build -t tilegym-transformers -f modeling/transformers/Dockerfile .
docker run --gpus all -it tilegym-transformers bash
```
More details are in modeling/transformers/README.md.
### 4. Julia (cuTile.jl) Kernels (Optional)
TileGym also includes experimental cuTile.jl kernel implementations in Julia. These are self-contained in the julia/ directory and do not require the Python TileGym package.
**Prerequisites:** Julia 1.12+, CUDA 13.1, Blackwell GPU
```shell
# Install Julia (if not already installed)
curl -fsSL https://install.julialang.org | sh

# Install dependencies
julia --project=julia/ -e 'using Pkg; Pkg.instantiate()'

# Run tests
julia --project=julia/ julia/test/runtests.jl
```
See julia/Project.toml for the full dependency list.
## Contributing
We welcome contributions of all kinds. Please read our CONTRIBUTING.md for guidelines, including the Contributor License Agreement (CLA) process.
## License and third-party notices
- Project license: MIT
- Third-party attributions and license texts: