TinyGPUlang

Tutorial on building a GPU compiler backend in LLVM

Install / Use

/learn @adamtiger/TinyGPUlang
About this skill

Quality Score

0/100

Supported Platforms

Universal

README

tinyGPUlang

Tutorial on building a GPU compiler backend in LLVM

Goals

The goal of this tutorial is to show, through a simple example, how to generate PTX from LLVM IR, and how to write the IR itself so that it can access CUDA features.

For the sake of demonstration, a language frontend is also provided. The main idea of the language is to support pointwise (aka elementwise) operations with GPU acceleration.

If you are only curious about the code generation backend, you can jump directly to the The code generator for NVPTX backend section.
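
To make the goal concrete: emitting PTX from IR usually means writing (or generating) a kernel in LLVM IR for the nvptx64 target and then lowering it with `llc`. The sketch below is illustrative only, not code from this repo; it uses the pre-opaque-pointer IR syntax, and the function name and annotation details are assumptions:

```llvm
; Sketch of a pointwise add kernel for the NVPTX backend (illustrative names).
target triple = "nvptx64-nvidia-cuda"

define void @add_kernel(float* %a, float* %b, float* %out) {
entry:
  ; global thread index = ctaid.x * ntid.x + tid.x (CUDA special registers
  ; exposed as NVVM intrinsics)
  %ctaid = call i32 @llvm.nvvm.read.ptx.sreg.ctaid.x()
  %ntid  = call i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
  %tid   = call i32 @llvm.nvvm.read.ptx.sreg.tid.x()
  %base  = mul i32 %ctaid, %ntid
  %idx   = add i32 %base, %tid

  ; out[idx] = a[idx] + b[idx]
  %pa  = getelementptr float, float* %a, i32 %idx
  %pb  = getelementptr float, float* %b, i32 %idx
  %po  = getelementptr float, float* %out, i32 %idx
  %va  = load float, float* %pa
  %vb  = load float, float* %pb
  %sum = fadd float %va, %vb
  store float %sum, float* %po
  ret void
}

declare i32 @llvm.nvvm.read.ptx.sreg.ctaid.x()
declare i32 @llvm.nvvm.read.ptx.sreg.ntid.x()
declare i32 @llvm.nvvm.read.ptx.sreg.tid.x()

; Mark the function as a kernel entry point for the NVPTX backend
!nvvm.annotations = !{!0}
!0 = !{void (float*, float*, float*)* @add_kernel, !"kernel", i32 1}
```

Such a file can then be lowered to PTX with something like `llc -march=nvptx64 -mcpu=sm_52 add.ll -o add.ptx` (the SM version depends on your target GPU).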

What is inside the repo?

  • tinyGPUlang: the compiler; creates PTX from tgl files (the example language)
  • test: a CUDA Driver API based test for the generated PTX
  • examples: example tgl files
  • docs: documentation for the tutorial
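
A Driver API based test for generated PTX typically follows the pattern below. This is a hedged sketch, not the repo's actual test code; the file name `add.ptx` and kernel name `add_kernel` are placeholders, and error checking is omitted for brevity:

```
#include <cuda.h>   // CUDA Driver API
#include <cstdio>
#include <vector>

int main() {
    cuInit(0);
    CUdevice dev;   cuDeviceGet(&dev, 0);
    CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);

    // Loading a .ptx file JIT-compiles it for the current device
    CUmodule mod;   cuModuleLoad(&mod, "add.ptx");
    CUfunction fn;  cuModuleGetFunction(&fn, mod, "add_kernel");

    const int n = 256;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
    CUdeviceptr da, db, dc;
    cuMemAlloc(&da, n * sizeof(float));
    cuMemAlloc(&db, n * sizeof(float));
    cuMemAlloc(&dc, n * sizeof(float));
    cuMemcpyHtoD(da, ha.data(), n * sizeof(float));
    cuMemcpyHtoD(db, hb.data(), n * sizeof(float));

    // Kernel arguments are passed as an array of pointers to the values
    void* args[] = { &da, &db, &dc };
    cuLaunchKernel(fn, 1, 1, 1,    // grid dims
                       n, 1, 1,    // block dims
                       0, nullptr, args, nullptr);
    cuCtxSynchronize();
    cuMemcpyDtoH(hc.data(), dc, n * sizeof(float));
    std::printf("c[0] = %f\n", hc[0]);

    cuMemFree(da); cuMemFree(db); cuMemFree(dc);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
}
```

This requires an NVIDIA GPU and linking against the driver library (`-lcuda`), which is why such tests live in a separate folder from the compiler itself.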

Tutorial content

  1. Overview
  2. The TGL language
  3. Abstract Syntax Tree
  4. The code generator for NVPTX backend
  5. Short overview of the parser

Build

See the How to build the project? documentation for further details.

GitHub Stars: 55
Category: Development
Updated: 1 month ago
Forks: 11

Languages

C++

Security Score

100/100

Audited on Feb 22, 2026

No findings