# NanoDiffusion
The simplest diffusion model in PyTorch, with Apple M chip acceleration support.
Training a decent model on MNIST takes only 10–30 minutes on a MacBook!
Supported samplers:
- DDPM ("Denoising Diffusion Probabilistic Models")
- DDIM ("Denoising Diffusion Implicit Models")
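Both samplers use the same trained noise-prediction network; they differ only in the reverse update. Below is a minimal sketch of one reverse step for each, not NanoDiffusion's actual code: the linear beta schedule, the DDPM variance choice sigma_t^2 = beta_t, and all function names are assumptions.

```python
import torch

def ddpm_step(xt, eps_pred, t, betas, alpha_bars):
    # One stochastic DDPM reverse step (fresh noise is added for t > 0).
    alpha_t, abar_t = 1.0 - betas[t], alpha_bars[t]
    mean = (xt - betas[t] / (1 - abar_t).sqrt() * eps_pred) / alpha_t.sqrt()
    noise = torch.randn_like(xt) if t > 0 else torch.zeros_like(xt)
    return mean + betas[t].sqrt() * noise

def ddim_step(xt, eps_pred, t, t_prev, alpha_bars, eta=0.0):
    # One DDIM step; deterministic when eta = 0, DDPM-like noise as eta -> 1.
    abar_t = alpha_bars[t]
    abar_prev = alpha_bars[t_prev] if t_prev >= 0 else torch.tensor(1.0)
    # Predict x0 from the current sample and the network's noise estimate.
    x0_pred = (xt - (1 - abar_t).sqrt() * eps_pred) / abar_t.sqrt()
    sigma = eta * ((1 - abar_prev) / (1 - abar_t)).sqrt() \
                * (1 - abar_t / abar_prev).sqrt()
    dir_xt = (1 - abar_prev - sigma**2).clamp(min=0).sqrt() * eps_pred
    return abar_prev.sqrt() * x0_pred + dir_xt + sigma * torch.randn_like(xt)

# Demo with a stand-in for the UNet's noise prediction.
betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bars = torch.cumprod(1 - betas, dim=0)
xt = torch.randn(1, 1, 28, 28)
eps = torch.randn_like(xt)
x_ddpm = ddpm_step(xt, eps, 999, betas, alpha_bars)
x_ddim = ddim_step(xt, eps, 999, 899, alpha_bars, eta=0.0)
print(x_ddpm.shape, x_ddim.shape)  # both torch.Size([1, 1, 28, 28])
```

Because DDIM only needs `alpha_bars` at the timesteps it visits, it can skip steps, which is how 1000 training steps can be sampled in 100 DDIM steps.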
## Quick start
First, download the MNIST dataset:

```shell
python data.py
```

This downloads the compressed MNIST dataset into the `data` folder.
After getting the training data, you can check how the noise-adding process works:

```shell
python sampler.py
```
Note that the noise-adding (forward) process is the same for every sampler. Here's an example image:

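The forward process has a closed form, which is why it is shared by every sampler: a noisy sample at any timestep can be drawn directly from the clean image. A hedged sketch follows; the linear beta schedule and function names are assumptions, not NanoDiffusion's actual code.

```python
import torch

def make_alpha_bars(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linear beta schedule; alpha_bar_t is the cumulative product of (1 - beta).
    betas = torch.linspace(beta_start, beta_end, T)
    return torch.cumprod(1.0 - betas, dim=0)

def add_noise(x0, t, alpha_bars):
    # Closed-form forward step: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over image dims
    return ab.sqrt() * x0 + (1 - ab).sqrt() * eps, eps

# Noise a batch of blank MNIST-sized images at four different timesteps.
x0 = torch.zeros(4, 1, 28, 28)
abars = make_alpha_bars()
xt, eps = add_noise(x0, torch.tensor([0, 250, 500, 999]), abars)
print(xt.shape)  # torch.Size([4, 1, 28, 28])
```

At t = 0 the sample is almost unchanged; by t = 999 it is essentially pure Gaussian noise, matching the progression in the image above.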
Train the diffusion model (a small UNet) and generate new images with:

```shell
python main.py
```
You can set `train_model = False` to skip training and load the model checkpoint instead. Below are some examples generated by the different samplers.
DDPM, 100 epochs of training, 1000 sample steps

DDIM, 100 epochs of training, 100 DDIM sample steps over a 1000-step DDPM schedule, eta = 0

## Acknowledgements
I started building the pipeline from the examples in SingleZombie/DL-Demos. Thanks, Yifan!