# 🎯 faff
Drop the faff, dodge the judgment, get back to coding.
Stop staring at that staged diff like it owes you money. We all know the drill: you've made brilliant changes, git knows exactly what happened, but translating that into a proper Conventional Commits 1.0.0 message feels like explaining your code to your pets 🐾 faff uses local LLMs via Ollama to automatically generate commit messages from your diffs, because your changes already tell the story; they just need a translator that speaks developer 🧑‍💻
faff is a productivity tool for the mundane stuff, not a replacement for thoughtful communication.
## ✨ Why faff?
We've all been there: you spend longer crafting the commit message than writing the actual code. "Was that a feat: or fix:?" you wonder, as your staged diff sits there perfectly describing everything while you faff about trying to translate it into prose.
You either end up with "Updated stuff" (again!) or some overwrought novel nobody will read. Meanwhile, cloud-based tools want to slurp up your "TODO: delete this abomination" comments and questionable variable names, all while extracting money from your wallet 💸
faff exists because your diffs already know what happened; they just need a local AI translator that follows conventional commits rules without the existential crisis. Drop the faff, dodge the judgment, get back to coding.
So yes, faff is another bloody AI commit generator. The Internet's already drowning in them, so here's another one to add to the deluge of "my first AI projects" 🧠 faff started as me having a poke around the Ollama API while thinking "surely we can do this locally without sending the content of our wallets to the vibe-coding dealers?" It's basically a learning project that accidentally became useful, like most of the best tools, really.
- 🤖 AI-Powered: Uses local Ollama LLMs for "intelligent" commit message generation
- 📋 Standards-Compliant: Follows the Conventional Commits specification, most of the time, if you're lucky
- 🕵️ Privacy-First: Runs entirely locally - your code never leaves your machine, until you push it to GitHub
- 🤞 Simple Setup: Auto-downloads models and handles all dependencies, except it doesn't - that was a marketing lie
- 🎨 Beautiful UX: Elegant progress indicators and interactive prompts, for a shell script
## 🚀 Quick Start

### Prerequisites
- Ollama installed and running somewhere
- coreutils or uutils/coreutils
- `bc`, `curl` and `jq`
- Bash version 4.0 or later
- A git repository with staged changes
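If you want a quick sanity check before the first run, something like this (my own sketch, not part of faff) will do:

```shell
#!/usr/bin/env bash
# Hypothetical helper: confirm the required tools are on $PATH and
# that Bash is new enough. Not shipped with faff.
check_prereqs() {
  local cmd ok=0
  for cmd in bc curl jq git; do
    command -v "$cmd" > /dev/null || { echo "missing: $cmd" >&2; ok=1; }
  done
  # The README asks for Bash 4.0 or later
  [ "${BASH_VERSINFO[0]}" -ge 4 ] || { echo "bash >= 4.0 required" >&2; ok=1; }
  return "$ok"
}
```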
### Install

Download faff, make it executable, and put it somewhere in your `$PATH`.

```shell
curl -o faff.sh https://raw.githubusercontent.com/wimpysworld/faff/refs/heads/main/faff.sh
chmod +x faff.sh
sudo mv faff.sh /usr/local/bin/faff
```
### Basic Usage

The standard workflow is to stage some changes and let faff generate your commit message.

```shell
git add .
faff
```

That's it! faff will analyze your changes and generate a commit message.
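Under the hood the idea is simple. Here's a stripped-down sketch of the approach, assuming a local Ollama at the default port; this is an illustration, not faff's actual code:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the core loop: send the staged diff to a local
# Ollama model and print the suggested Conventional Commits message.
generate_commit_message() {
  local diff model="${FAFF_MODEL:-qwen2.5-coder:7b}"
  diff=$(git diff --cached)
  [ -n "$diff" ] || { echo "No staged changes" >&2; return 1; }
  # Build the request with jq, then POST it to Ollama's generate endpoint
  jq -n --arg m "$model" --arg d "$diff" \
    '{model: $m, stream: false,
      prompt: ("Write a Conventional Commits 1.0.0 message for this diff:\n\n" + $d)}' \
    | curl -s "http://${OLLAMA_HOST:-localhost}:${OLLAMA_PORT:-11434}/api/generate" -d @- \
    | jq -r '.response'
}
```

faff itself does rather more than this (prompt engineering, validation, commitlint awareness), but that's the shape of it.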
## 🧠 AI Models

I've mostly tested faff with the qwen2.5-coder family of models, as they've produced the best results. Choose one based on your available VRAM or unified memory:
| Model              | VRAM  | Speed | Quality |
|--------------------|-------|-------|---------|
| qwen2.5-coder:1.5b | ~1GB  | ⚡⚡⚡⚡  | ⭐⭐      |
| qwen2.5-coder:3b   | ~2GB  | ⚡⚡⚡   | ⭐⭐⭐     |
| qwen2.5-coder:7b   | ~5GB  | ⚡⚡⚡   | ⭐⭐⭐⭐    |
| qwen2.5-coder:14b  | ~9GB  | ⚡⚡    | ⭐⭐⭐⭐⭐   |
| qwen2.5-coder:32b  | ~20GB | ⚡     | ⭐⭐⭐⭐⭐   |
Any model supported by Ollama will work, so feel free to experiment 🧪 Share your feedback and observations in the faff discussions 🗨️ so we can all benefit.
### Using a Custom Model

To use a specific model, just override the `FAFF_MODEL` environment variable.

```shell
FAFF_MODEL="qwen2.5-coder:3b" faff
```
### Environment Variables

Customize faff's behavior through environment variables:

```shell
# Model selection (default: qwen2.5-coder:7b)
export FAFF_MODEL="qwen2.5-coder:14b"

# Ollama connection (defaults to http://localhost:11434)
export OLLAMA_HOST="your-ollama-server.com"
export OLLAMA_PORT="11434"
export OLLAMA_PROTOCOL="http"

# Optional API key for Ollama, if the API is protected
export OLLAMA_TOKEN="sk-ollama-kasdjfhlwekjfhlashjehasjfgsdejsj"

# API timeout in seconds (default: 180)
export FAFF_TIMEOUT=300
```
## 🔗 Git Integration

Add helpful aliases to your `~/.gitconfig`:

```ini
[alias]
    faff = "!faff"              # Generate commit with faff
    vibe = "!git add . && faff" # Stage all and commit with faff
```
## 🔧 Commitlint Integration

Got a commitlint config in your project? Lovely. faff will automatically detect it and constrain the AI to only use your allowed scopes - no configuration required, no extra dependencies, it just works.

If faff finds a `.commitlintrc.json` or `commitlint.config.json` in your repository, it extracts the scopes from `rules.scope-enum` and tells the LLM to stick to them. Your commits get the proper `type(scope): description` format without the AI going off-piste with invented scopes.
No commitlint config? No worries - faff carries on exactly as before.
### Example Config

Here's a commitlint config that faff will pick up:

```json
{
  "extends": ["@commitlint/config-conventional"],
  "rules": {
    "scope-enum": [2, "always", ["api", "cli", "docs", "tests"]]
  }
}
```
With this config, faff will only generate commits using `api`, `cli`, `docs`, or `tests` as scopes. Keeps everything tidy without you having to remember what scopes exist.
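The scope extraction can be sketched in a few lines of jq. This is a simplified illustration of the idea; faff's actual parsing may differ:

```shell
#!/usr/bin/env bash
# Simplified sketch: read the allowed scopes from a commitlint config.
# In commitlint, scope-enum is [severity, applicability, [scopes...]].
extract_scopes() {
  local config
  for config in .commitlintrc.json commitlint.config.json; do
    [ -f "$config" ] || continue
    jq -r '.rules["scope-enum"][2] // [] | join(", ")' "$config"
    return 0
  done
  return 1
}
```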
## 🔍 Troubleshooting

### Common Issues

❌ "Ollama service is not running"

Start Ollama:

```shell
ollama serve
```
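If you're not sure whether the API is reachable at all, a quick probe of Ollama's model-listing endpoint will tell you (my own snippet, not part of faff):

```shell
#!/usr/bin/env bash
# Hypothetical helper: returns success if the Ollama API answers.
ollama_up() {
  curl -sf "http://${OLLAMA_HOST:-localhost}:${OLLAMA_PORT:-11434}/api/tags" > /dev/null
}
ollama_up && echo "Ollama is reachable" || echo "Ollama is not reachable"
```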
❌ "No changes to commit"

Stage some changes first:

```shell
git add .
```
## 🤝 Contributing
We welcome contributions! Whether you're fixing bugs, adding features, or improving documentation, your help makes faff better for everyone.
