# aicommit

A CLI tool that generates concise and descriptive git commit messages using LLMs (Large Language Models).
## Features

### Implemented Features

- ✅ Uses LLMs to generate meaningful commit messages from your changes
- ✅ Supports multiple LLM providers:
  - OpenRouter (cloud)
  - Simple Free OpenRouter (automatically uses the best available free models)
  - Ollama (local)
  - OpenAI-compatible endpoints (LM Studio, local OpenAI proxy, etc.)
- ✅ Automatically stages changes with the `--add` option
- ✅ Pushes commits automatically with `--push`
- ✅ Push to all remotes at once with `--push-all` (GitHub, GitLab, etc.)
- ✅ Interactive mode with `--dry-run`
- ✅ Watch mode with `--watch`
- ✅ Verbose mode with `--verbose`
- ✅ Version control helpers:
  - Automatic version bumping (`--version-iterate`)
  - Cargo.toml version sync (`--version-cargo`)
  - package.json version sync (`--version-npm`)
  - GitHub version update (`--version-github`)
- ✅ Smart retry mechanism for API failures
- ✅ Easy configuration management
- ✅ VS Code extension available
## Simple Free Mode
The Simple Free mode allows you to use OpenRouter's free models without having to manually select a model. You only need to provide an OpenRouter API key, and the system will:
- Automatically query OpenRouter for currently available free models
- Select the best available free model based on an internally ranked list
- Automatically switch to alternative models using an advanced failover mechanism
- Track model performance with a sophisticated jail/blacklist system
- Fall back to predefined free models if network connectivity is unavailable
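The selection-with-fallback behavior described above can be sketched as follows. This is a minimal illustration, not aicommit's actual code: the ranked list shown is a small excerpt and `pick_model` is a hypothetical helper name.

```python
# Hypothetical sketch of Simple Free model selection: pick the highest-ranked
# free model that is currently available, falling back to the predefined
# ranking when the live query to OpenRouter fails.

RANKED_FREE_MODELS = [  # predefined ranking, best first (illustrative excerpt)
    "meta-llama/llama-4-maverick:free",
    "nvidia/llama-3.1-nemotron-ultra-253b-v1:free",
    "qwen/qwen3-235b-a22b:free",
]

def pick_model(available):
    """Return the best ranked model that is available.

    `available` is the list of free model IDs fetched from OpenRouter,
    or None when network connectivity is unavailable.
    """
    if available is None:
        # Offline: trust the predefined ranking
        return RANKED_FREE_MODELS[0]
    for model in RANKED_FREE_MODELS:
        if model in available:
            return model
    # Nothing from the ranking is live: take any available free model
    return available[0]
```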
To set up Simple Free mode:

```bash
# Interactive setup
aicommit --add-provider
# Select "Simple Free OpenRouter" from the menu

# Or non-interactive setup
aicommit --add-simple-free --openrouter-api-key=<YOUR_API_KEY>
```
### Advanced Failover Mechanism
The Simple Free mode uses a sophisticated failover mechanism to ensure optimal model selection:
- Three-Tier Model Status: Models are categorized as `Active`, `Jailed` (temporary restriction), or `Blacklisted` (long-term ban).
- Counter-Based System: Tracks the success/failure ratio for each model; 3 consecutive failures move a model to `Jailed` status.
- Time-Based Jail: Models are temporarily jailed for 24 hours after repeated failures, with increasing jail time for recidivism.
- Blacklist Management: Models with persistent failures over multiple days are blacklisted but retried weekly.
- Success Rate Tracking: Records performance history to prioritize more reliable models.
- Smart Reset: Models get fresh chances daily, and users can manually reset restrictions with the `--unjail` and `--unjail-all` commands.
- Network Error Handling: Distinguishes between model errors and connection issues to avoid unfair penalties.
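The counter-and-jail logic above can be sketched roughly like this. The class and field names are illustrative assumptions, not aicommit's actual data model:

```python
import time

JAIL_SECONDS = 24 * 60 * 60  # 24-hour base jail, per the description above
FAILURES_TO_JAIL = 3

class ModelStatus:
    """Minimal sketch of the failover counter (hypothetical field names)."""

    def __init__(self):
        self.consecutive_failures = 0
        self.jailed_until = 0.0
        self.jail_count = 0  # recidivism multiplier

    def record_failure(self, now=None):
        now = time.time() if now is None else now
        self.consecutive_failures += 1
        if self.consecutive_failures >= FAILURES_TO_JAIL:
            self.jail_count += 1
            # Jail time grows with each repeat offense
            self.jailed_until = now + JAIL_SECONDS * self.jail_count
            self.consecutive_failures = 0

    def record_success(self):
        self.consecutive_failures = 0

    def is_active(self, now=None):
        now = time.time() if now is None else now
        return now >= self.jailed_until
```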
Model management commands:
```bash
# Show status of all model jails/blacklists
aicommit --jail-status

# Release specific model from restrictions
aicommit --unjail <model-id>

# Release all models from restrictions
aicommit --unjail-all
```
### Benefits of Simple Free Mode
- Zero Cost: Uses only free models from OpenRouter
- Automatic Selection: No need to manually choose the best free model
- Resilient Operation: If one model fails, it automatically switches to the next best model
- Advanced Failover: Uses a sophisticated system to track model performance over time
- Learning Algorithm: Adapts to changing model reliability by tracking success rates
- Self-Healing: Automatically retries previously failed models after a cooling-off period
- Network Resilience: Works even when network connectivity to OpenRouter is unavailable by using predefined models
- Always Up-to-Date: Checks for currently available free models each time
- Best Quality First: Uses a predefined ranking of models, prioritizing the most powerful ones
- Future-Proof: Intelligently handles new models by analyzing model names for parameter counts
The ranked list includes powerful models like:
- Meta's Llama 4 Maverick and Scout
- NVIDIA's Nemotron Ultra models (253B parameters)
- Qwen's massive 235B parameter models
- Many large models from the 70B+ parameter family
- And dozens of other high-quality free options of various sizes
Even if the preferred models list becomes outdated over time, the system will intelligently identify the best available models based on their parameter size by analyzing model names (e.g., models with "70b" or "32b" in their names).
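The name-based sizing heuristic might look like the following sketch (`parameter_count` is an illustrative helper, not aicommit's actual function):

```python
import re

def parameter_count(model_name):
    """Extract a parameter count in billions from a model name,
    e.g. "llama-3.3-70b-instruct:free" -> 70.0.
    Returns 0.0 when no size marker is found.
    """
    # Match digit runs immediately followed by "b" at a word boundary,
    # such as "70b", "32b", or "235b"
    matches = re.findall(r"(\d+(?:\.\d+)?)b\b", model_name.lower())
    return max((float(m) for m in matches), default=0.0)
```

With several candidates, sorting by this value would surface the largest model even when the predefined ranking does not mention it.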
For developers who want to see all available free models, a utility script is included:
```bash
python bin/get_free_models.py
```
This script will:
- Fetch all available models from OpenRouter
- Identify which ones are free
- Save the results to JSON and text files for reference
- Display a summary of available options
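The core filtering step can be illustrated as below, assuming the response shape of OpenRouter's models endpoint, where pricing values are strings and a model is free when both prompt and completion prices are `"0"`. The sample data here stands in for the live API:

```python
def free_models(models):
    """Return the IDs of models whose prompt and completion prices are zero."""
    return [
        m["id"]
        for m in models
        if m["pricing"]["prompt"] == "0" and m["pricing"]["completion"] == "0"
    ]

# Sample entries mimicking the API response (not live data)
sample = [
    {"id": "mistralai/mistral-tiny",
     "pricing": {"prompt": "0.00000025", "completion": "0.00000025"}},
    {"id": "meta-llama/llama-4-scout:free",
     "pricing": {"prompt": "0", "completion": "0"}},
]
print(free_models(sample))  # -> ['meta-llama/llama-4-scout:free']
```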
## Installation

To install aicommit, use the following npm command:

```bash
npm install -g @suenot/aicommit
```

For Rust users, you can install using cargo:

```bash
cargo install aicommit
```
## Quick Start

1. Set up a provider:

   ```bash
   aicommit --add-provider
   ```

2. Generate a commit message:

   ```bash
   git add .
   aicommit
   ```

3. Or stage and commit in one step:

   ```bash
   aicommit --add
   ```
## Provider Management

Add a provider in interactive mode:

```bash
aicommit --add-provider
```

Add providers in non-interactive mode:

```bash
# Add OpenRouter provider
aicommit --add-provider --add-openrouter --openrouter-api-key "your-api-key" --openrouter-model "mistralai/mistral-tiny"

# Add Ollama provider
aicommit --add-provider --add-ollama --ollama-url "http://localhost:11434" --ollama-model "llama2"

# Add OpenAI-compatible provider
aicommit --add-provider --add-openai-compatible \
  --openai-compatible-api-key "your-api-key" \
  --openai-compatible-api-url "https://api.deep-foundation.tech/v1/chat/completions" \
  --openai-compatible-model "gpt-4o-mini"
```

Optional parameters for non-interactive mode:

- `--max-tokens` - Maximum number of tokens (default: 50)
- `--temperature` - Controls randomness (default: 0.3)

List all configured providers:

```bash
aicommit --list
```

Set the active provider:

```bash
aicommit --set <provider-id>
```
## Version Management

aicommit supports automatic version management with the following features:

- Automatic version incrementation using a version file:

  ```bash
  aicommit --version-file version --version-iterate
  ```

- Synchronize the version with Cargo.toml:

  ```bash
  aicommit --version-file version --version-iterate --version-cargo
  ```

- Synchronize the version with package.json:

  ```bash
  aicommit --version-file version --version-iterate --version-npm
  ```

- Update the version on GitHub (creates a new tag):

  ```bash
  aicommit --version-file version --version-iterate --version-github
  ```

You can combine these flags to update multiple files at once:

```bash
aicommit --version-file version --version-iterate --version-cargo --version-npm --version-github
```
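The version-file bump itself amounts to incrementing the patch component of a `MAJOR.MINOR.PATCH` string. A minimal sketch, assuming simple semver-style contents (aicommit's exact semantics may differ):

```python
def bump_patch(version):
    """Increment the patch component of a MAJOR.MINOR.PATCH string,
    the kind of bump --version-iterate applies to the version file.
    """
    major, minor, patch = version.strip().split(".")
    return f"{major}.{minor}.{int(patch) + 1}"
```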
## VS Code Extension

aicommit now includes a VS Code extension for seamless integration with the editor:

1. Navigate to the vscode-extension directory:

   ```bash
   cd vscode-extension
   ```

2. Install the extension locally for development:

   ```bash
   code --install-extension aicommit-vscode-0.1.0.vsix
   ```

Or build the extension package manually:

```bash
# Install vsce if not already installed
npm install -g @vscode/vsce

# Package the extension
vsce package
```

Once installed, you can generate commit messages directly from the Source Control view in VS Code by clicking the "AICommit: Generate Commit Message" button.

See the VS Code Extension README for more details.
## Configuration

The configuration file is stored at `~/.aicommit.json`. You can edit it directly with:

```bash
aicommit --config
```

### Global Configuration

The configuration file supports the following global settings:

```json
{
  "providers": [...],
  "active_provider": "provider-id",
  "retry_attempts": 3
}
```

- `retry_attempts`: Number of retry attempts if the provider fails (default: 3)
  - Waits 5 seconds between attempts
  - Shows informative messages about retry progress
  - Can be adjusted based on your needs (e.g., set to 5 for less stable providers)
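The retry behavior described above amounts to a loop like the following sketch. `generate` stands in for the provider call; none of these names are aicommit's actual API:

```python
import time

def with_retries(generate, attempts=3, wait_seconds=5):
    """Call the provider up to `attempts` times, waiting between failures,
    mirroring the retry_attempts setting described above.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return generate()
        except Exception as err:
            last_error = err
            print(f"Attempt {attempt}/{attempts} failed: {err}")
            if attempt < attempts:
                time.sleep(wait_seconds)
    raise RuntimeError("all retry attempts failed") from last_error
```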
### Provider Configuration

Each provider can be configured with the following settings:

- `max_tokens`: Maximum number of tokens in the response (default: 200)
- `temperature`: Controls randomness in the response (0.0-1.0, default: 0.3)

Example configuration with all options:

```json
{
  "providers": [{
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "provider": "openrouter",
    "api_key": "sk-or-v1-...",
    "model": "mistralai/mistral-tiny",
    "max_tokens": 200,
    "temperature": 0.3
  }],
  "active_provider": "550e8400-e29b-41d4-a716-446655440000",
  "retry_attempts": 3
}
```
For OpenRouter, token costs are automatically fetched from their API. For Ollama, you can specify your own costs if you want to track usage.
## Supported LLM Providers

### Simple Free OpenRouter
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "simple_free_openrouter",
