Pommel

Local-first semantic code search for AI coding agents.


v0.7.3 - Configurable timeouts for cold starts and slow connections!

Pommel maintains a vector database of your code, enabling fast semantic search without loading files into context. Designed to complement AI coding assistants by providing targeted code discovery.

Features

  • Hybrid search - Combines semantic vector search with keyword search (FTS5) using Reciprocal Rank Fusion for best-of-both-worlds results.
  • Intelligent re-ranking - Heuristic signals boost results based on name matches, exact phrases, file paths, recency, and code structure.
  • Smart chunk splitting - Automatically splits large methods/functions with overlap to stay within embedding context limits. Multiple split matches boost result scores.
  • Semantic code search - Find code by meaning, not just keywords. Search for "rate limiting logic" and find relevant implementations regardless of naming conventions.
  • Always-fresh file watching - Automatic file system monitoring keeps your index synchronized with code changes. No manual reindexing required.
  • Multi-level chunks - Search at file, class/module, or method/function granularity for precise results.
  • Minified file detection - Automatically skips minified JavaScript/CSS files that produce low-quality chunks.
  • Low latency local embeddings - All processing happens locally via Ollama with Jina Code Embeddings v2 (768-dim vectors).
  • Context savings metrics - See how much context window you're saving compared to grep-based approaches with --metrics.
  • JSON output for agents - All commands support --json flag for structured output, optimized for AI agent consumption.
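The chunk-splitting behavior described above can be sketched roughly as follows. This is a minimal illustration, not Pommel's implementation: the window size and overlap values are invented for the example, and real splitting would work on tokens and syntax boundaries rather than raw lines.

```python
def split_with_overlap(lines, max_lines=40, overlap=8):
    """Split an oversized chunk into overlapping windows so that no
    statement is stranded at a boundary (sizes here are illustrative)."""
    if len(lines) <= max_lines:
        return [lines]
    chunks = []
    step = max_lines - overlap
    for start in range(0, len(lines), step):
        chunks.append(lines[start:start + max_lines])
        if start + max_lines >= len(lines):
            break
    return chunks

# A 100-line function becomes three windows, each sharing 8 lines
# with its neighbor:
body = [f"line {i}" for i in range(100)]
parts = split_with_overlap(body)
```

Because the windows overlap, a query matching code near a split point can match more than one window, which is how multiple split matches end up boosting a result's score.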

Installation

Quick Install (Recommended)

macOS / Linux:

curl -fsSL https://raw.githubusercontent.com/dbinky/Pommel/main/scripts/install.sh | bash

Windows (PowerShell):

irm https://raw.githubusercontent.com/dbinky/Pommel/main/scripts/install.ps1 | iex

This will:

  • Download pre-built binaries (or build from source on Unix)
  • Install pm and pommeld to your PATH
  • Install Ollama if not present
  • Pull the embedding model (~300MB)

Prerequisites

Ollama is required for generating embeddings. The install scripts handle this automatically, but you can install manually:

# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Windows (winget)
winget install Ollama.Ollama

Manual Install

Download binaries from releases:

| Platform | Architecture | CLI | Daemon |
|----------|--------------|-----|--------|
| macOS | Intel | pm-darwin-amd64 | pommeld-darwin-amd64 |
| macOS | Apple Silicon | pm-darwin-arm64 | pommeld-darwin-arm64 |
| Linux | x64 | pm-linux-amd64 | pommeld-linux-amd64 |
| Windows | x64 | pm-windows-amd64.exe | pommeld-windows-amd64.exe |

Then pull the embedding model:

ollama pull unclemusclez/jina-embeddings-v2-base-code

Building from Source

# Clone and build
git clone https://github.com/dbinky/Pommel.git
cd Pommel
make build

# Install to PATH (Unix)
cp bin/pm bin/pommeld ~/.local/bin/

Quick Start

# Navigate to your project
cd your-project

# Initialize Pommel
pm init

# Start the daemon (begins indexing automatically)
pm start

# Search for code semantically
pm search "user authentication"

# Check indexing status
pm status

CLI Commands

pm init

Initialize Pommel in the current directory. Creates a .pommel/ directory with configuration files.

pm init                    # Initialize with defaults
pm init --auto             # Auto-detect languages and configure
pm init --claude           # Also add usage instructions to CLAUDE.md
pm init --start            # Initialize and start daemon immediately

pm start / pm stop

Control the Pommel daemon for the current project.

pm start                   # Start daemon in background
pm start --foreground      # Start in foreground (for debugging)
pm stop                    # Stop the running daemon

pm search <query>

Hybrid search across the codebase. Combines semantic vector search with keyword matching, then re-ranks results using code-aware heuristics.

# Basic search
pm search "authentication middleware"

# Limit results
pm search "database connection" --limit 20

# Filter by chunk level
pm search "error handling" --level method
pm search "service classes" --level class

# Filter by path
pm search "api handler" --path src/api/

# JSON output (for agents)
pm search "user validation" --json --limit 5

# Verbose output with match reasons and score breakdown
pm search "rate limiting" --verbose

# Show context savings metrics
pm search "database queries" --metrics

# Disable hybrid search (vector-only)
pm search "config parsing" --no-hybrid

# Disable re-ranking stage
pm search "utility functions" --no-rerank

Options:

| Flag | Short | Description |
|------|-------|-------------|
| --limit | -n | Maximum number of results (default: 10) |
| --level | -l | Chunk level filter: file, class, method |
| --path | -p | Path prefix filter |
| --json | -j | Output as JSON (agent-friendly) |
| --verbose | -v | Show detailed match reasons and score breakdown |
| --metrics | | Show context savings vs grep baseline |
| --no-hybrid | | Disable hybrid search (vector-only mode) |
| --no-rerank | | Disable re-ranking stage |
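The Reciprocal Rank Fusion step that combines the vector and keyword rankings can be sketched like this. The `k = 60` constant matches the default `rrf_k` in the configuration section, but the input lists and everything else here are illustrative, not Pommel's code.

```python
def rrf_fuse(vector_ranked, keyword_ranked, k=60):
    """Fuse two ranked lists with Reciprocal Rank Fusion: each list
    contributes 1 / (k + rank) for every document it ranks."""
    scores = {}
    for ranking in (vector_ranked, keyword_ranked):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A chunk ranked well by BOTH searches beats one found by only one of them:
fused = rrf_fuse(
    vector_ranked=["auth.py", "login.py", "util.py"],
    keyword_ranked=["auth.py", "README.md", "login.py"],
)
# fused[0] == "auth.py"
```

RRF needs no score normalization across the two searches, which is why it works well for fusing cosine similarities with BM25 scores that live on different scales.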

Example JSON Output:

{
  "query": "user authentication",
  "results": [
    {
      "id": "chunk-abc123",
      "file": "src/auth/middleware.py",
      "start_line": 15,
      "end_line": 45,
      "level": "class",
      "language": "python",
      "name": "AuthMiddleware",
      "score": 0.89,
      "content": "class AuthMiddleware:\n    ...",
      "match_source": "both",
      "match_reasons": ["semantic similarity", "keyword match via BM25", "contains 'auth' in name"],
      "score_details": {
        "vector_score": 0.85,
        "keyword_score": 0.72,
        "rrf_score": 0.89
      },
      "parent": {
        "id": "chunk-parent123",
        "name": "auth.middleware",
        "level": "file"
      }
    }
  ],
  "total_results": 1,
  "search_time_ms": 42,
  "hybrid_enabled": true,
  "rerank_enabled": true
}
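An agent consuming the `--json` output above can reduce it to just the file spans worth opening. The field names below follow the example payload; the score cutoff is an arbitrary assumption for illustration.

```python
import json

def top_spans(payload: str, min_score: float = 0.5):
    """Extract (file, start_line, end_line) tuples from `pm search --json`
    output, keeping only results above an (arbitrary) score cutoff."""
    data = json.loads(payload)
    return [
        (r["file"], r["start_line"], r["end_line"])
        for r in data["results"]
        if r["score"] >= min_score
    ]

sample = '''{"query": "user authentication",
  "results": [{"file": "src/auth/middleware.py", "start_line": 15,
               "end_line": 45, "score": 0.89}],
  "total_results": 1}'''
spans = top_spans(sample)
# spans == [("src/auth/middleware.py", 15, 45)]
```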

pm status

Show daemon status and indexing statistics.

pm status                  # Human-readable output
pm status --json           # JSON output

Example Output:

{
  "daemon": {
    "running": true,
    "pid": 12345,
    "uptime_seconds": 3600
  },
  "index": {
    "total_files": 342,
    "total_chunks": 4521,
    "last_indexed": "2025-01-15T10:30:00Z",
    "pending_changes": 0
  },
  "health": {
    "status": "healthy",
    "embedding_model": "loaded",
    "database": "connected"
  }
}

pm reindex

Force a full re-index of the project. Useful after major refactors or if the index becomes corrupted.

pm reindex                 # Reindex all files
pm reindex --path src/     # Reindex specific path only

pm config

View or modify project configuration.

pm config                              # Show current configuration
pm config get embedding.ollama_url     # Get specific setting
pm config set watcher.debounce_ms 1000 # Update setting
pm config set daemon.port 7421         # Change daemon port
pm config set search.default_levels method,class,file  # Set search levels

Configuration

Configuration is stored in .pommel/config.yaml:

version: 1

# Chunk levels to generate
chunk_levels:
  - method
  - class
  - file

# File patterns to include
include_patterns:
  - "**/*.cs"
  - "**/*.py"
  - "**/*.js"
  - "**/*.ts"
  - "**/*.jsx"
  - "**/*.tsx"

# File patterns to exclude
exclude_patterns:
  - "**/node_modules/**"
  - "**/bin/**"
  - "**/obj/**"
  - "**/__pycache__/**"
  - "**/.git/**"
  - "**/.pommel/**"

# File watcher settings
watcher:
  debounce_ms: 500           # Debounce delay for file changes
  max_file_size: 1048576     # Skip files larger than this (bytes)

# Daemon settings
daemon:
  host: "127.0.0.1"
  port: 7420
  log_level: "info"

# Embedding settings
embedding:
  model: "unclemusclez/jina-embeddings-v2-base-code"
  ollama_url: "http://localhost:11434"
  batch_size: 32
  cache_size: 1000

# Search defaults
search:
  default_limit: 10
  default_levels:
    - method
    - class

# Hybrid search settings (v0.5.0+)
hybrid_search:
  enabled: true              # Enable hybrid vector + keyword search
  rrf_k: 60                  # RRF constant (higher = more weight to lower ranks)
  vector_weight: 1.0         # Weight for vector search results
  keyword_weight: 1.0        # Weight for keyword search results

# Re-ranker settings (v0.5.0+)
reranker:
  enabled: true              # Enable heuristic re-ranking
  model: "heuristic"         # Re-ranking model (currently only "heuristic")
  timeout_ms: 100            # Timeout for re-ranking
  candidates: 50             # Number of candidates to re-rank
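The heuristic re-ranker configured above can be pictured as a set of additive boosts applied on top of the fused base score. The signal weights in this sketch are made up for illustration and are not Pommel's actual tuning.

```python
def rerank(results, query_terms):
    """Re-sort results after adding illustrative heuristic boosts
    for name and path matches (weights here are invented)."""
    def boosted(r):
        score = r["score"]
        name = r.get("name", "").lower()
        path = r.get("file", "").lower()
        for term in query_terms:
            if term in name:
                score += 0.10   # name match: strong signal
            if term in path:
                score += 0.05   # path match: weaker signal
        return score
    return sorted(results, key=boosted, reverse=True)

candidates = [
    {"file": "src/util.py", "name": "helper", "score": 0.80},
    {"file": "src/auth/session.py", "name": "AuthSession", "score": 0.78},
]
ranked = rerank(candidates, ["auth"])
# "AuthSession" overtakes the higher base score via name + path boosts
```

Keeping the boosts additive and cheap is what lets the re-rank stage run over 50 candidates well inside a 100 ms budget.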

Embedding Providers

Pommel supports multiple embedding providers for flexibility:

| Provider | Type | Cost | Best For |
|----------|------|------|----------|
| Local Ollama | Local | Free | Default, privacy-focused |
| Remote Ollama | Remote | Free | Offload to server/NAS |
| OpenAI | API | $0.02/1M tokens | Easy setup, existing key |
| Voyage AI | API | $0.06/1M tokens | Code-specialized |
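With the local Ollama provider, fetching an embedding boils down to one HTTP call. This sketch targets Ollama's standard `/api/embeddings` endpoint with the model name from the configuration above; it is an assumption about how a client could talk to Ollama directly, not Pommel's internal code.

```python
import json
import urllib.request

def build_embed_request(text,
                        model="unclemusclez/jina-embeddings-v2-base-code"):
    """Build the JSON body for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

def embed(text, url="http://localhost:11434/api/embeddings"):
    """POST the request to a local Ollama server and return the vector."""
    payload = json.dumps(build_embed_request(text)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# vector = embed("def authenticate(user): ...")  # 768-dim list of floats
```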

Quick Configuration

# Interactive setup (recommended)
pm config provider

# Or set directly
pm config provider ollama          