# Reporecall
Local codebase memory for Claude Code and MCP - AST indexing, call graphs, hybrid search. 0 tool calls, 3-8x fewer tokens.
██████╗ ███████╗██████╗ ██████╗ ██████╗ ███████╗ ██████╗ █████╗ ██╗ ██╗
██╔══██╗██╔════╝██╔══██╗██╔═══██╗██╔══██╗██╔════╝██╔════╝██╔══██╗██║ ██║
██████╔╝█████╗ ██████╔╝██║ ██║██████╔╝█████╗ ██║ ███████║██║ ██║
██╔══██╗██╔══╝ ██╔═══╝ ██║ ██║██╔══██╗██╔══╝ ██║ ██╔══██║██║ ██║
██║ ██║███████╗██║ ╚██████╔╝██║ ██║███████╗╚██████╗██║ ██║███████╗███████╗
╚═╝ ╚═╝╚══════╝╚═╝ ╚═════╝ ╚═╝ ╚═╝╚══════╝ ╚═════╝╚═╝ ╚═╝╚══════╝╚══════╝
proofofwork
Local codebase memory and project knowledge for Claude Code
Claude Code greps your codebase one file at a time. Reporecall gives it the whole picture instantly. Pre-indexed search, call graph, and structured project memory - injected before Claude starts thinking. No grep chains, no wasted tool calls, no re-reading files it already saw. (AST chunking, hybrid FTS + vector search, bidirectional call graph, token-budgeted context assembly.)
Indexes locally, remembers across sessions, runs entirely on your machine. One hook, 18 tools, zero cloud dependency.
## The Problem
You ask Claude: "how does the credit refund work when a job fails?"
Claude doesn't know your codebase. So it starts searching:
Grep "refundCredits" → found credit-utils.ts
Read credit-utils.ts → ok, but who calls this?
Grep "refundCredits" → found job-completion.ts
Read job-completion.ts → found processJobCompletion, but what about failures?
Grep "processJobFailure" → found another file
Read that file too → finally has the picture
6 tool calls. 4 round-trips. ~15,000 tokens. And it still missed the error handler sites.
The same happens across sessions. Claude has no memory of the architectural decision you explained yesterday, the naming convention you corrected last week, or the half-finished refactor you left mid-flight. Every session starts from zero.
## With Reporecall
Same question. Reporecall's hook fires before the prompt reaches Claude:
→ Search index: "credit refund job fails" (5ms, keyword + vector)
→ Top hit: refundCredits()
→ Call graph expansion: who calls refundCredits?
├─ processJobCompletion() (job-completion.ts)
└─ processJobFailure() (job-completion.ts)
→ Inject context into prompt (~2K tokens)
0 tool calls. 1 round-trip. Claude already has the full picture - the function, its callers, and the failure path - before it writes a single word.
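The caller lookup in the trace above boils down to a reverse index over the call graph. A minimal sketch, assuming a simple adjacency map from caller to callees (the types and names here are illustrative, not Reporecall's internal API):

```typescript
// Forward edges: caller -> callees. Reversing them answers
// "who calls X?" in a single map lookup, no grep required.
type CallGraph = Map<string, string[]>;

function buildReverseIndex(graph: CallGraph): CallGraph {
  const callers: CallGraph = new Map();
  for (const [caller, callees] of graph) {
    for (const callee of callees) {
      if (!callers.has(callee)) callers.set(callee, []);
      callers.get(callee)!.push(caller);
    }
  }
  return callers;
}

// Edges from the credit-refund example above.
const graph: CallGraph = new Map([
  ["processJobCompletion", ["refundCredits"]],
  ["processJobFailure", ["refundCredits"]],
]);

const callersOf = buildReverseIndex(graph);
// callersOf.get("refundCredits") now lists both call sites.
```

Because the index is built once at indexing time, every subsequent caller query is an O(1) lookup instead of a fresh grep over the tree.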
┌────────────────────┬──────────────────────┬─────────────────┐
│ │ Without Reporecall │ With Reporecall │
├────────────────────┼──────────────────────┼─────────────────┤
│ Tool calls │ 6 │ 0 │
│ Round-trips │ 4 │ 1 │
│ Tokens consumed │ ~15,000 │ ~2,000 │
│ Latency │ seconds │ ~5ms │
│ Found the callers │ after 3 extra greps │ automatically │
│ Found error sites │ no │ yes (10 files) │
└────────────────────┴──────────────────────┴─────────────────┘
## Quick Start
```bash
npm install -g @proofofwork-agency/reporecall
reporecall init    # creates .memory/, hooks, MCP config
reporecall index   # indexes your codebase
reporecall serve   # starts daemon with file watcher
```
Then ask Claude questions normally. The hook injects relevant code and memory context before Claude answers.
## Daily Workflow
```bash
reporecall serve   # start once, runs all day with file watching
```
Then use Claude normally:
"How does the credit refund work when a job fails?"
→ Code context injected: refundCredits(), its callers, error handlers
"What did we decide about the auth token format?"
→ Memory recalled: stored rule about JWT structure from last week
"Walk me through the payment flow"
→ R1 flow tree: payment entry point → validation → charge → receipt
"Who calls validateUserInput?"
→ Call graph expansion shows all caller sites with surrounding code
## What's New in v0.3.0

- **Memory V1.** Persistent cross-session memory stores project knowledge, coding conventions, user preferences, and working state as markdown files in `.memory/reporecall-memories/`. Four memory classes (`rule`, `fact`, `episode`, `working`) with independent token budgets. Seven new MCP tools for memory management.
- **Broad workflow search.** New `selectBroadWorkflowBundle` handles architecture and inventory queries with corpus-aware term expansion and import corroboration. R2 NDCG@10 improved from 0.058 to 0.351.
- **Target resolution catalog.** New `TargetStore` indexes symbols, file modules, endpoints, and routes with alias-based lookup. Literal-dispatch resolution: `invoke("generate-image")` resolves to the handler file.
- **Query-path performance.** A seed resolution cache eliminates 2-3 redundant `resolveSeeds()` calls per search (~8-15ms saved). A query embedding LRU cache (50 entries) saves 15-40ms per hit. Route accuracy improved from 81.5% to 87%.
- **Robustness fixes.** `sanitizeQuery` strips `<system-reminder>`, `<tool-result>`, and `antml:*` XML blocks. Short conversational directives are correctly classified as skip. Tree-sitter parse errors fall back to whole-file chunks.
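The query embedding cache mentioned in the release notes is a classic LRU. A minimal sketch using a `Map`, whose insertion order makes eviction of the least recently used entry a one-liner (the class name and 50-entry capacity framing are illustrative, not Reporecall's actual implementation):

```typescript
// Map-based LRU cache. Map preserves insertion order, so the first
// key in iteration order is always the least recently used entry.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first in iteration order).
      this.map.delete(this.map.keys().next().value!);
    }
    this.map.set(key, value);
  }
}
```

Caching embeddings this way pays off because repeated or near-identical prompts in one session re-trigger the same query strings, and a hit skips the 15-40ms embedding step entirely.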
## Features
### Code Search & Context Injection
- AST chunking - Tree-sitter parses 22 languages into functions, classes, methods, interfaces, and exports. Files with no extractable nodes fall back to file-level chunks.
- Hybrid search - FTS5 keyword search (Porter stemming, camelCase splitting) fused with cosine-similarity vector search via Reciprocal Rank Fusion.
- Call graph expansion - Top search hits are expanded through a static call graph to surface callers and callees automatically, without extra round-trips.
- Intent classification - Rule-based classifier (zero LLM tokens, <1ms) routes queries to R0 (fast), R1 (flow), R2 (broad), or SKIP.
- Token-budgeted assembly - Results are assembled into markdown code blocks under an auto-scaled token budget, with length penalty, test file demotion, and score floor filtering.
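The Reciprocal Rank Fusion step above can be sketched in a few lines: each ranking contributes `1 / (k + rank)` per document, and documents that appear high in both lists win. This is the standard RRF formula with the conventional `k = 60`; the function name and document lists are illustrative:

```typescript
// Reciprocal Rank Fusion: merge keyword (FTS) and vector rankings
// without comparing their incompatible raw scores.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // rank is 1-based: the top hit in a list contributes 1/(k+1).
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}

// A doc ranked well by both signals outranks one favored by only one.
const fused = rrfFuse([
  ["credit-utils.ts", "job-completion.ts", "billing.ts"], // keyword
  ["job-completion.ts", "billing.ts", "credit-utils.ts"], // vector
]);
```

The appeal of RRF is that it needs only ranks, never the raw FTS5 and cosine scores, which live on completely different scales.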
### Memory V1
Persistent cross-session memory layer for project knowledge, user preferences, and working state.
Four memory classes:
| Class | Purpose | Lifecycle |
| --------- | ----------------------------------------------- | ----------------------------------------------- |
| rule | Behavioral directives that override defaults | Highest injection priority, survives compaction |
| fact | Stable project knowledge and reference material | Survives compaction indefinitely |
| episode | Session-specific observations and decisions | Archived after 30 days by compaction |
| working | Transient context generated during a session | Cleared between sessions or on explicit reset |
Each class has an independent token budget, so rules are never crowded out by verbose episodes. Memories are stored as markdown files with YAML frontmatter in `.memory/reporecall-memories/`, indexed by FTS, and injected alongside code context on every hook query.
Compaction deduplicates by content fingerprint, archives stale episodes, and promotes recurring patterns from episode to fact. Pinned memories survive compaction unconditionally.
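The fingerprint-based deduplication can be sketched as hashing a normalized memory body and keeping the first memory per hash. The normalization rules below (collapse whitespace, lowercase) are assumptions for illustration; Reporecall's actual fingerprint may differ:

```typescript
import { createHash } from "node:crypto";

// Illustrative memory shape; Reporecall's frontmatter carries more fields.
interface Memory { name: string; body: string; pinned?: boolean }

function fingerprint(body: string): string {
  // Normalize so trivially reworded duplicates collapse to one hash.
  const normalized = body.trim().replace(/\s+/g, " ").toLowerCase();
  return createHash("sha256").update(normalized).digest("hex");
}

function dedupe(memories: Memory[]): Memory[] {
  const seen = new Set<string>();
  return memories.filter((m) => {
    const fp = fingerprint(m.body);
    if (seen.has(fp)) return false; // later duplicate, drop it
    seen.add(fp);
    return true;
  });
}
```

Hashing a normalized body rather than the raw text means a memory re-stored with different spacing or casing does not survive compaction as a second copy.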
7 MCP tools: recall_memories, store_memory, forget_memory, list_memories, explain_memory, compact_memories, clear_working_memory.
### MCP Integration (18 tools)
Reporecall exposes 18 MCP tools over stdio - 11 for code search and analysis, 7 for memory. The MCP server integrates with Claude Code through the auto-generated .mcp.json configuration.
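The generated configuration follows Claude Code's standard `mcpServers` shape over stdio. A minimal sketch; the server name, command, and arguments Reporecall actually writes may differ:

```json
{
  "mcpServers": {
    "reporecall": {
      "command": "reporecall",
      "args": ["mcp"]
    }
  }
}
```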
Code Search & Analysis:
| Tool | Description |
| ------------------ | ----------------------------------------------------------- |
| search_code | Search the codebase using hybrid vector + keyword search |
| find_callers | Find functions that call a given function |
| find_callees | Find functions called by a given function |
| resolve_seed | Resolve a query to seed candidates for stack tree building |
| build_stack_tree | Build a bidirectional call tree from a seed function/method |
| get_imports | Get import statements for a file |
| get_symbol | Look up code symbols by name |
| explain_flow | Explain the call flow around a query or function name |
| index_codebase | Index or re-index the codebase |
| get_stats | Get index statistics, conventions, and latency info |
| clear_index | Clear all indexed data |
Memory:
| Tool | Description |
| ---------------------- | -------------------------------------------------------------- |
| recall_memories | Search project and user memories using local keyword retrieval |
| explain_memory | Explain how memory recall would behave for a query |
| compact_memories | Refresh and compact memory indexes |
| clear_working_memory | Clear generated working memory entries |
| store_memory | Create or update a memory file |
| forget_memory | Delete a memory by name |
| list_memories | List all stored memories with metadata |
### Search Modes
- R0 - Fast Path: Hybrid keyword + vector search for direct lookups. Best for specific symbols or exact terms. Typical latency ~10ms.
- R1 - Flow Path: Bidirectional call-tree traversal
