
Codegraph

Code intelligence CLI — function-level dependency graph across 11 languages, 30-tool MCP server for AI agents, complexity metrics, architecture boundary enforcement, CI quality gates, git diff impact with co-change analysis, hybrid semantic search. Fully local, zero API keys required.

Install / Use

/learn @optave/Codegraph

README

<p align="center"> <img src="https://img.shields.io/badge/codegraph-dependency%20intelligence-blue?style=for-the-badge&logo=graphql&logoColor=white" alt="codegraph" /> </p> <h1 align="center">codegraph</h1> <p align="center"> <strong>Give your AI the map before it starts exploring.</strong> </p> <p align="center"> <a href="https://www.npmjs.com/package/@optave/codegraph"><img src="https://img.shields.io/npm/v/@optave/codegraph?style=flat-square&logo=npm&logoColor=white&label=npm" alt="npm version" /></a> <a href="https://github.com/optave/codegraph/blob/main/LICENSE"><img src="https://img.shields.io/github/license/optave/codegraph?style=flat-square&logo=opensourceinitiative&logoColor=white" alt="Apache-2.0 License" /></a> <a href="https://github.com/optave/codegraph/actions"><img src="https://img.shields.io/github/actions/workflow/status/optave/codegraph/codegraph-impact.yml?style=flat-square&logo=githubactions&logoColor=white&label=CI" alt="CI" /></a> <img src="https://img.shields.io/badge/node-%3E%3D20-339933?style=flat-square&logo=node.js&logoColor=white" alt="Node >= 20" /> </p> <p align="center"> <a href="#the-problem">The Problem</a> &middot; <a href="#what-codegraph-does">What It Does</a> &middot; <a href="#-quick-start">Quick Start</a> &middot; <a href="#-commands">Commands</a> &middot; <a href="#-language-support">Languages</a> &middot; <a href="#-ai-agent-integration-core">AI Integration</a> &middot; <a href="#-how-it-works">How It Works</a> &middot; <a href="#-recommended-practices">Practices</a> &middot; <a href="#-roadmap">Roadmap</a> </p>

The Problem

AI agents face an impossible trade-off. They either spend thousands of tokens reading files to understand a codebase's structure — blowing up their context window until quality degrades — or they assume how things work, and the assumptions are often wrong. Either way, things break. The larger the codebase, the worse it gets.

An agent modifies a function without knowing 9 files import it. It misreads what a helper does and builds logic on top of that misunderstanding. It leaves dead code behind after a refactor. The PR gets opened, and your reviewer — human or automated — flags the same structural issues again and again: "this breaks 14 callers," "that function already exists," "this export is now dead." If the reviewer catches it, that's multiple rounds of back-and-forth. If they don't, it can ship to production. Multiply that by every PR, every developer, every repo.

The information to prevent these issues exists — it's in the code itself. But without a structured map, agents lack the context to get it right consistently, reviewers waste cycles on preventable issues, and architecture degrades one unreviewed change at a time.

What Codegraph Does

Codegraph builds a function-level dependency graph of your entire codebase — every function, every caller, every dependency — and keeps it current with sub-second incremental rebuilds.

It parses your code with tree-sitter (native Rust or WASM), stores the graph in SQLite, and exposes it where it matters most:

  • MCP server — AI agents query the graph directly through 30 tools — one call instead of 30 grep/find/cat invocations
  • CLI — developers and agents explore, query, and audit code from the terminal
  • CI gates — `check` and `manifesto` commands enforce quality thresholds with exit codes
  • Programmatic API — embed codegraph in your own tools via npm install

Instead of an agent editing code without structural context and letting reviewers catch the fallout, it knows "this function has 14 callers across 9 files" before it touches anything. Dead exports, circular dependencies, and boundary violations surface during development — not during review. The result: PRs that need fewer review rounds.

Free. Open source. Fully local. Zero network calls, zero telemetry. Your code stays on your machine. When you want deeper intelligence, bring your own LLM provider — your code only goes where you choose to send it.

Three commands to a queryable graph:

```bash
npm install -g @optave/codegraph
cd your-project
codegraph build
```

No config files, no Docker, no JVM, no API keys, no accounts. Point your agent at the MCP server and it has structural awareness of your codebase.

Why it matters

| | Without codegraph | With codegraph |
|---|---|---|
| Code review | Reviewers flag broken callers, dead code, and boundary violations round after round | Structural issues are caught during development — PRs pass review with fewer rounds |
| AI agents | Modify `parseConfig()` without knowing 9 files import it — reviewer catches it | `fn-impact parseConfig` shows every caller before the edit — agent fixes it proactively |
| AI agents | Leave dead exports and duplicate helpers behind after refactors | Dead code, cycles, and duplicates surface in real time via hooks and MCP queries |
| AI agents | Produce code that works but doesn't fit the codebase structure | `context <name> -T` returns source, deps, callers, and tests — the agent writes code that fits |
| CI pipelines | Catch test failures but miss structural degradation | `check --staged` fails the build when blast radius or complexity thresholds are exceeded |
| Developers | Inherit a codebase and grep for hours to understand what calls what | `context handleAuth -T` gives the same structured view agents use |
| Architects | Draw boundary rules that erode within weeks | `manifesto` and `boundaries` enforce architecture rules on every commit |
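The CI-pipeline row above can be wired into a workflow. Below is an illustrative GitHub Actions sketch, not the `codegraph-impact.yml` workflow the project ships — job and step names are assumptions, and it assumes a plain `codegraph check` (with exit-code semantics, as described above) is appropriate for full-repo runs:

```yaml
# Illustrative CI gate (not the bundled workflow). Fails the job when
# codegraph's quality thresholds are exceeded, via non-zero exit codes.
jobs:
  codegraph-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history, so git co-change analysis has data
      - uses: actions/setup-node@v4
        with:
          node-version: 20        # project requires Node >= 20
      - run: npm install -g @optave/codegraph
      - run: codegraph build      # creates .codegraph/graph.db
      - run: codegraph check      # non-zero exit fails the build
```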

Feature comparison

<sub>Comparison last verified: March 2026. Claims verified against each repo's README/docs. Full analysis: <a href="generated/competitive/COMPETITIVE_ANALYSIS.md">COMPETITIVE_ANALYSIS.md</a></sub>

| Capability | codegraph | joern | narsil-mcp | cpg | axon | GitNexus |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| Languages | 11 | ~12 | 32 | ~10 | 3 | 13 |
| MCP server | Yes | — | Yes | Yes | Yes | Yes |
| Dataflow + CFG + AST querying | Yes | Yes | Yes¹ | Yes | — | — |
| Hybrid search (BM25 + semantic) | Yes | — | — | — | Yes | Yes |
| Git-aware (diff impact, co-change, branch diff) | All 3 | — | — | — | All 3 | — |
| Dead code / role classification | Yes | — | Yes | — | Yes | — |
| Incremental rebuilds | O(changed) | — | O(n) | — | Yes | Commit-level⁴ |
| Architecture rules + CI gate | Yes | — | — | — | — | — |
| Security scanning (SAST / vuln detection) | Intentionally out of scope² | Yes | Yes | Yes | — | — |
| Zero config, npm install | Yes | — | Yes | — | Yes | Yes |
| Graph export (GraphML / Neo4j / DOT) | Yes | Yes | — | — | — | — |
| Open source + commercial use | Yes (Apache-2.0) | Yes (Apache-2.0) | Yes (MIT/Apache-2.0) | Yes (Apache-2.0) | Source-available³ | Non-commercial⁵ |

<sup>¹ narsil-mcp added CFG and dataflow in recent versions. ² Codegraph focuses on structural understanding, not vulnerability detection — use dedicated SAST tools (Semgrep, CodeQL, Snyk) for that. ³ axon claims MIT in pyproject.toml but has no LICENSE file in the repo. ⁴ GitNexus skips re-index if the git commit hasn't changed, but re-processes the entire repo when it does — no per-file incremental parsing. ⁵ GitNexus uses the PolyForm Noncommercial 1.0.0 license.</sup>

What makes codegraph different

| | Differentiator | In practice |
|---|---|---|
| 🤖 | AI-first architecture | 30-tool MCP server — agents query the graph directly instead of scraping the filesystem. One call replaces 20+ grep/find/cat invocations |
| 🏷️ | Role classification | Every symbol auto-tagged as entry/core/utility/adapter/dead/leaf — agents understand a symbol's architectural role without reading surrounding code |
| 🔬 | Function-level, not just files | Traces `handleAuth()` → `validateToken()` → `decryptJWT()` and shows that 14 callers across 9 files break if `decryptJWT` changes |
| | Always-fresh graph | Three-tier change detection: journal (O(changed)) → mtime+size (O(n) stats) → hash (O(changed) reads). Sub-second rebuilds — agents work with current data |
| 💥 | Git diff impact | `codegraph diff-impact` shows changed functions, their callers, and full blast radius — enriched with historically coupled files from git co-change analysis. Ships with a GitHub Actions workflow |
| 🌐 | Multi-language, one graph | JS/TS + Python + Go + Rust + Java + C# + PHP + Ruby + HCL in a single graph — agents don't need per-language tools |
| 🧠 | Hybrid search | BM25 keyword + semantic embeddings fused via RRF — hybrid (default), semantic, or keyword mode; multi-query via "auth; token; JWT" |
| 🔬 | Dataflow + CFG | Track how data flows through functions (flows_to, returns, mutates) and visualize intraprocedural control flow graphs for all 11 languages |
| 🔓 | Fully local, zero cost | No API keys, no accounts, no network calls. Optionally bring your own LLM provider — your code only goes where you choose |
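The hybrid-search row mentions reciprocal rank fusion (RRF). As a generic illustration of that technique — not codegraph's actual implementation, and with illustrative function names — each ranked list contributes 1/(k + rank) per result, and the sums are re-sorted:

```javascript
// Reciprocal rank fusion: merge ranked result lists (e.g. BM25 and
// semantic rankings) by summing 1/(k + rank) for each item. k = 60 is
// the constant commonly used in the RRF literature.
function rrfFuse(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      // i is 0-based, so rank = i + 1
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest fused score first
    .map(([id]) => id);
}

// "y" ranks highly in both lists, so it wins the fused ranking.
console.log(rrfFuse([["x", "y", "z"], ["y", "z", "x"]])); // → ["y", "x", "z"]
```

Items that appear near the top of several rankings beat items that top only one — which is why a symbol matching both the keyword and semantic queries surfaces first.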


🚀 Quick Start

```bash
npm install -g @optave/codegraph
cd your-project
codegraph build        # → .codegraph/graph.db created
```

That's it. The graph is ready. Now connect your AI agent.

For AI agents (primary use case)

Connect directly via MCP — your agent gets 30 tools to query the graph:

```bash
codegraph mcp          # 30-tool MCP server — AI queries the graph directly
```
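For MCP clients that use a JSON configuration file, registering the server typically looks like the sketch below — the exact file location and top-level key depend on your client, and this assumes `codegraph mcp` speaks MCP over stdio as a globally installed binary:

```json
{
  "mcpServers": {
    "codegraph": {
      "command": "codegraph",
      "args": ["mcp"]
    }
  }
}
```

Once registered, the agent can call the graph tools directly instead of shelling out to grep/find/cat.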

Or add codegraph to your agent's instructions (e.g. CLAUDE.md):

Before modifying code, always:
1. `codegraph where <n
