# Engram v1.1.2

AI Intelligence Platform -- knowledge graph + semantic search + reasoning + learning in a single binary.


## What is Engram?
Engram is a self-hosted AI knowledge engine that combines graph storage, semantic search, logical reasoning, and continuous learning into a single Rust binary with a single .brain file. No cloud, no external dependencies.
- Single binary -- no runtime dependencies, no Docker, no cloud
- Single file -- one `.brain` file is your entire knowledge base. Copy = backup, move = migrate
- No external database -- custom mmap storage, everything built in
- Hybrid search -- BM25 full-text + HNSW vector similarity + bitmap filtering
- Confidence lifecycle -- knowledge strengthens with confirmation, weakens with time, corrects on contradiction
- Inference engine -- forward/backward chaining, rule evaluation, transitive reasoning
- Ingest pipeline -- NER (GLiNER2 ONNX, GPU-accelerated), entity resolution, conflict detection, PDF/HTML/table extraction
- Multi-agent debate -- 7 analysis modes with War Room live dashboard and 3-layer synthesis
- Chat system -- 47 tools across 8 clusters (analysis, investigation, reporting, temporal, assessment)
- Assessment engine -- Bayesian confidence with living assessments and evidence boards
- Temporal facts -- valid_from / valid_to on edges with automatic extraction
- Contradiction detection -- automatic conflict detection with resolution workflows
- Knowledge mesh -- peer-to-peer sync with ed25519 identity and trust scoring
- Built-in web UI -- Leptos WASM frontend with 3D graph visualization, onboarding wizard, and SSE live updates
- Multiple APIs -- HTTP REST (230+ endpoints), MCP, gRPC, A2A, LLM tool-calling
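The confidence lifecycle above (strengthen on confirmation, weaken over time, correct on contradiction) can be sketched in a few lines. This is an illustrative model only, not Engram's actual internals; the half-life, reinforcement strength, and contradiction penalty are assumed parameters:

```python
def decay(confidence: float, days_since_seen: float, half_life: float = 90.0) -> float:
    """Knowledge weakens over time: exponential decay with an assumed half-life."""
    return confidence * 0.5 ** (days_since_seen / half_life)

def reinforce(confidence: float, strength: float = 0.3) -> float:
    """Confirmation strengthens knowledge, asymptotically approaching 1.0."""
    return confidence + (1.0 - confidence) * strength

def contradict(confidence: float, penalty: float = 0.5) -> float:
    """A contradiction sharply reduces confidence, flagging the fact for review."""
    return confidence * (1.0 - penalty)

c = 0.8
c = decay(c, days_since_seen=90.0)  # one half-life: 0.8 -> 0.4
c = reinforce(c)                    # confirmed again: 0.4 -> 0.58
print(round(c, 2))                  # -> 0.58
```

The key property this models: a fact that is never re-confirmed fades toward the archive threshold, while a repeatedly confirmed fact stays near full confidence.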
## Who is Engram for?
- **As a backend memory layer** -- integrate Engram into your AI pipeline via REST, MCP, or gRPC. Use the onboarding wizard once, then run headless.
- **As an intelligence workbench** -- ingest documents, build knowledge graphs, run multi-agent debate, assess with Bayesian confidence. Full web UI with interactive 3D graph, chat, and War Room.
## Quick Start
### 1. Download
Download the latest release from Releases.
| Platform | Download |
|----------|----------|
| Windows x86_64 | engram-windows-x86_64.zip |
| Linux x86_64 | engram-linux-x86_64.zip |
| Linux aarch64 | engram-linux-aarch64.zip |
| macOS aarch64 | engram-macos-aarch64.zip |
Unzip and run. The web UI frontend is bundled inside the zip.
### 2. Start

```shell
engram serve my.brain
# HTTP API + Web UI: http://localhost:3030
```
### 3. Configure
Open http://localhost:3030 -- the onboarding wizard guides you through setup.

We recommend Gemma 4 as the LLM. Run it locally with Ollama:

```shell
ollama pull gemma4:e4b
```
Any OpenAI-compatible LLM endpoint works (Ollama, vLLM, OpenAI, Azure, etc.).
## Web UI
Four sections accessible after login:
- Knowledge -- interactive 3D graph explorer, entity search, Knowledge Chat with 47 tools
- Insights -- knowledge stats, contradictions, documents, intelligence gaps
- Debate -- 7 AI analysis modes: Analyze, Red Team, Outcome Engineering, Scenario Forecast, Stakeholder Simulation, Pre-mortem, Decision Matrix
- System -- hardware, embeddings, NER, LLM config, web search providers, ingestion sources, domain taxonomy
## CLI Reference
| Command | Description |
|---------|-------------|
| `engram create [path]` | Create a new `.brain` file |
| `engram store <label> [path]` | Store a node |
| `engram relate <from> <rel> <to> [path]` | Create a relationship |
| `engram query <label> [depth] [path]` | Query and traverse edges |
| `engram search <query> [path]` | Search (BM25, filters, boolean) |
| `engram serve [path] [addr]` | Start HTTP + gRPC server |
| `engram mcp [path]` | Start MCP server (stdio) |
| `engram reindex [path]` | Re-embed all nodes after a model change |
| `engram stats [path]` | Show node and edge counts |
| `engram delete <label> [path]` | Soft-delete a node |
## Documentation
| Page | Description |
|------|-------------|
| Getting Started | Download, install, first brain, quick start |
| Configuration | Onboarding wizard, LLM setup, embeddings, SearXNG |
| HTTP API | Full REST API reference (230+ endpoints) |
| MCP Server | MCP tools for Claude, Cursor, Windsurf (24 tools) |
| Python Integration | EngramClient, bulk import, LangChain, auth, debate, chat |
| SearXNG Setup | Self-hosted web search: installation, engines, rate limits |
| Architecture | System design, layers, storage engine, compute |
| Use Cases | 13 end-to-end walkthroughs with Python demos |
## Use Cases
| # | Use Case | Description |
|---|----------|-------------|
| 1 | Wikipedia Import | Build a knowledge graph from Wikipedia summaries |
| 2 | Document Import | Ingest markdown/text with metadata and entity extraction |
| 3 | Inference & Reasoning | Vulnerability propagation and SLA mismatch detection |
| 4 | Support Knowledge Base | IT support error/cause/solution graphs |
| 5 | Threat Intelligence | Threat actor, malware, CVE, and TTP graphs |
| 6 | Learning Lifecycle | Full lifecycle: store, reinforce, correct, decay, archive |
| 7 | OSINT | Open Source Intelligence with multi-source correlation |
| 8 | Fact Checker | Multi-source claim verification |
| 9 | Web Search Import | Progressive knowledge building from web search |
| 10 | NER Entity Extraction | spaCy NER pipeline for entity extraction |
| 11 | Semantic Web | JSON-LD import/export for linked data |
| 12 | Codebase Understanding | AST analysis for codebase knowledge graphs |
| 13 | Intel Analyst | OSINT intelligence dashboard with real-time ingest and gap detection |
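Use case 3 exercises the inference engine's transitive reasoning: a vulnerability in one component propagates along dependency edges to everything that transitively depends on it. A minimal forward-chaining sketch of that idea (illustrative only, not Engram's actual engine; edges here are assumed `(dependent, dependency)` pairs):

```python
def forward_chain(edges: set) -> set:
    """Apply the transitive rule (a->b and b->c imply a->c) until fixpoint."""
    facts = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(facts):
            for b2, c in list(facts):
                if b2 == b and (a, c) not in facts:
                    facts.add((a, c))
                    changed = True
    return facts

# web depends on api, api depends on db:
closure = forward_chain({("web", "api"), ("api", "db")})

# A vulnerability in "db" affects every transitive dependent:
affected = {a for a, b in closure if b == "db"}
print(sorted(affected))  # -> ['api', 'web']
```

Backward chaining runs the same rule in reverse: starting from a goal fact and searching for premises that would derive it.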
## Built with Engram
| Project | Description |
|---------|-------------|
| Intel Analyst | OSINT intelligence dashboard powered by Engram's knowledge graph, ingest pipeline, and gap detection engine |
## License
Engram is free for personal use, research, education, and non-profit organizations.
Commercial use requires a paid license. Contact sven.andreas@gmail.com for commercial licensing.
See LICENSE for full terms.