# GraphDB
A high-performance, embeddable graph database written in Go. Built on top of bbolt (B+tree key-value store), it supports concurrent queries, secondary indexes, optional hash-based sharding, and a subset of the Cypher query language — all in a single dependency-free binary.

## Features

- Directed labeled graph — nodes and edges with arbitrary JSON-like properties: `alice ---follows---> bob`, `server ---consumes---> queue`
- Node labels — first-class `:Person`, `:Movie` labels with a dedicated index and Cypher support (`MATCH (n:Person)`)
- Concurrent reads — fully parallel BFS, DFS, Cypher, and query-builder calls via MVCC
- 50 GB+ ready — bbolt memory-mapped storage with configurable `MmapSize`
- Graph algorithms — BFS, DFS, Shortest Path (unweighted & Dijkstra), All Paths, Connected Components, Topological Sort
- Fluent query builder — chainable Go API with filtering, pagination, and direction control
- Secondary indexes — O(log N) property lookups with auto-maintenance on single writes
- Composite indexes — multi-property indexes for fast compound lookups (`CreateCompositeIndex("city", "age")`)
- Unique constraints — `CreateUniqueConstraint(label, property)` enforces value uniqueness across nodes with the same label; O(1) lookup via a dedicated `idx_unique` bucket; WAL-replicated to followers
- Bloom filter for `HasEdge()` — in-memory probabilistic filter (~1.5 % false positive rate, zero false negatives) avoids disk I/O when edges definitely don't exist; rebuilt from `adj_out` on startup; the `graphdb_bloom_negatives_total` Prometheus counter tracks savings
- Cypher query language — read and write support with index-aware execution, LIMIT push-down, ORDER BY + LIMIT heap, query plan caching, `OPTIONAL MATCH`, `EXPLAIN`/`PROFILE`, parameterized queries, `CREATE` for inserting nodes/edges, `MERGE` for upsert (match-or-create) semantics with `ON CREATE SET`/`ON MATCH SET`, `MATCH ... SET` for property updates, `MATCH ... DELETE` for node removal, `SKIP` for pagination
- Query timeout — `CypherContext`/`CypherWithParamsContext` accept a `context.Context` for deadline-based cancellation at scan loop boundaries
- Transactions — `Begin`/`Commit`/`Rollback` API for multi-statement atomic operations with read-your-writes semantics
- EXPLAIN / PROFILE — query plan tree with operator types; `PROFILE` adds per-operator row counts and wall-clock timing
- OPTIONAL MATCH — left-outer-join semantics for graph patterns (unmatched bindings become `nil`)
- Byte-budgeted node cache — sharded concurrent LRU cache with memory-based eviction (default 128 MB); predictable memory footprint regardless of node sizes
- Data integrity — CRC32 (Castagnoli) checksums on all node/edge data, verified on every read, with a `VerifyIntegrity()` full scan
- Binary encoding — MessagePack property serialization (3–5× faster, 30–50% smaller than JSON) with backward-compatible format detection
- Structured logging — `log/slog` integration for all write operations, errors, and lifecycle events
- Parameterized queries — `$param` tokens in Cypher for safe substitution and plan reuse
- Prepared statement caching — bounded LRU query cache (10K entries) with a `PrepareCypher`/`ExecutePrepared`/`ExecutePreparedWithParams` API and server-side `/api/cypher/prepare` + `/api/cypher/execute` endpoints
- Streaming results — `CypherStream()` returns a lazy `RowIterator` for O(1) memory on non-sorted queries; NDJSON streaming via `POST /api/cypher/stream`
- Slow query log — configurable threshold (default 100 ms); queries exceeding the threshold are logged at WARN level with duration, row count, and truncated query text
- Cursor pagination — O(limit) cursor-based `ListNodes`/`ListEdges`/`ListNodesByLabel` APIs; no offset scanning. Server endpoints: `GET /api/nodes/cursor`, `GET /api/edges/cursor`
- Prometheus metrics — dependency-free atomic counters with Prometheus text exposition at `GET /metrics`; tracks queries, slow queries, cache hits/misses, node/edge CRUD, index lookups, and live gauges
- Batch operations — `AddNodeBatch`/`AddEdgeBatch` for bulk loading with single-fsync transactions
- Worker pool — built-in goroutine pool for concurrent query execution
- Optional sharding — hash-based partitioning across multiple bbolt files; edges co-located with source nodes for single-shard traversals
- Single-leader replication — WAL-based log shipping over gRPC with automatic leader election (hashicorp/raft), follower Applier, and exponential backoff reconnect; WAL group commit (batched fsync) for high write throughput
- Transparent write forwarding — followers automatically forward writes to the leader via HTTP; clients can connect to any node
- Health check endpoint — `GET /api/health` returns role-aware status (leader/follower/standalone) for load balancer integration
- Cluster status endpoint — `GET /api/cluster` exposes node ID, role, leader ID, and cluster topology
- Cluster dashboard — React UI page showing per-node stats, role indicators, replication progress bars, and health status with 5-second auto-refresh; an aggregator endpoint proxies to all peers
- Graceful shutdown — `SIGTERM`/`SIGINT` signal handler with ordered teardown: HTTP drain (10 s) → Raft/gRPC stop → WAL flush → bbolt close; safe for Kubernetes pod termination and `Ctrl+C`
- Management UI — built-in web console with a Cypher query editor, interactive graph visualization (cytoscape.js), index management, and a node/edge explorer
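The `HasEdge()` Bloom filter above can be sketched in miniature. The sketch below is a generic illustration using FNV-based double hashing; it is not GoraphDB's actual implementation, whose sizing, hash functions, and ~1.5 % false-positive tuning may differ:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a minimal Bloom filter: k hash probes into a fixed bit set.
// A hit may be a false positive; a miss is always definitive.
type bloom struct {
	bits []uint64
	m    uint64 // number of bits
	k    uint64 // number of probes per key
}

func newBloom(m, k uint64) *bloom {
	return &bloom{bits: make([]uint64, (m+63)/64), m: m, k: k}
}

// probe derives the i-th bit position from two values of one
// FNV-1a hash (classic double-hashing trick).
func (b *bloom) probe(key string, i uint64) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	h1 := h.Sum64()
	h2 := h1>>33 | h1<<31
	return (h1 + i*h2) % b.m
}

func (b *bloom) Add(key string) {
	for i := uint64(0); i < b.k; i++ {
		p := b.probe(key, i)
		b.bits[p/64] |= 1 << (p % 64)
	}
}

// MayContain returns false only if the key was definitely never added.
func (b *bloom) MayContain(key string) bool {
	for i := uint64(0); i < b.k; i++ {
		p := b.probe(key, i)
		if b.bits[p/64]&(1<<(p%64)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(1024, 3)
	f.Add("alice->bob:follows")
	fmt.Println(f.MayContain("alice->bob:follows")) // true
	fmt.Println(f.MayContain("carol->dave:likes"))  // almost certainly false
}
```

Because a negative answer is guaranteed correct, an edge-existence check that misses the filter can return immediately without touching disk, which is exactly the saving the `graphdb_bloom_negatives_total` counter tracks.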
## Installation

```sh
go get github.com/mstrYoda/goraphdb
```
## Quick Start

```go
package main

import (
	"context"
	"fmt"
	"log"

	graphdb "github.com/mstrYoda/goraphdb"
)

func main() {
	// Open (or create) a database.
	db, err := graphdb.Open("./my.db", graphdb.DefaultOptions())
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Add nodes with arbitrary properties.
	alice, _ := db.AddNode(graphdb.Props{"name": "Alice", "age": 30})
	bob, _ := db.AddNode(graphdb.Props{"name": "Bob", "age": 25})

	// Add a directed labeled edge.
	db.AddEdge(alice, bob, "follows", graphdb.Props{"since": "2024"})

	// Query neighbors.
	neighbors, _ := db.NeighborsLabeled(alice, "follows")
	for _, n := range neighbors {
		fmt.Println(n.GetString("name")) // Bob
	}

	// BFS traversal.
	results, _ := db.BFSCollect(alice, 3, graphdb.Outgoing)
	for _, r := range results {
		fmt.Printf("depth=%d %s\n", r.Depth, r.Node.GetString("name"))
	}

	// Cypher query.
	ctx := context.Background()
	res, _ := db.Cypher(ctx, `MATCH (a {name: "Alice"})-[:follows]->(b) RETURN b.name`)
	for _, row := range res.Rows {
		fmt.Println(row["b.name"]) // Bob
	}
}
```
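`BFSCollect` in the example above walks the graph level by level up to a maximum depth. A generic depth-tagged BFS over an in-memory adjacency list shows the idea; this is a sketch independent of the library, and `bfsCollect` and `visit` are illustrative names:

```go
package main

import "fmt"

// visit pairs a node ID with the depth at which BFS first reached it.
type visit struct {
	ID    string
	Depth int
}

// bfsCollect returns every node reachable from start within maxDepth hops,
// tagged with its discovery depth (start itself is depth 0).
func bfsCollect(adj map[string][]string, start string, maxDepth int) []visit {
	seen := map[string]bool{start: true}
	out := []visit{{start, 0}}
	frontier := []string{start}
	for depth := 1; depth <= maxDepth && len(frontier) > 0; depth++ {
		var next []string
		for _, id := range frontier {
			for _, nb := range adj[id] {
				if !seen[nb] {
					seen[nb] = true
					out = append(out, visit{nb, depth})
					next = append(next, nb)
				}
			}
		}
		frontier = next
	}
	return out
}

func main() {
	adj := map[string][]string{
		"alice": {"bob"},
		"bob":   {"carol"},
	}
	for _, v := range bfsCollect(adj, "alice", 3) {
		fmt.Printf("depth=%d %s\n", v.Depth, v.ID)
	}
	// depth=0 alice
	// depth=1 bob
	// depth=2 carol
}
```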
## Configuration

```go
opts := graphdb.Options{
	ShardCount:         1,                      // 1 = single process (default), N = hash-sharded
	WorkerPoolSize:     8,                      // goroutines for concurrent query execution
	CacheBudget:        128 * 1024 * 1024,      // 128 MB byte-budget LRU cache for hot nodes
	SlowQueryThreshold: 100 * time.Millisecond, // log queries slower than this (0 = disabled)
	NoSync:             false,                  // true = skip fsync (faster writes, risk of data loss)
	ReadOnly:           false,                  // open in read-only mode
	MmapSize:           256 * 1024 * 1024,      // 256 MB initial mmap
}
db, err := graphdb.Open("./data", opts)
```

Use `graphdb.DefaultOptions()` for sensible defaults tuned for ~50 GB datasets.
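When `ShardCount > 1`, nodes are hash-partitioned across bbolt files. The routing rule can be pictured as `hash(nodeID) % ShardCount`; the sketch below uses FNV-1a as a stand-in and is not necessarily the library's actual hash:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor picks a shard by hashing the node ID. The mapping is stable
// for a fixed shard count, so a node's edges can be stored in the same
// shard file and outgoing traversals stay single-shard.
func shardFor(nodeID uint64, shardCount int) int {
	var buf [8]byte
	for i := 0; i < 8; i++ {
		buf[i] = byte(nodeID >> (8 * i))
	}
	h := fnv.New64a()
	h.Write(buf[:])
	return int(h.Sum64() % uint64(shardCount))
}

func main() {
	for _, id := range []uint64{1, 2, 3, 42} {
		fmt.Printf("node %d -> shard %d\n", id, shardFor(id, 4))
	}
}
```

The cost of this scheme is that changing `ShardCount` remaps existing keys, which is why the shard count is a fixed open-time option rather than a runtime knob.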
## API Reference

### Node Operations

```go
// Create
id, err := db.AddNode(graphdb.Props{"name": "Alice", "age": 30})
ids, err := db.AddNodeBatch([]graphdb.Props{...}) // bulk insert (single tx)

// Read
node, err := db.GetNode(id)
name := node.GetString("name") // "Alice"
age := node.GetFloat("age")    // 30.0
exists, err := db.NodeExists(id)
count := db.NodeCount()

// Update
err = db.UpdateNode(id, graphdb.Props{"age": 31})     // merge
err = db.SetNodeProps(id, graphdb.Props{"name": "A"}) // full replace

// Delete
err = db.DeleteNode(id) // also removes all connected edges

// Scan & Filter
nodes, err := db.FindNodes(func(n *graphdb.Node) bool {
	return n.GetFloat("age") > 25
})
err = db.ForEachNode(func(n *graphdb.Node) error {
	fmt.Println(n.Props)
	return nil
})
```
### Node Labels

```go
// Create a node with labels
id, err := db.AddNodeWithLabels([]string{"Person", "Employee"}, graphdb.Props{"name": "Alice"})

// Add / remove labels on existing nodes
err = db.AddLabel(id, "Admin")
err = db.RemoveLabel(id, "Employee")

// Query labels
labels, err := db.GetLabels(id)       // ["Person", "Admin"]
has, err := db.HasLabel(id, "Person") // true

// Find all nodes with a label (index-backed)
people, err := db.FindByLabel("Person")
```
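Conceptually, the index behind `FindByLabel` is an inverted index from label to node IDs, so a lookup never scans every node. A minimal in-memory sketch with illustrative types (not the on-disk layout):

```go
package main

import "fmt"

// labelIndex maps a label to the set of node IDs carrying it.
type labelIndex map[string]map[uint64]bool

func (ix labelIndex) Add(label string, id uint64) {
	if ix[label] == nil {
		ix[label] = map[uint64]bool{}
	}
	ix[label][id] = true
}

func (ix labelIndex) Remove(label string, id uint64) {
	delete(ix[label], id)
}

// Find returns all node IDs with the label without a full node scan.
func (ix labelIndex) Find(label string) []uint64 {
	out := make([]uint64, 0, len(ix[label]))
	for id := range ix[label] {
		out = append(out, id)
	}
	return out
}

func main() {
	ix := labelIndex{}
	ix.Add("Person", 1)
	ix.Add("Person", 2)
	ix.Add("Admin", 1)
	ix.Remove("Person", 2)
	fmt.Println(ix.Find("Person")) // [1]
}
```

Keeping such an index consistent is why `AddLabel`/`RemoveLabel` are database operations rather than plain property edits: every label change must also update the index entry.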
### Transactions

```go
// Multi-statement atomic operations with read-your-writes semantics.
tx, err := db.Begin()
alice, _ := tx.AddNode(graphdb.Props{"name": "Alice"})
bob, _ := tx.AddNode(graphdb.Props{"name": "Bob"})
tx.AddEdge(alice, bob, "follows", nil)

// Read uncommitted data within the same transaction.
node, _ := tx.GetNode(alice) // visible before commit

err = tx.Commit() // atomically persists all changes
// — or —
err = tx.Rollback() // discards all changes
```
### Edge Operations

```go
// Create — alice ---follows---> bob
edgeID, err := db.AddEdge(alice, bob, "follows", graphdb.Props{"since": "2024"})
ids, err := db.AddEdgeBatch([]graphdb.Edge{...})

// Read
edge, err := db.GetEdge(edgeID)
outEdges, err := db.OutEdges(alice) // all outgoing
inEdges, err := db.InEdges(bob)     // all incoming
allEdges, err := db.Edges(alice)    // both directions
labeled, err := db.OutEdgesLabeled(alice, "follows") // outgoing with a given label
```