
ArqonDB

AI-native distributed database for agent memory and real-time state. Unifies KV, vector search, and temporal graph in a single Rust engine with Raft consensus.

Install / Use

/learn @AlbericByte/ArqonDB
About this skill

Quality Score: 0/100

Supported Platforms: Universal

README

ArqonDB


AI-native distributed database built from scratch in Rust. ArqonDB unifies key-value storage, vector search (DiskHNSW / SPFresh, PQ-encoded), and temporal graph traversal in a single engine — powered by Raft consensus, LSM-tree compaction, and a sharded metadata plane.

Why ArqonDB

  • Unified engine — KV, vector, and temporal graph in one process. No glue code between three separate systems.
  • 6x faster writes than RocksDB on single-node benchmarks with WAL durability.
  • Built for AI agents — causal graph, reactive state, and CAS primitives designed for agent memory and planning.
  • Pure Rust, zero C++ deps — single static binary, no JNI, no CGO.
  • Production topology — Raft consensus, sharded metadata, stateless gateway, Redis RESP2 compatible.
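The CAS (compare-and-swap) primitive mentioned above lets concurrent agents update shared state without lost writes. ArqonDB's actual CAS API is not shown in this README, so the sketch below uses an in-memory stand-in with illustrative names to show the semantics:

```rust
use std::collections::HashMap;

/// Hypothetical in-memory stand-in for a KV shard; ArqonDB's real CAS
/// API is not documented here, so these names are illustrative.
struct KvStore {
    map: HashMap<String, String>,
}

impl KvStore {
    fn new() -> Self {
        KvStore { map: HashMap::new() }
    }

    fn get(&self, key: &str) -> Option<String> {
        self.map.get(key).cloned()
    }

    /// Compare-and-swap: write `new` only if the current value equals
    /// `expected` (`None` = key must be absent). Returns true on success,
    /// false when another writer got there first.
    fn cas(&mut self, key: &str, expected: Option<&str>, new: &str) -> bool {
        let current = self.map.get(key).map(String::as_str);
        if current == expected {
            self.map.insert(key.to_string(), new.to_string());
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut store = KvStore::new();
    // First writer creates the key (expects absence).
    assert!(store.cas("agent:plan", None, "step-1"));
    // A stale writer still expecting absence now fails.
    assert!(!store.cas("agent:plan", None, "step-1-dup"));
    // An up-to-date writer advances the plan.
    assert!(store.cas("agent:plan", Some("step-1"), "step-2"));
    assert_eq!(store.get("agent:plan").as_deref(), Some("step-2"));
}
```

A losing writer can re-read the current value and retry, which is the usual pattern for agent planning loops built on CAS.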

Highlights

| Area | Details |
|---|---|
| Storage | LSM-tree with leveled compaction, MVCC, bloom filters, sharded block cache |
| Vector | DiskHNSW / SPFresh with PQ encoding, distributed fan-out search |
| Graph | Temporal edge traversal (BFS), GraphSST with temperature-based zoning |
| Consensus | Per-shard Raft groups + separate metadata Raft plane |
| Interfaces | gRPC, Redis RESP2, REST management API, React UI |
| SDKs | Python, Java, Rust, Go, C++, Node.js |


Performance

ArqonDB matches or outperforms RocksDB on all single-node benchmarks. Both use page-cache WAL durability (sync=false) — ArqonDB reuses its Raft log double-buffer WAL engine for standalone mode.

| Benchmark | ArqonDB | RocksDB | Ratio |
|-----------|---------|---------|-------|
| Sequential write (10K keys) | 5.29 ms | 33.99 ms | 6.4x faster |
| Sequential read (10K keys) | 4.20 ms | 9.56 ms | 2.3x faster |
| Random read (10K keys) | 5.40 ms | 9.01 ms | 1.7x faster |
| Sequential write + flush (100K x 1KB) | 105.40 ms | 462.95 ms | 4.4x faster |

cargo bench --bench kv_benchmark

Demo

(demo animation)


Architecture Overview

┌─────────────────────────────────────────────────────────────┐
│                        Clients                              │
└───────────────────────┬─────────────────────────────────────┘
                        │ gRPC
                        ▼
┌─────────────────────────────────────────────────────────────┐
│                    Gateway (stateless)                      │
│         shard-map cache + leader retry + vector merge       │
└───────────────────────┬─────────────────────────────────────┘
                        │
          ┌─────────────┴──────────────┐
          │                            │
          ▼                            ▼
┌──────────────────┐        ┌──────────────────────────────────┐
│  Metadata Plane  │        │       Data Plane                 │
│  (arqondb-meta)  │        │  (arqondb + data-node)           │
│                  │        │                                  │
│  Raft group      │        │  ShardEngine per node            │
│  MetadataState   │        │  LSM-tree per shard              │
│  ShardMap        │        │  HNSW + PQ vector index          │
│                  │        │  Raft per shard group            │
└──────────────────┘        └──────────────────────────────────┘
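The gateway stays stateless because routing is a pure function of the cached shard map: the same (column family, key) pair always hashes to the same shard, so any gateway replica agrees with the metadata plane on placement. ArqonDB's actual hashing scheme is not documented in this README; the sketch below just illustrates the idea with Rust's default hasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative shard routing: ArqonDB's ShardRouter maps (cf, key) to a
/// ShardInfo, but its real hash function is an assumption here.
fn route(cf: u32, key: &[u8], num_shards: u64) -> u64 {
    let mut h = DefaultHasher::new();
    cf.hash(&mut h);
    key.hash(&mut h);
    h.finish() % num_shards
}

fn main() {
    let shard = route(0, b"agent:42:memory", 8);
    // The result is always a valid shard index...
    assert!(shard < 8);
    // ...and deterministic, so every stateless gateway replica routes
    // the same key to the same shard without coordination.
    assert_eq!(shard, route(0, b"agent:42:memory", 8));
}
```

On a routing miss (stale shard map or a moved leader), the gateway's leader-retry logic refreshes its cache from the metadata plane and retries.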

Three Binaries

| Binary | Feature Flag | Role |
|---|---|---|
| metadata_service | (none) | Standalone metadata Raft group |
| raft_engine | data-node | Data node: ShardEngine + gRPC KV server |
| gateway | (none) | Stateless routing gateway + management UI |


Component Map

src/
├── engine/
│   ├── mem/          # MemTable: skip-list backed, MVCC-ordered
│   ├── sst/          # SST files: data blocks, index blocks, bloom filters
│   ├── wal/          # Write-ahead log: record framing + CRC
│   ├── version/      # VersionSet: LSM level management, compaction
│   ├── background/   # Background compaction and flush tasks
│   ├── vector/       # HNSW + PQ vector index: ANN search per shard
│   └── shard/        # ShardEngine: maps metadata events → local LSM shards
│
├── raft/
│   ├── node.rs       # RaftNode (public handle) + RaftCore (event loop)
│   ├── log.rs        # RaftLog: 1-indexed, sentinel at [0]
│   ├── state.rs      # RaftRole, RaftState transitions
│   └── transport.rs  # Lazy gRPC connections to peers
│
├── metadata/
│   ├── state.rs      # MetadataState: shards, CFs, node registry
│   ├── op.rs         # MetadataOp variants (CreateShard, RegisterNode, …)
│   ├── manager.rs    # MetadataManager: Raft-backed metadata
│   ├── provider.rs   # MetadataProvider trait (local vs remote)
│   └── router.rs     # ShardRouter: (cf, key) → ShardInfo
│
├── network/
│   ├── grpc_service.rs       # KV gRPC service (GrpcKvService + GrpcShardKvService)
│   ├── redis_service.rs      # Redis-compatible TCP server (RESP2 protocol)
│   ├── raft_service.rs       # Raft RPC handler
│   ├── metadata_service.rs   # Metadata gRPC service
│   ├── metadata_client.rs    # MetadataClient (remote MetadataProvider)
│   └── gateway_service.rs    # Stateless routing gateway
│
└── db/
    └── db_impl.rs    # DBImpl: write group, WAL, memtable, compaction

Getting Started

Prerequisites

  • Rust 1.85+ (rustup update stable)
  • protoc is not required; protoc-bin-vendored bundles a prebuilt binary

Build

# Library + metadata + gateway binaries
cargo build

# Data node (requires data-node feature)
cargo build --features data-node --bin raft_engine

# All binaries
cargo build --features data-node

# Build the web UI
cd src/ui && npm install && npm run build

Test

# All tests (~920 tests)
cargo test

# Integration tests (20 tests)
cargo test --test integration_test

Redis Protocol

ArqonDB includes a Redis-compatible TCP server (RedisServer) that speaks RESP2 — the same wire protocol used by Redis itself. Any existing Redis client library or redis-cli can connect without modification.
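On the wire, RESP2 frames every command as an array of bulk strings, which is why unmodified Redis clients work. The encoder below follows the published Redis protocol specification (it is not ArqonDB code):

```rust
/// Encode a command as a RESP2 array of bulk strings, per the Redis
/// protocol spec: "*<argc>\r\n" then "$<len>\r\n<arg>\r\n" per argument.
fn encode_command(args: &[&str]) -> Vec<u8> {
    let mut out = format!("*{}\r\n", args.len()).into_bytes();
    for arg in args {
        out.extend(format!("${}\r\n", arg.len()).into_bytes());
        out.extend(arg.as_bytes());
        out.extend(b"\r\n");
    }
    out
}

fn main() {
    // This is the exact byte sequence redis-cli sends for `SET key value`.
    let frame = encode_command(&["SET", "key", "value"]);
    assert_eq!(
        String::from_utf8(frame).unwrap(),
        "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n"
    );
}
```

Replies use the same framing plus simple strings (`+OK\r\n`), integers (`:1\r\n`), and the nil bulk string (`$-1\r\n`) that ArqonDB returns for absent or expired keys.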

Architecture

RedisServer is generic over the KvOps trait, so it plugs into two different positions:

Option A — inside the Gateway (recommended for production):

  redis-cli ──RESP2──► RedisServer(GatewayService)
                              │
                    metadata shard lookup
                              │
                    ┌─────────▼──────────┐
                    │ data node (leader)  │
                    └────────────────────┘

Option B — on a single data node (simple / dev):

  redis-cli ──RESP2──► RedisServer(KvService) ──► local LSM-tree

In Option A the Redis client gets exactly the same routing, leader-retry, and fault-tolerance as gRPC clients — there is no extra hop or intermediate service.
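Being generic over a KV trait is what makes the two deployments above interchangeable: the command-handling code only sees the trait, so the gateway-backed and local-engine positions differ only in which implementation is plugged in. `KvOps` is named in this README, but its actual method signatures are assumptions in this sketch:

```rust
use std::collections::HashMap;

/// Sketch of a server generic over a KV trait. The KvOps trait exists in
/// ArqonDB per the README; these method signatures are illustrative.
trait KvOps {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn set(&mut self, key: &[u8], value: &[u8]);
}

/// Stand-in for the single-node position (Option B): a local store.
struct LocalKv {
    map: HashMap<Vec<u8>, Vec<u8>>,
}

impl KvOps for LocalKv {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.map.get(key).cloned()
    }
    fn set(&mut self, key: &[u8], value: &[u8]) {
        self.map.insert(key.to_vec(), value.to_vec());
    }
}

/// The server never names a concrete backend, so a gateway-backed impl
/// (Option A) would slot in without touching this code.
struct RedisLikeServer<K: KvOps> {
    backend: K,
}

impl<K: KvOps> RedisLikeServer<K> {
    fn handle_set(&mut self, key: &str, value: &str) -> &'static str {
        self.backend.set(key.as_bytes(), value.as_bytes());
        "+OK\r\n" // RESP2 simple string
    }
    fn handle_get(&self, key: &str) -> String {
        match self.backend.get(key.as_bytes()) {
            Some(v) => {
                let s = String::from_utf8_lossy(&v).into_owned();
                format!("${}\r\n{}\r\n", s.len(), s) // RESP2 bulk string
            }
            None => "$-1\r\n".to_string(), // RESP2 nil bulk string
        }
    }
}

fn main() {
    let mut server = RedisLikeServer { backend: LocalKv { map: HashMap::new() } };
    assert_eq!(server.handle_set("k", "hello"), "+OK\r\n");
    assert_eq!(server.handle_get("k"), "$5\r\nhello\r\n");
    assert_eq!(server.handle_get("missing"), "$-1\r\n");
}
```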

Supported commands

String / key commands

| Command | Description |
|---------|-------------|
| SET key value [EX s\|PX ms\|EXAT ts\|PXAT ts\|KEEPTTL] [NX\|XX] [GET] | Store a key/value pair with optional TTL and conditional semantics |
| GET key | Get value, or (nil) if absent or expired |
| MSET key value [key value …] | Set multiple keys |
| MGET key [key …] | Get multiple values (array reply) |
| GETDEL key | Get value then delete the key |
| STRLEN key | Length of stored value (0 if absent) |
| APPEND key value | Merge value into key (append-style merge) |
| EXISTS key [key …] | Count how many of the given keys exist (expired keys not counted) |
| DEL key [key …] | Delete keys; returns count deleted |
| TYPE key | Returns "string" or "none" |

TTL / expiry commands

| Command | Description |
|---------|-------------|
| EXPIRE key seconds | Set expiry in seconds; returns 1 if set, 0 if key not found |
| PEXPIRE key milliseconds | Set expiry in milliseconds |
| EXPIREAT key unix-time-seconds | Set absolute expiry (Unix timestamp in seconds) |
| PEXPIREAT key unix-time-ms | Set absolute expiry (Unix timestamp in milliseconds) |
| TTL key | Remaining seconds; -1 = no expiry, -2 = key not found |
| PTTL key | Remaining milliseconds; -1 = no expiry, -2 = key not found |
| PERSIST key | Remove expiry; returns 1 if removed, 0 if no expiry / no key |

Expiry is enforced lazily on reads: expired keys are transparently deleted when accessed and return (nil) / 0 / "none" as appropriate.

TTL metadata is stored in an internal column family (CF 1) so it survives restarts and is replicated through Raft like any other write.
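Lazy enforcement means there is no background sweeper in the read path: the expiry deadline is checked at access time, and an expired key is deleted right then. A minimal sketch of that read-path check (the two-map layout stands in for the data CF and the internal TTL CF; names are illustrative):

```rust
use std::collections::HashMap;

/// Sketch of lazy expiry on read. The separate expiry map stands in for
/// ArqonDB's internal TTL column family (CF 1); names are illustrative.
struct ExpiringKv {
    data: HashMap<String, String>,
    expiry_ms: HashMap<String, u64>, // absolute deadlines, Unix ms
}

impl ExpiringKv {
    fn get(&mut self, key: &str, now_ms: u64) -> Option<String> {
        if let Some(deadline) = self.expiry_ms.get(key).copied() {
            if now_ms >= deadline {
                // Lazy enforcement: delete on access, answer nil.
                self.data.remove(key);
                self.expiry_ms.remove(key);
                return None;
            }
        }
        self.data.get(key).cloned()
    }
}

fn main() {
    let mut kv = ExpiringKv { data: HashMap::new(), expiry_ms: HashMap::new() };
    kv.data.insert("session".into(), "abc".into());
    kv.expiry_ms.insert("session".into(), 1_000); // expires at t = 1000 ms
    assert_eq!(kv.get("session", 500).as_deref(), Some("abc")); // still alive
    assert_eq!(kv.get("session", 1_500), None);                 // expired: nil
    assert!(!kv.data.contains_key("session"));                  // deleted lazily
}
```

Because the deadlines live in a replicated column family, every replica makes the same expiry decision after a restart or failover.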

Connection commands

| Command | Description |
|---------|-------------|
| PING [message] | Returns PONG (or echoes message) |
| ECHO message | Echo the message back |
| QUIT | Close the connection |
| SELECT db | No-op (only SELECT 0 accepted) |

Server info commands

| Command | Description |
|---------|-------------|
| DBSIZE | Returns 0 (full scan not yet implemented) |
| INFO [section] | Returns basic server info |
| COMMAND COUNT | Returns number of supported commands |
| COMMAND DOCS / INFO | Empty array (compatibility shim) |
| FLUSHDB / FLUSHALL | Returns -ERR (destructive; not supported) |

Explicitly unsupported (returns descriptive -ERR)

| Category | Commands |
|----------|----------|
| Atomic ops | INCR, DECR, SETNX, GETSET, … |
| Lists | LPUSH, RPUSH, LRANGE, … |
| Hashes | HSET, HGET, HMGET, … |
| Sets | SADD, SMEMBERS, … |
| Sorted sets | ZADD, ZRANGE, … |
| Pub/Sub | SUBSCRIBE, PUBLISH, … |
| Transactions | MULTI, EXEC, … |
| Scripting | EVAL, EVALSHA, … |
| Key iteration | KEYS, SCAN |

All key commands operate on `USE


View on GitHub

GitHub Stars: 5
Category: Data
Updated: 1d ago
Forks: 1

Languages

Rust

Security Score

90/100

Audited on Apr 7, 2026

No findings