
VelesDB

VelesDB is a local-first AI data engine written in Rust that unifies vector, full-text, and graph search in a single file with a familiar SQL-like language. Instead of sending every RAG or semantic search query to a remote cluster, VelesDB runs directly on your server, laptop, browser, mobile, or edge device — no cloud dependency, no external services.

Install / Use

/learn @cyberlife-coder/VelesDB

README

<p align="center"> <img src="velesdb_icon_pack/favicon/android-chrome-512x512.png" alt="VelesDB Logo" width="200"/> </p> <h1 align="center"> <img src="velesdb_icon_pack/favicon/favicon-32x32.png" alt="VelesDB" width="32" height="32" style="vertical-align: middle;"/> </h1> <h3 align="center"> The Local Knowledge Engine for AI Agents </h3> <p align="center"> <strong>One 6 MB binary. Three engines. One query language.</strong><br/> <em>Vector + Graph + ColumnStore &mdash; unified under VelesQL</em> </p> <p align="center"> <a href="https://github.com/cyberlife-coder/VelesDB/actions/workflows/ci.yml"><img src="https://github.com/cyberlife-coder/VelesDB/actions/workflows/ci.yml/badge.svg" alt="CI"></a> <a href="https://app.codacy.com/gh/cyberlife-coder/VelesDB/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade"><img src="https://app.codacy.com/project/badge/Grade/58c73832dd294ba38144856ae69e9cf2" alt="Codacy Badge"></a> <a href="https://crates.io/crates/velesdb-core"><img src="https://img.shields.io/crates/v/velesdb-core.svg" alt="Crates.io"></a> <a href="https://crates.io/crates/velesdb-core"><img src="https://img.shields.io/crates/d/velesdb-core.svg" alt="Crates.io Downloads"></a> <a href="https://pypi.org/project/velesdb/"><img src="https://img.shields.io/pypi/v/velesdb.svg" alt="PyPI"></a> <a href="https://www.npmjs.com/package/@wiscale/velesdb-sdk"><img src="https://img.shields.io/npm/v/@wiscale/velesdb-sdk.svg" alt="npm"></a> <img src="https://img.shields.io/badge/coverage-82.3%25-brightgreen" alt="Coverage"> <a href="https://github.com/cyberlife-coder/VelesDB/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-VelesDB_Core_1.0-blue" alt="License"></a> <a href="https://github.com/cyberlife-coder/VelesDB"><img src="https://img.shields.io/github/stars/cyberlife-coder/VelesDB?style=flat-square" alt="Stars"></a> </p> <p align="center"> <a href="https://github.com/cyberlife-coder/VelesDB/releases/tag/v1.8.0">Download 
v1.8.0</a> &bull; <a href="#getting-started-in-60-seconds">Quick Start</a> &bull; <a href="https://velesdb.com/en/">Documentation</a> &bull; <a href="https://deepwiki.com/cyberlife-coder/VelesDB">DeepWiki</a> </p>

What is VelesDB?

VelesDB is a local-first database for AI agents that fuses three engines into a single 6 MB binary:

| Engine | What it does | Performance |
|--------|--------------|-------------|
| Vector | Semantic similarity search (HNSW + AVX2/NEON SIMD) | 450us p50 end-to-end (384D, WAL ON, recall >= 96%) |
| Graph | Knowledge relationships (BFS/DFS, edge properties) | Native MATCH clause |
| ColumnStore | Structured metadata filtering (typed columns) | 130x faster than JSON scanning |

All three are queried through VelesQL — a single SQL-like language with vector, graph, and columnar extensions:

```sql
MATCH (doc:Document)-[:AUTHORED_BY]->(author:Person)
WHERE similarity(doc.embedding, $question) > 0.8
  AND author.department = 'Engineering'
RETURN author.name, doc.title
ORDER BY similarity() DESC LIMIT 5
```

Built-in Agent Memory SDK provides semantic, episodic, and procedural memory for AI agents — no external services needed.

One binary. No cloud. No glue code. Runs on server, browser, mobile, and desktop.
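To picture what the Vector engine's HNSW index accelerates, here is a tiny brute-force sketch in plain Python (illustrative only — not VelesDB internals, and assuming cosine similarity as the metric). An HNSW index returns roughly the same neighbours as this exhaustive scan, but in sub-millisecond time at scale instead of O(n):

```python
import math

def cosine_similarity(a, b):
    # Dot product normalised by the two vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def brute_force_search(query, docs, top_k=2):
    # Exhaustive O(n) scan -- the result an HNSW index approximates
    # while visiting only a logarithmic fraction of the vectors.
    scored = [(doc_id, cosine_similarity(query, vec))
              for doc_id, vec in docs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

# Toy 3-dimensional "embeddings"; real VelesDB vectors are e.g. 384D or 768D.
docs = {
    "paris": [0.9, 0.1, 0.0],
    "rust":  [0.0, 0.8, 0.6],
    "graph": [0.1, 0.2, 0.9],
}
results = brute_force_search([1.0, 0.0, 0.1], docs, top_k=1)
```

The `similarity(...)` predicate in VelesQL plays the role of `cosine_similarity` here, with the HNSW index deciding which candidates to score.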


Why VelesDB?

| Today (3 systems to maintain) | With VelesDB (1 binary) |
|-------------------------------|-------------------------|
| pgvector for embeddings | Vector Engine — 47us HNSW search (768D) |
| Neo4j for knowledge graphs | Graph Engine — MATCH clause, BFS/DFS |
| PostgreSQL/DuckDB for metadata | ColumnStore — 130x faster than JSON at 100K rows |
| Custom glue code + 3 query languages | VelesQL — one language for everything |
| 3 deployments, 3 configs, 3 backups | 6 MB binary — works offline, air-gapped |


Three Engines, One Query

<table align="center"> <tr> <td align="center" width="33%"> <h3>Vector Engine</h3> <p>Native HNSW + AVX-512/AVX2/NEON SIMD<br/><strong>47us search (768D), 19.8ns dot product</strong></p> <p><em>Semantic similarity, embeddings, RAG retrieval</em></p> </td> <td align="center" width="33%"> <h3>Graph Engine</h3> <p>Property graph with BFS/DFS traversal<br/><strong>MATCH clause, edge properties</strong></p> <p><em>Knowledge graphs, citations, co-purchase</em></p> </td> <td align="center" width="33%"> <h3>ColumnStore Engine</h3> <p>Typed columnar storage with bitmap filters<br/><strong>130x faster than JSON at 100K rows</strong></p> <p><em>Metadata filters, reference tables, catalogs</em></p> </td> </tr> </table>

The power is in the fusion. VelesQL combines all three in a single statement:

```sql
-- Vector similarity + Graph traversal + ColumnStore filter — ONE query
MATCH (doc:Document)-[:AUTHORED_BY]->(author:Person)
WHERE similarity(doc.embedding, $question) > 0.8
  AND author.department = 'Engineering'
RETURN author.name, doc.title
ORDER BY similarity() DESC
LIMIT 5
```

Agent Memory SDK

Built-in memory subsystems for AI agents — no external vector DB, no graph DB, no extra dependencies. 99 tests cover the SDK end-to-end.

```python
from velesdb import Database, AgentMemory

db = Database("./agent_data")
memory = AgentMemory(db, dimension=384)
```

Three Memory Types

| Memory | Purpose | Key methods |
|--------|---------|-------------|
| Semantic | Long-term knowledge facts | store, query, delete, store_with_ttl |
| Episodic | Event timeline with context | record, recent, older_than, recall_similar, delete |
| Procedural | Learned patterns & actions | learn, recall, reinforce, list_all, delete |

Semantic Memory — What the agent knows

```python
memory.semantic.store(1, "Paris is the capital of France", embedding)
results = memory.semantic.query(query_embedding, top_k=5)
memory.semantic.delete(1)  # Remove outdated knowledge
```

Episodic Memory — What happened and when

```python
import time

memory.episodic.record(1, "User asked about geography", int(time.time()), embedding)
events = memory.episodic.recent(limit=10)
old_events = memory.episodic.older_than(cutoff_timestamp, limit=50)
similar = memory.episodic.recall_similar(query_embedding, top_k=5)
memory.episodic.delete(1)
```

Procedural Memory — What the agent learned to do

```python
memory.procedural.learn(
    procedure_id=1, name="answer_geography",
    steps=["search memory", "retrieve facts", "compose answer"],
    embedding=task_embedding, confidence=0.8
)
matches = memory.procedural.recall(task_embedding, top_k=3, min_confidence=0.5)
all_procedures = memory.procedural.list_all()
memory.procedural.reinforce(procedure_id=1, success=True)   # confidence +0.1
memory.procedural.delete(1)
```
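The `reinforce` semantics above (confidence +0.1 on success) can be sketched in plain Python. The clamp bounds and the symmetric penalty on failure are assumptions for illustration, not the SDK's actual FixedRate implementation:

```python
def reinforce(confidence, success, rate=0.1):
    # FixedRate-style update (illustrative): reward success, penalise
    # failure, and clamp the result into [0.0, 1.0] so repeated outcomes
    # saturate rather than overflow the confidence scale.
    delta = rate if success else -rate
    return max(0.0, min(1.0, confidence + delta))

c = 0.8
c = reinforce(c, success=True)   # ~0.9
c = reinforce(c, success=True)   # clamped at 1.0
```

The other listed strategies (AdaptiveLearningRate, TemporalDecay, ContextualReinforcement) would vary `rate` by history, elapsed time, or context instead of keeping it fixed.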

Advanced features

| Feature | API |
|---------|-----|
| TTL / Auto-expiration | store_with_ttl(), record_with_ttl(), learn_with_ttl(), auto_expire() |
| Snapshots / Rollback | snapshot(), load_latest_snapshot(), list_snapshot_versions() |
| Confidence eviction | evict_low_confidence_procedures(min_confidence) |
| Reinforcement strategies | FixedRate, AdaptiveLearningRate, TemporalDecay, ContextualReinforcement |
| Serialization | serialize() / deserialize() on all memory types |
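The TTL row above can be pictured with a minimal sketch (plain Python, illustrative only — not the SDK's storage layer): each record carries an expiry timestamp, and `auto_expire()` drops anything past it.

```python
import time

class TtlStore:
    """Toy TTL store: id -> (value, absolute expiry timestamp)."""

    def __init__(self):
        self._items = {}

    def store_with_ttl(self, item_id, value, ttl_seconds, now=None):
        # Record an absolute deadline rather than a countdown, so
        # expiry checks are a single comparison.
        now = time.time() if now is None else now
        self._items[item_id] = (value, now + ttl_seconds)

    def auto_expire(self, now=None):
        # Drop every record whose deadline has passed; return the count.
        now = time.time() if now is None else now
        expired = [i for i, (_, exp) in self._items.items() if exp <= now]
        for i in expired:
            del self._items[i]
        return len(expired)

store = TtlStore()
store.store_with_ttl(1, "session context", ttl_seconds=60, now=1000.0)
store.store_with_ttl(2, "long-lived fact", ttl_seconds=3600, now=1000.0)
removed = store.auto_expire(now=1100.0)  # only id 1 has expired by t=1100
```

The `now` parameter exists only to make the sketch deterministic; a real store would read the clock itself.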

<details> <summary>Why not SQLite + Vector DB + Graph DB?</summary>

| | VelesDB Agent Memory | SQLite + pgvector + Neo4j |
|---|---|---|
| Dependencies | 0 (single binary) | 3 separate engines |
| Setup | pip install velesdb | Install, configure, connect each |
| Semantic search | Native HNSW (sub-ms) | Requires separate vector DB |
| Temporal queries | Built-in B-tree index | Manual SQL schema |
| Confidence scoring | 4 reinforcement strategies | Build from scratch |
| TTL / Auto-expiration | Built-in | Manual cleanup jobs |
| Snapshots / Rollback | Versioned with CRC32 | Custom backup logic |

</details>
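The "Versioned with CRC32" row can be illustrated with the stdlib `zlib.crc32` (a sketch of the idea, not VelesDB's on-disk format): each snapshot stores a checksum alongside its payload, and loading verifies the checksum before trusting the data.

```python
import json
import zlib

def snapshot(state, version):
    # Serialise state deterministically and attach a CRC32 checksum
    # so on-disk corruption is detectable at load time.
    payload = json.dumps(state, sort_keys=True).encode()
    return {"version": version, "crc32": zlib.crc32(payload), "payload": payload}

def load_snapshot(snap):
    # Verify integrity before deserialising; a mismatch means the
    # payload was corrupted or tampered with since the snapshot.
    if zlib.crc32(snap["payload"]) != snap["crc32"]:
        raise ValueError(f"snapshot v{snap['version']} failed CRC32 check")
    return json.loads(snap["payload"])

snap = snapshot({"facts": ["Paris is the capital of France"]}, version=1)
restored = load_snapshot(snap)
```

CRC32 catches accidental corruption cheaply; it is not a cryptographic integrity guarantee, which is the usual trade-off for snapshot checksums.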

Full guide: embedding setup, retrieval patterns, TTL, snapshots | Source code


Quick Comparison

| | VelesDB | Chroma | Qdrant | pgvector |
|---|---|---|---|---|
| Architecture | Unified vector + graph + columnar | Vector only | Vector + payload | Vector extension for PostgreSQL |
| Metadata filtering | ColumnStore (130x vs JSON) | JSON scan | JSON payload | SQL (PostgreSQL) |
| Deployment | Embedded / Server / WASM / Mobile | Server (Python) | Server (Rust) | Requires PostgreSQL |
| Binary size | 6 MB | ~500 MB (with deps) | ~50 MB | N/A (PG extension) |
| Search latency | 450us p50 (10K/384D, WAL ON, recall >= 96%) | ~1-5ms | ~1-5ms (in-memory) | ~5-20ms |
| Graph support | Native (MATCH clause) | No | No | No |
| Query language | VelesQL (SQL + NEAR + MATCH) | Python API | JSON API / gRPC | SQL + operators |
| Browser (WASM) | Yes | No | No | No |
| Mobile (iOS/Android) | Yes | No | No | No |
| Offline / Local-first | Yes | Partial | No | No |

Competitor latencies are typical ranges from public benchmarks and vendor documentation. Direct comparison is approximate — architectures differ (embedded vs client-server, durable vs in-memory, recall levels). Run your own benchmarks for accurate comparison.

VelesDB's sweet spot: When you need vector + graph + structured filtering in a single engine, local-first deployment, or a lightweight binary that runs anywhere.

Not the best fit (yet): If you need a managed cloud service with a multi-node distributed cluster.


Getting Started in 60 Seconds

Install

Cargo (Rust):

```bash
cargo install velesdb-server velesdb-cli
```

Python:

```bash
pip install velesdb
```

Docker:

```bash
docker run -d -p 8080:8080 -v velesdb_data:/data --name velesdb velesdb/velesdb:latest
```