
Lux

A Redis-compatible key-value store. Up to 10x faster. Native vector support.

Install / Use

/learn @lux-db/Lux
About this skill

Quality Score: 0/100 · Supported Platforms: Universal

README

<p align="center">
  <img src="logo.png" alt="Lux" width="120" height="120" />
</p>

<h1 align="center">Lux</h1>

<p align="center">
  <strong>A Redis-compatible key-value store. Up to 10x faster.</strong><br/>
  Multi-threaded. Built-in vector search, time series, realtime key subscriptions, and GEO. BullMQ-compatible. Written in Rust. MIT licensed forever.
</p>

<p align="center">
  <a href="https://github.com/lux-db/lux/actions/workflows/test.yml"><img src="https://github.com/lux-db/lux/actions/workflows/test.yml/badge.svg" alt="Tests" /></a>
  <a href="https://github.com/lux-db/lux/releases/latest"><img src="https://img.shields.io/github/v/release/lux-db/lux" alt="Release" /></a>
  <a href="https://github.com/lux-db/lux/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="MIT License" /></a>
</p>

<p align="center">
  <a href="https://luxdb.dev">Lux Cloud</a> &middot;
  <a href="https://luxdb.dev/vs/redis">Benchmarks</a> &middot;
  <a href="https://luxdb.dev/architecture">Architecture</a>
</p>

Why Lux?

Redis is single-threaded by design. Antirez made that choice in 2009 because it eliminates all locking, race conditions, and concurrency bugs. For most workloads, the bottleneck is network I/O, not CPU, so a single-threaded event loop is fast enough. It was a brilliant simplification.

But it has a ceiling. Once you saturate one core, that's it. Redis can't use the other 15 cores on your machine. The official answer is to run multiple Redis instances and shard at the client level (Redis Cluster), which adds significant operational complexity.

Lux takes the opposite approach: a sharded concurrent architecture that safely uses all your cores in a single process. Each key maps to one of N shards, each protected by a parking_lot RwLock. Reads never block reads. Writes only block the single shard they touch. Tokio's async runtime handles thousands of connections across all cores. The result: single-digit microsecond latency at low concurrency (matching Redis), and linear throughput scaling as you add cores and pipeline depth (where Redis flatlines).
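The key-to-shard scheme above can be sketched in a few lines of Python. This is a simplification: Lux is written in Rust with parking_lot RwLocks, its actual hash function and shard count are internal details, and Python's stdlib has no reader-writer lock, so a plain `Lock` stands in.

```python
import threading
import zlib

NUM_SHARDS = 16  # Lux auto-tunes this to core count; 16 is arbitrary here

# One lock per shard; a plain Lock stands in for parking_lot::RwLock.
shards = [{"lock": threading.Lock(), "data": {}} for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> int:
    # Any stable hash works; Lux's real hash function is an internal detail.
    return zlib.crc32(key.encode()) % NUM_SHARDS

def set_key(key: str, value: str) -> None:
    shard = shards[shard_for(key)]
    with shard["lock"]:  # a write blocks only this one shard
        shard["data"][key] = value

def get_key(key: str):
    shard = shards[shard_for(key)]
    with shard["lock"]:
        return shard["data"].get(key)
```

Commands on keys in different shards never contend for the same lock; that independence is what lets throughput scale with cores.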

"Doesn't multi-threading introduce the bugs Redis avoided?" No. Lux's concurrency is at the shard level, not the command level. Each command acquires a single shard lock, does its work, and releases. There are no cross-shard locks, no lock ordering issues, no deadlocks. The only shared mutable state is inside shards, and the RwLock makes that safe. MULTI/EXEC transactions use WATCH-based optimistic concurrency (shard versioning) rather than global locks, matching what Redis clients actually rely on.
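The WATCH-based optimistic concurrency described above amounts to snapshotting a shard version at WATCH time and aborting EXEC if any intervening write bumped it. A hypothetical sketch of that scheme (not Lux's actual code):

```python
# Sketch of WATCH/MULTI/EXEC via per-shard version counters.
class Shard:
    def __init__(self):
        self.version = 0
        self.data = {}

    def write(self, key, value):
        self.data[key] = value
        self.version += 1  # every write bumps the shard version

class Transaction:
    def __init__(self, shard):
        self.shard = shard
        self.watched_version = None
        self.queue = []

    def watch(self):
        self.watched_version = self.shard.version  # snapshot at WATCH time

    def multi(self, key, value):
        self.queue.append((key, value))  # commands are queued, not run

    def exec(self):
        # Abort (return None, like Redis) if the shard changed since WATCH.
        if self.watched_version is not None and self.shard.version != self.watched_version:
            return None
        for key, value in self.queue:
            self.shard.write(key, value)
        return len(self.queue)
```

No global lock is ever taken: a transaction either commits against an unchanged shard or fails cleanly and is retried by the client, which is the contract Redis clients already implement.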

Point your existing Redis client at Lux. Most workloads just work.

Works with every Redis client -- ioredis, redis-py, go-redis, Jedis, redis-rb, BullMQ. Zero code changes.

Benchmarks

redis-benchmark, 50 clients, 1M requests, pipeline=64. Sequential runs (one server at a time) on a 32-core Intel i9-14900K, 128GB RAM, Ubuntu 24.04.

| Command | Lux | Redis 8.4.2 | Lux/Redis |
|---------|-----|-------------|-----------|
| SET | 10.2M | 3.4M | 3.0x |
| GET | 12.0M | 4.7M | 2.6x |
| INCR | 6.3M | 4.0M | 1.6x |
| LPUSH | 6.5M | 3.3M | 2.0x |
| RPUSH | 6.4M | 3.7M | 1.7x |
| LPOP | 11.6M | 3.0M | 3.9x |
| RPOP | 11.1M | 3.3M | 3.4x |
| SADD | 7.2M | 4.1M | 1.8x |
| HSET | 6.8M | 3.3M | 2.0x |
| SPOP | 12.2M | 4.5M | 2.7x |
| ZADD | 7.0M | 3.1M | 2.3x |
| ZPOPMIN | 11.5M | 5.3M | 2.2x |
| GEOPOS | 5.26M | 2.60M | 2.0x |
| GEODIST | 6.67M | 2.53M | 2.6x |
| GEOSEARCH (500km) | 4.44M | 559K | 8.0x |
| GEOSEARCH (5000km) | 200K | 20K | 10.0x |

Lux beats Redis on every supported command. At pipeline=1, both are network-bound and roughly equal. The gap grows with pipeline depth because Lux batches same-shard commands under a single lock while Redis processes sequentially on one core. GEO commands see the biggest gains because GEOSEARCH parallelizes across shards while Redis scans single-threaded.

Full results including SET scaling by pipeline depth (up to 5.8x at pipeline=512) in BENCHMARKS.md. Reproduce with ./bench.sh.

Lux Cloud

Don't want to manage infrastructure? Lux Cloud is managed Lux hosting. Deploy in seconds, connect with any Redis client. Includes BullMQ queue dashboard, agent memory MCP server, persistence, monitoring, and web console.

Features

  • 200+ commands -- strings, lists, hashes, sets, sorted sets, streams, vectors, geo, time series, tables, HyperLogLog, bitops, pub/sub, transactions
  • Relational tables -- TCREATE, TINSERT, TQUERY, TALTER with typed fields (str, int, float, bool, timestamp), unique constraints, foreign keys, joins, WHERE/ORDER BY/LIMIT. Structured data without standing up Postgres
  • Realtime key subscriptions -- KSUB/KUNSUB: subscribe to key patterns, receive events when matching keys are mutated. Zero overhead when unused. No global config flags, no separate services. Unlike Redis keyspace notifications which tax every write globally, KSUB is surgical and async
  • Native time series -- TSADD, TSGET, TSRANGE, TSMRANGE with aggregation (avg, sum, min, max, count, std), retention policies, and label-based filtering. No modules, no sidecars. TSGET 4x faster than Redis GET
  • Native vector search -- VSET, VGET, VSEARCH with cosine similarity and metadata filtering. No extensions, no sidecars
  • GEO commands -- GEOADD, GEOSEARCH, GEODIST, GEOPOS, GEOHASH, GEORADIUS with up to 10x faster spatial queries
  • LRU eviction -- maxmemory with allkeys-lru, volatile-lru, allkeys-random, volatile-random policies
  • BullMQ compatible -- blocking commands, streams, Lua scripting with cmsgpack/cjson
  • Lua scripting -- EVAL, EVALSHA, SCRIPT with redis.call/pcall, cmsgpack, and cjson
  • Redis Streams -- XADD, XREAD, XREADGROUP, XACK, consumer groups, blocking reads
  • Blocking commands -- BLPOP, BRPOP, BLMOVE, BZPOPMIN, BZPOPMAX
  • HTTP REST API -- built-in JSON API on a separate port, 174K ops/sec, Bearer auth, CORS
  • RESP2 protocol -- compatible with every Redis client
  • Multi-threaded -- auto-tuned shards, parking_lot RwLocks, tokio async runtime
  • Zero-copy parser -- RESP arguments are byte slices into the read buffer
  • Pipeline batching -- consecutive same-shard commands batched under a single lock
  • Persistence -- automatic snapshots, write-ahead log (WAL) with CRC32 checksums, tiered hot/cold storage with automatic eviction to disk
  • Auth -- password authentication via LUX_PASSWORD
  • Pub/Sub -- SUBSCRIBE, PSUBSCRIBE, PUBLISH, plus KSUB/KUNSUB for realtime key change events
  • TTL support -- EX, PX, EXPIRE, PEXPIRE, PERSIST, TTL, PTTL
  • MIT licensed -- no license rug-pulls, unlike Redis (RSALv2/SSPL)
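The pipeline-batching item above (consecutive same-shard commands executed under a single lock) can be illustrated with a small grouping pass. `shard_of` here is an illustrative stand-in for Lux's real key-to-shard hash:

```python
from itertools import groupby

def shard_of(key: str, num_shards: int = 16) -> int:
    # Illustrative stand-in hash; the real mapping is a Lux internal.
    return sum(key.encode()) % num_shards

def batch_pipeline(commands):
    """Group *consecutive* same-shard commands so each run touching one
    shard can execute under a single lock acquisition.
    commands: list of tuples like ("SET", "user:1", "alice")."""
    batches = []
    for shard, group in groupby(commands, key=lambda cmd: shard_of(cmd[1])):
        batches.append((shard, list(group)))
    return batches
```

With a deep pipeline, long runs of commands collapse into a handful of lock acquisitions, which is why the benchmark gap over single-threaded Redis widens as pipeline depth grows.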

Quick Start

cargo build --release
./target/release/lux

Lux starts on 0.0.0.0:6379 by default. Connect with any Redis client using lux:// or redis://:

Protocol note: lux:// is the primary protocol for the Lux SDK and luxctl CLI. When using third-party Redis clients (ioredis, redis-py, go-redis) directly, use redis:// since they don't recognize lux://. Both connect to the same server.

redis-cli
> SET hello world
OK
> GET hello
"world"

Docker

docker run -d -p 6379:6379 ghcr.io/lux-db/lux:latest

Docker Compose

docker compose up -d        # start
docker compose up -d --build  # rebuild & start
docker compose down         # stop

Vector Search

Lux has native vector storage and cosine similarity search. No extensions, no sidecars, no separate services.

# Store vectors with optional metadata
redis-cli VSET doc:1 3 0.1 0.2 0.3 META '{"title":"hello world"}'
redis-cli VSET doc:2 3 0.9 0.1 0.0 META '{"title":"another doc"}'

# Find the 5 nearest neighbors
redis-cli VSEARCH 3 0.1 0.2 0.3 K 5

# Search with metadata filtering
redis-cli VSEARCH 3 0.1 0.2 0.3 K 5 FILTER title "hello world" META

# Count vectors
redis-cli VCARD

Sub-millisecond search at 10,000 vectors with HNSW indexing. Built for AI agent memory, RAG, and semantic search.
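The scoring behind VSEARCH is plain cosine similarity; a brute-force top-K sketch makes it concrete (Lux's HNSW index returns equivalent neighbors without scanning every vector):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def vsearch(store, query, k=5):
    # store: {key: vector}. Brute-force K nearest neighbors by cosine score.
    scored = sorted(store.items(), key=lambda kv: cosine(kv[1], query), reverse=True)
    return [key for key, _ in scored[:k]]
```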

Time Series

Built-in time series with retention policies, label-based filtering, and aggregation. No modules required.

# Add samples with labels
redis-cli TSADD cpu:host1 '*' 72.5 RETENTION 86400000 LABELS host server1 metric cpu
redis-cli TSADD cpu:host1 '*' 75.0
redis-cli TSADD cpu:host1 '*' 68.2

# Get latest sample
redis-cli TSGET cpu:host1

# Query range with aggregation (1-hour average)
redis-cli TSRANGE cpu:host1 - + AGGREGATION avg 3600000

# Query across all series matching labels
redis-cli TSMRANGE - + FILTER host=server1

# Batch insert across multiple series
redis-cli TSMADD cpu:host1 '*' 72.5 mem:host1 '*' 45.0 disk:host1 '*' 82.1

TSGET runs at 18M ops/sec at high pipeline. Supports avg, sum, min, max, count, first, last, range, std.p, std.s, var.p, var.s aggregation functions.
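The AGGREGATION step above amounts to bucketing samples by timestamp and reducing each bucket; a minimal sketch of the avg case (`ts_range_avg` is a hypothetical helper, not a Lux API):

```python
from collections import defaultdict

def ts_range_avg(samples, bucket_ms):
    """samples: list of (timestamp_ms, value) pairs.
    Returns {bucket_start_ms: average}, mirroring the shape of
    TSRANGE key - + AGGREGATION avg <bucket_ms>."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[(ts // bucket_ms) * bucket_ms].append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}
```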

Realtime Key Subscriptions (KSUB)

Subscribe to key mutation events by pattern. When any client writes to a matching key, subscribers receive a realtime notification with the key name and operation. No polling, no keyspace notification config, no separate service.

# Client A: subscribe to all user key mutations
redis-cli
> KSUB user:*

# Client B: write some data
redis-cli
> SET user:1 alice
> HSET user:2 name bob
> DEL user:1

# Client A receives:
# ["kmessage", "user:*", "user:1", "set"]
# ["kmessage", "user:*", "user:2", "hset"]
# ["kmessage", "user:*", "user:1", "del"]

Events are ["kmessage", pattern, key, operation]. Operations are lowercase command names: set, del, lpush, hset, zadd, tsadd, etc.
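Pattern matching for these events is a Redis-style glob; Python's `fnmatch` approximates it closely enough to sketch the dispatch (Lux's exact glob dialect may differ on edge cases):

```python
from fnmatch import fnmatchcase

def ksub_event(pattern: str, key: str, operation: str):
    # Return the ["kmessage", pattern, key, op] array a subscriber to
    # `pattern` would receive for this mutation, or None on no match.
    if fnmatchcase(key, pattern):
        return ["kmessage", pattern, key, operation]
    return None
```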

How it differs from Redis keyspace notifications:

  • Redis requires a global notify-keyspace-events config flag that adds overhead to every write, even if nobody is listening
  • KSUB has zero overhead when no subscribers exist (a single atomic check)

View on GitHub: 234 stars · 16 forks · Category: Data · Language: Rust · Updated: 1h ago

Security Score: 100/100 (audited Apr 6, 2026; no findings)