# FeoxDB

FeOxDB (Iron-Oxide) is an ultra-fast, embedded, persistent key-value store in pure Rust, with sub-microsecond latency.

Documentation | Benchmarks | Issues

Looking for a Redis-compatible server? Check out feox-server - a Redis protocol-compatible server built on FeOxDB.
## Features
- Sub-Microsecond Latency: <200ns GET, <700ns INSERT operations
- Lock-Free Concurrency: Built on SCC HashMap and Crossbeam SkipList
- io_uring Support (Linux): Kernel-bypass I/O for maximum throughput with minimal syscalls
- Flexible Storage: Memory-only or persistent modes with async I/O
- JSON Patch Support: RFC 6902 compliant partial updates for JSON values
- Atomic Operations: Compare-and-Swap (CAS) and atomic counters
- Write Buffering: Sharded buffers with batched writes to reduce contention
- CLOCK Cache: Second-chance eviction algorithm
- Statistics: Real-time performance monitoring
- Free Space Management: Dual RB-tree structure for O(log n) allocation
- Zero Fragmentation: Automatic coalescing prevents disk fragmentation
## ACID Properties and Durability
FeOxDB provides ACI properties with relaxed durability:
- Atomicity: ✅ Individual operations are atomic via Arc-wrapped records
- Consistency: ✅ Timestamp-based conflict resolution ensures consistency
- Isolation: ✅ Lock-free reads and sharded writes provide operation isolation
- Durability: ⚠️ Write-behind logging with bounded data loss window
### Durability Trade-offs
FeOxDB trades full durability for extreme performance:
- Write-behind buffering: Flushes every 100ms or when buffers fill (1024 entries or 16MB per shard)
- Worst-case data loss:
  - Time window: `100ms + 16MB / 4KB_random_write_QD1_throughput`
  - Data at risk: `16MB × num_shards` (num_shards = num_cpus / 2), e.g. 64MB for 4 shards, 128MB for 8 shards
  - Workers write in parallel, so the time window does not multiply with shard count
  - Example (50MB/s 4KB random QD1): 420ms window, up to 64MB at risk (4 shards)
  - Example (200MB/s 4KB random QD1): 180ms window, up to 64MB at risk (4 shards)
- Memory-only mode: No durability, maximum performance
- Explicit flush: Call `store.flush()` to synchronously write all buffered data (blocks until fsync completes)
## FAQ
Q: Is the durability tradeoff worth it for the extreme performance?

For KV stores, more use cases can accept this slightly relaxed durability model than not. Of course this isn't the case for a primary database, but KV stores often hold derived data, caches, or state that can be rebuilt. That said, for cases needing stronger durability, you can call `store.flush()` after critical operations, which gives you fsync-level guarantees. The philosophy is: make the fast path really fast for those who need it, but provide escape hatches for stronger guarantees when needed.
Q: What kind of applications need this performance? Why do these latency numbers matter?

The real value isn't just raw speed - it's efficiency. When operations complete in 200ns instead of blocking for microseconds or milliseconds on fsync, you avoid thread-pool exhaustion and connection queueing. Each synchronous operation blocks its thread until the disk confirms the write - tying up memory and connection slots and causing tail-latency spikes.
With FeOxDB's write-behind approach:
- Operations return immediately, threads stay available
- Background workers batch writes, amortizing sync costs across many operations
- Same hardware can handle 100x more concurrent requests
- Lower cloud bills from needing fewer instances
For desktop apps, this means your KV store doesn't tie up threads that the UI needs. For servers, it means handling more users without scaling up. The durability tradeoff makes sense when you realize most KV workloads are derived data that can be rebuilt. Why block threads and exhaust IOPS for fsync-level durability on data that doesn't need it?
## Quick Start

### Installation

```toml
[dependencies]
feoxdb = "0.1.0"
```
### Basic Usage

```rust
use feoxdb::FeoxStore;

fn main() -> feoxdb::Result<()> {
    // Create an in-memory store
    let store = FeoxStore::new(None)?;

    // Insert a key-value pair
    store.insert(b"user:123", b"{\"name\":\"Mehran\"}")?;

    // Get a value
    let value = store.get(b"user:123")?;
    println!("Value: {}", String::from_utf8_lossy(&value));

    // Check existence
    if store.contains_key(b"user:123") {
        println!("Key exists!");
    }

    // Delete a key
    store.delete(b"user:123")?;
    Ok(())
}
```
### Persistent Storage

```rust
use feoxdb::FeoxStore;

fn main() -> feoxdb::Result<()> {
    // Create a persistent store
    let store = FeoxStore::new(Some("/path/to/data.feox".to_string()))?;

    // Operations are automatically persisted
    store.insert(b"config:app", b"production")?;

    // Flush to disk
    store.flush();

    // Data survives restarts
    drop(store);
    let store = FeoxStore::new(Some("/path/to/data.feox".to_string()))?;
    let value = store.get(b"config:app")?;
    assert_eq!(value, b"production");
    Ok(())
}
```
### Advanced Configuration

```rust
use feoxdb::FeoxStore;

fn main() -> feoxdb::Result<()> {
    let store = FeoxStore::builder()
        .device_path("/data/myapp.feox")
        .file_size(10 * 1024 * 1024 * 1024) // 10GB initial file size
        .max_memory(2_000_000_000)          // 2GB limit
        .enable_caching(true)               // Enable CLOCK cache
        .hash_bits(20)                      // 1M hash buckets
        .enable_ttl(true)                   // Enable TTL support
        .build()?;
    Ok(())
}
```
### Time-To-Live (TTL) Support

```rust
use feoxdb::FeoxStore;

fn main() -> feoxdb::Result<()> {
    // Enable TTL feature via builder
    let store = FeoxStore::builder()
        .enable_ttl(true)
        .build()?;

    // Set key to expire after 60 seconds
    store.insert_with_ttl(b"session:123", b"session_data", 60)?;

    // Check remaining TTL
    if let Some(ttl) = store.get_ttl(b"session:123")? {
        println!("Session expires in {} seconds", ttl);
    }

    // Extend TTL to 120 seconds
    store.update_ttl(b"session:123", 120)?;

    // Remove TTL (make permanent)
    store.persist(b"session:123")?;
    Ok(())
}
```
### Concurrent Access

```rust
use feoxdb::FeoxStore;
use std::sync::Arc;
use std::thread;

fn main() -> feoxdb::Result<()> {
    let store = Arc::new(FeoxStore::new(None)?);
    let mut handles = vec![];

    // Spawn 10 threads, each inserting data
    for i in 0..10 {
        let store_clone = Arc::clone(&store);
        handles.push(thread::spawn(move || {
            for j in 0..1000 {
                let key = format!("thread_{}:key_{}", i, j);
                store_clone.insert(key.as_bytes(), b"value").unwrap();
            }
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Total keys: {}", store.len()); // 10,000
    Ok(())
}
```
### Range Queries

```rust
use feoxdb::FeoxStore;

fn main() -> feoxdb::Result<()> {
    let store = FeoxStore::new(None)?;

    // Insert sorted keys
    store.insert(b"user:001", b"Mehran")?;
    store.insert(b"user:002", b"Bob")?;
    store.insert(b"user:003", b"Charlie")?;
    store.insert(b"user:004", b"David")?;

    // Range query: get users 001-003 (inclusive on both ends)
    let results = store.range_query(b"user:001", b"user:003", 10)?;
    for (key, value) in results {
        println!("{}: {}",
            String::from_utf8_lossy(&key),
            String::from_utf8_lossy(&value));
    }
    // Outputs: user:001, user:002, user:003
    Ok(())
}
```
### Compare-and-Swap (CAS) Operations

FeOxDB provides atomic compare-and-swap operations for implementing optimistic concurrency control:

```rust
use feoxdb::FeoxStore;
use std::sync::{Arc, Barrier};
use std::thread;

fn main() -> feoxdb::Result<()> {
    let store = Arc::new(FeoxStore::new(None)?);

    // Multiple servers processing orders concurrently
    store.insert(b"product:iPhone16:stock", b"50")?; // Initial stock

    let barrier = Arc::new(Barrier::new(5));
    let mut handles = vec![];

    for _order_id in 0..5 {
        let store_clone = Arc::clone(&store);
        let barrier_clone = Arc::clone(&barrier);
        handles.push(thread::spawn(move || -> feoxdb::Result<bool> {
            barrier_clone.wait(); // Start all orders simultaneously
            let quantity_requested = 10;

            // Try to reserve inventory atomically
            let current = store_clone.get(b"product:iPhone16:stock")?;
            let stock: u32 = String::from_utf8_lossy(&current)
                .parse()
                .unwrap_or(0);

            if stock >= quantity_requested {
                let new_stock = (stock - quantity_requested).to_string();
                // Attempt atomic update
                if store_clone.compare_and_swap(
                    b"product:iPhone16:stock",
                    &current,
                    new_stock.as_bytes(),
                )? {
                    return Ok(true); // Successfully reserved
                }
            }
            Ok(false) // Failed - insufficient stock or lost race
        }));
    }

    let successful_orders: Vec<bool> = handles
        .into_iter()
        .map(|h| h.join().unwrap().unwrap())
        .collect();

    // With single-attempt CAS, some orders may fail due to races:
    // typically 3-4 orders succeed out of 5
    let successful_count = successful_orders.iter().filter(|&&x| x).count();
    println!("Successful orders: {}/5", successful_count);

    let final_stock = store.get(b"product:iPhone16:stock")?;
    println!("Final stock: {}", String::from_utf8_lossy(&final_stock));
    Ok(())
}
```
