🦁 BuffDB

License: Apache-2.0

BuffDB is a lightweight, high-performance embedded database for model storage with networking capabilities, designed for edge computing and offline-first applications. It is built in Rust, compiles to a binary under 2 MB with the SQLite backend, and has DuckDB support planned.

⚠️ Experimental: This project is rapidly evolving. If you are trying it out and hit a roadblock, please open an issue.

Key Features

  • High Performance - Optimized for speed with SQLite backend
  • gRPC Network API - Access your database over the network
  • Key-Value Store - Fast key-value operations with streaming support
  • BLOB Storage - Binary large object storage with metadata
  • Secondary Indexes - Hash and B-tree indexes for value-based queries
  • Raw SQL Queries - Execute SQL directly on the underlying database
  • Tiny Size - Under 2MB binary with SQLite backend
  • Pure Rust - Safe, concurrent, and memory-efficient

🚀 Quick Start

Prerequisites

BuffDB requires protoc (Protocol Buffers compiler):

# Ubuntu/Debian
sudo apt-get install protobuf-compiler

# macOS
brew install protobuf

# Windows
choco install protoc

macOS Setup

macOS users need additional dependencies due to linking requirements:

# Install required dependencies
brew install protobuf sqlite libiconv

# Clone the repository
git clone https://github.com/buffdb/buffdb
cd buffdb

# The project includes a .cargo/config.toml that sets up the correct paths
# If you still encounter linking errors, you can manually set:
export LIBRARY_PATH="/opt/homebrew/lib:$LIBRARY_PATH"
export RUSTFLAGS="-L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib"

Building and Running

Option 1: Install from crates.io

cargo install buffdb
buffdb run

Option 2: Build from source

# Build with all features (includes all backends)
cargo build --all-features --release

# Run the server
./target/release/buffdb run

# Or run directly with cargo
cargo run --all-features -- run

Option 3: Quick development build

# For development with faster compilation
cargo build --features sqlite
cargo run --features sqlite -- run

Language Examples

<details> <summary><b>🦀 Rust</b></summary>
use buffdb::client::{blob::BlobClient, kv::KvClient};
use buffdb::proto::{blob, kv};
use buffdb::inference::{ModelInfo, ModelKeys};
use tonic::transport::Channel;
use futures::StreamExt;
use serde_json;
use chrono;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to BuffDB
    let channel = Channel::from_static("http://[::1]:9313").connect().await?;
    let mut kv_client = KvClient::new(channel.clone());
    let mut blob_client = BlobClient::new(channel);
    
    // 1. Store ML model
    let model_info = ModelInfo {
        name: "llama2".to_string(),
        version: "7b-v1.0".to_string(),
        framework: "pytorch".to_string(),
        description: "LLaMA 2 7B base model".to_string(),
        input_shape: vec![1, 512], // batch_size, sequence_length
        output_shape: vec![1, 512, 32000], // batch_size, sequence_length, vocab_size
        blob_ids: vec![],
        created_at: chrono::Utc::now().to_rfc3339(),
        parameters: Default::default(),
    };
    
    // Store model weights (simulate with dummy data)
    let model_weights = vec![0u8; 1024 * 1024]; // 1MB dummy weights
    let store_request = blob::StoreRequest {
        bytes: model_weights,
        metadata: Some(serde_json::json!({
            "model": model_info.name,
            "version": model_info.version,
            "type": "weights"
        }).to_string()),
        transaction_id: None,
    };
    
    let mut blob_response = blob_client
        .store(tokio_stream::once(store_request))
        .await?
        .into_inner();
    
    let blob_id = blob_response.next().await.unwrap()?.id;
    
    // Store model metadata
    let mut model_info_with_blob = model_info.clone();
    model_info_with_blob.blob_ids = vec![blob_id];
    
    let metadata_key = ModelKeys::metadata_key(&model_info.name, &model_info.version);
    let set_request = kv::SetRequest {
        key: metadata_key,
        value: serde_json::to_string(&model_info_with_blob)?,
        transaction_id: None,
    };
    
    kv_client.set(tokio_stream::once(set_request)).await?;
    
    // 2. Load model for inference
    let get_request = kv::GetRequest {
        key: ModelKeys::metadata_key("llama2", "7b-v1.0"),
        transaction_id: None,
    };
    
    let mut response = kv_client
        .get(tokio_stream::once(get_request))
        .await?
        .into_inner();
    
    if let Some(result) = response.next().await {
        let model_info: ModelInfo = serde_json::from_str(&result?.value)?;
        println!("Loaded model: {} v{}", model_info.name, model_info.version);
        println!("Framework: {}", model_info.framework);
        println!("Parameters shape: {:?}", model_info.output_shape);
        
        // Load model weights
        for blob_id in &model_info.blob_ids {
            let get_request = blob::GetRequest {
                id: *blob_id,
                transaction_id: None,
            };
            
            let mut blob_response = blob_client
                .get(tokio_stream::once(get_request))
                .await?
                .into_inner();
            
            if let Some(result) = blob_response.next().await {
                let weights = result?.bytes;
                println!("Loaded model weights: {} bytes", weights.len());
                // Here you would load weights into your ML framework
            }
        }
    }
    
    Ok(())
}

Add to Cargo.toml:

[dependencies]
buffdb = "0.5"
tokio = { version = "1", features = ["full"] }
tonic = "0.12"
futures = "0.3"
serde_json = "1.0"
chrono = "0.4"
tokio-stream = "0.1"
</details> <details> <summary><b>🟦 TypeScript / Node.js</b></summary>
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// Load proto definitions
const kvProto = protoLoader.loadSync('kv.proto');
const blobProto = protoLoader.loadSync('blob.proto');
const kvDef = (grpc.loadPackageDefinition(kvProto) as any).buffdb.kv;
const blobDef = (grpc.loadPackageDefinition(blobProto) as any).buffdb.blob;

// Connect to BuffDB
const kvClient = new kvDef.Kv('[::1]:9313', grpc.credentials.createInsecure());
const blobClient = new blobDef.Blob('[::1]:9313', grpc.credentials.createInsecure());

// Model metadata interface
interface ModelInfo {
  name: string;
  version: string;
  framework: string;
  description: string;
  input_shape: number[];
  output_shape: number[];
  blob_ids: number[];
  created_at: string;
  parameters: Record<string, string>;
}

// 1. Store ML model
async function storeModel() {
  const modelInfo: ModelInfo = {
    name: 'bert-base',
    version: 'uncased-v1',
    framework: 'tensorflow',
    description: 'BERT base uncased model',
    input_shape: [1, 512], // batch_size, sequence_length
    output_shape: [1, 512, 768], // batch_size, sequence_length, hidden_size
    blob_ids: [],
    created_at: new Date().toISOString(),
    parameters: { 'attention_heads': '12', 'hidden_layers': '12' }
  };

  // Store model weights (simulate with dummy data)
  const modelWeights = Buffer.alloc(1024 * 1024); // 1MB dummy weights
  
  // Store weights as blob
  const blobStream = blobClient.Store();
  const blobId = await new Promise<number>((resolve, reject) => {
    blobStream.on('data', (response) => resolve(response.id));
    blobStream.on('error', reject);
    
    blobStream.write({
      bytes: modelWeights,
      metadata: JSON.stringify({
        model: modelInfo.name,
        version: modelInfo.version,
        type: 'weights'
      })
    });
    blobStream.end();
  });

  // Update model info with blob ID
  modelInfo.blob_ids = [blobId];

  // Store model metadata
  const kvStream = kvClient.Set();
  await new Promise<void>((resolve, reject) => {
    kvStream.on('end', resolve);
    kvStream.on('error', reject);
    
    kvStream.write({
      key: `model:${modelInfo.name}:${modelInfo.version}:metadata`,
      value: JSON.stringify(modelInfo)
    });
    kvStream.end();
  });

  console.log(`Stored model ${modelInfo.name} v${modelInfo.version}`);
  return modelInfo;
}

// 2. Load model for inference
async function loadModel(name: string, version: string): Promise<void> {
  // Get model metadata
  const kvStream = kvClient.Get();
  const modelInfo = await new Promise<ModelInfo>((resolve, reject) => {
    kvStream.on('data', (response) => {
      resolve(JSON.parse(response.value) as ModelInfo);
    });
    kvStream.on('error', reject);
    
    kvStream.write({ key: `model:${name}:${version}:metadata` });
    kvStream.end();
  });

  console.log(`Loaded model: ${modelInfo.name} v${modelInfo.version}`);
  console.log(`Framework: ${modelInfo.framework}`);
  console.log(`Output shape: ${modelInfo.output_shape}`);

  // Load model weights
  for (const blobId of modelInfo.blob_ids) {
    const blobStream = blobClient.Get();
    const weights = await new Promise<Buffer>((resolve, reject) => {
      const chunks: Buffer[] = [];
      
      blobStream.on('data', (response) => {
        chunks.push(response.bytes);
      });
      blobStream.on('end', () => {
        resolve(Buffer.concat(chunks));
      });
      blobStream.on('error', reject);
      
      blobStream.write({ id: blobId });
      blobStream.end();
    });

    console.log(`Loaded model weights: ${weights.length} bytes`);
    // Here you would load weights into your ML framework (e.g., TensorFlow.js)
  }
}
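storeModel() and loadModel() above each hard-code the `model:${name}:${version}:metadata` key template. A small helper (a sketch for illustration, not part of BuffDB's API) keeps that literal in one place:

```typescript
// Hypothetical helper: single source of truth for the KV key layout
// used by storeModel() and loadModel() above.
function metadataKey(name: string, version: string): string {
  return `model:${name}:${version}:metadata`;
}

console.log(metadataKey('bert-base', 'uncased-v1'));
// → model:bert-base:uncased-v1:metadata
```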

</details>