S4Core
3x faster than MinIO and RustFS. S4Core is an open-source, Rust-based S3-compatible server. Say goodbye to inode exhaustion and hello to atomic operations and smart deduplication.
S4 - Modern S3-Compatible Object Storage
S4 is a high-performance, S3-compatible object storage server written in Rust. It solves the inode exhaustion problem common with traditional file-based storage systems and provides advanced features like atomic directory operations and content-addressable deduplication.

Demo Console: s4console · Login: root/password12345 · Resets every 10 min
Demo API: s4core · Access Key ID / Secret Access Key: my-secret-key_id/my-secret-access-key · Resets every 10 min
Features
- S3 API Compatible: Full compatibility with AWS S3 API (AWS CLI, boto3, etc.)
- Inode Problem Solved: Append-only log storage eliminates inode exhaustion
- Content Deduplication: Automatic deduplication saves 30-50% storage space
- Object Versioning: S3-compatible versioning with delete markers
- Lifecycle Policies: Automatic object expiration and cleanup of old versions
- Atomic Operations: Rename directories with millions of files in milliseconds
- Strict Consistency: Data is guaranteed to be written before returning success
- IAM & Admin API: Role-based access control (Reader, Writer, SuperUser) with JWT authentication
- S3 Select SQL: Query CSV/JSON/Parquet objects with full SQL (powered by Apache DataFusion)
- Multi-Object SQL: Extended S3 Select with glob patterns for querying across multiple objects
- High Performance: Optimized for single-node performance
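To illustrate the content-deduplication feature above, here is a toy sketch of a content-addressable store: objects with identical bytes share a single stored payload. This is purely illustrative; S4's actual dedup layer is internal to the server and the class and method names here are invented for the example.

```python
import hashlib

class DedupStore:
    """Toy content-addressable store: identical payloads are stored once.
    Illustrative only; not S4's real implementation."""

    def __init__(self):
        self.blobs = {}    # sha256 hex digest -> bytes (stored once)
        self.objects = {}  # object key -> sha256 hex digest

    def put(self, key: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)  # store payload only if new
        self.objects[key] = digest
        return digest

    def get(self, key: str) -> bytes:
        return self.blobs[self.objects[key]]

store = DedupStore()
store.put("a.txt", b"same bytes")
store.put("b.txt", b"same bytes")  # second key, identical content
print(len(store.blobs))            # 1 -> the payload is stored once
```

Two keys pointing at identical content cost one payload plus two small index entries, which is where the claimed 30-50% savings on redundant data would come from.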
Architecture
S4 uses a Bitcask-style storage approach:
- All objects: Stored in append-only volume files (~1GB each)
- Metadata: Stored in fjall (LSM-tree, MVCC, LZ4 compression) with separate keyspaces
This approach ensures:
- Minimal inode usage (1 billion objects = ~1000 files)
- Maximum write performance (sequential writes)
- Atomic metadata operations (fjall cross-keyspace batches)
- Fast recovery (metadata in an ACID database plus a crash-safe journal)
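The Bitcask-style layout described above can be sketched in a few lines: values are appended sequentially to a single log, and an in-memory index maps each key to its offset and length. This is a minimal conceptual model, not S4's actual storage code; an in-memory buffer stands in for a 1GB volume file.

```python
import io

class VolumeLog:
    """Minimal Bitcask-style sketch: sequential appends to one log,
    with an in-memory index of key -> (offset, length).
    Illustrative only; not S4's real volume format."""

    def __init__(self):
        self.log = io.BytesIO()  # stands in for an append-only volume file
        self.index = {}          # key -> (offset, length)

    def put(self, key: str, value: bytes):
        offset = self.log.seek(0, io.SEEK_END)  # always append sequentially
        self.log.write(value)
        self.index[key] = (offset, len(value))

    def get(self, key: str) -> bytes:
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length)

log = VolumeLog()
log.put("k1", b"hello")
log.put("k1", b"hello v2")  # overwrite appends; old bytes become garbage
print(log.get("k1"))        # b'hello v2'
```

Overwrites leave dead bytes behind in the log, which is exactly what the compaction worker (see the S4_COMPACTION_* variables below) exists to reclaim.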
Quick Start
Prerequisites
- Rust 1.70 or later
- Linux (recommended) or macOS
Building from Source
# Clone the repository
git clone https://github.com/org/s4.git
cd s4
# Build the project
cargo build --release
# Run the server
./target/release/s4-server
Docker
S4 provides official Docker images for easy deployment.
Using docker run
# Run S4 server (basic)
docker run -d \
--name s4core \
-p 9000:9000 \
-v s4-data:/data \
-e S4_BIND=0.0.0.0:9000 \
s4core/s4core:latest
# Run with custom credentials
docker run -d \
--name s4core \
-p 9000:9000 \
-v s4-data:/data \
-e S4_BIND=0.0.0.0:9000 \
-e S4_ACCESS_KEY_ID=myaccesskey \
-e S4_SECRET_ACCESS_KEY=mysecretkey \
s4core/s4core:latest
# Run with IAM enabled
docker run -d \
--name s4core \
-p 9000:9000 \
-v s4-data:/data \
-e S4_BIND=0.0.0.0:9000 \
-e S4_ROOT_PASSWORD=password12345 \
s4core/s4core:latest
# Build the image locally
docker build -t s4-server .
Using Docker Compose
The project includes a docker-compose.yml that runs S4 server together with the web admin console.
# Run full stack (server + web console)
docker compose up --build
# Run in background
docker compose up -d --build
# Run only the server
docker compose up s4core --build
# With custom environment variables
S4_ROOT_PASSWORD=password12345 docker compose up --build
After startup:
- S4 API: http://localhost:9000
- Web Console: http://localhost:3000 (login with root credentials)
docker-compose.yml overview:
services:
  s4core:
    build: .
    ports:
      - "9000:9000"
    volumes:
      - s4-data:/data
    environment:
      - S4_BIND=0.0.0.0:9000
      - S4_ROOT_PASSWORD=${S4_ROOT_PASSWORD:-}
      - S4_ACCESS_KEY_ID=${S4_ACCESS_KEY_ID:-}
      - S4_SECRET_ACCESS_KEY=${S4_SECRET_ACCESS_KEY:-}
  s4-console:
    image: s4core/s4console:latest
    ports:
      - "3000:3000"
    environment:
      - S4_BACKEND_URL=http://s4core:9000
    depends_on:
      - s4core
For web console-only development, see frontend/README.md.
Environment Variables
S4 is configured through environment variables:
| Variable | Description | Default | Example |
|----------|-------------|---------|---------|
| S4_BIND | Address and port the server binds to | 127.0.0.1:9000 | 0.0.0.0:9000 |
| S4_ROOT_USERNAME | Root admin username | root | admin |
| S4_ROOT_PASSWORD | Root admin password (enables IAM) | None (IAM disabled) | password12345 |
| S4_JWT_SECRET | Secret key for signing JWT tokens | Auto-generated at startup (dev mode only) | 256-bit-crypto-random-string-like-this-1234567890ABCDEF |
| S4_ACCESS_KEY_ID | Access key for S3 authentication | Auto-generated dev key | myaccesskey |
| S4_SECRET_ACCESS_KEY | Secret key for S3 authentication | Auto-generated dev key | mysecretkey |
| S4_DATA_DIR | Base directory for storage | System temp dir | /var/lib/s4 |
| S4_MAX_UPLOAD_SIZE | Maximum upload size per request | 5GB | 10GB, 100MB, 1024KB |
| S4_TLS_CERT | Path to TLS certificate (PEM format) | None (HTTP mode) | /etc/ssl/certs/s4.pem |
| S4_TLS_KEY | Path to TLS private key (PEM format) | None (HTTP mode) | /etc/ssl/private/s4-key.pem |
| S4_LIFECYCLE_ENABLED | Enable lifecycle policy worker | true | true, false, 1, 0 |
| S4_LIFECYCLE_INTERVAL_HOURS | Lifecycle evaluation interval (hours) | 24 | 1, 6, 24, 168 |
| S4_LIFECYCLE_DRY_RUN | Dry-run mode (log without deleting) | false | true, false, 1, 0 |
| S4_COMPACTION_ENABLED | Enable volume compaction worker | true | true, false, 1, 0 |
| S4_COMPACTION_INTERVAL_HOURS | Compaction check interval (hours) | 6 | 1, 6, 12, 24 |
| S4_COMPACTION_THRESHOLD | Min fragmentation ratio to compact | 0.3 | 0.1-0.9 |
| S4_COMPACTION_DRY_RUN | Analyze without compacting | false | true, false, 1, 0 |
| S4_MULTIPART_UPLOAD_TTL_HOURS | TTL for abandoned multipart uploads (hours) | 24 | 1, 48 |
| S4_COMPACTION_MULTIPART_TTL_SECS | Dev/testing only. Overrides multipart TTL for compactor in seconds | None | 1, 60 |
| S4_METRICS_ENABLED | Enable Prometheus metrics | true | false |
| S4_SELECT_ENABLED | Enable/disable S3 Select SQL engine | true | false |
| S4_SELECT_MAX_MEMORY | Per-query memory limit for SQL engine | 256MB | 512MB, 1GB |
| S4_SELECT_TIMEOUT | SQL query timeout (seconds) | 60 | 120 |
Size format: Supports GB/G, MB/M, KB/K, or bytes (no suffix).
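The size format above can be sketched as a small parser: an integer followed by an optional GB/G, MB/M, or KB/K suffix, or a bare byte count. This is a sketch of the accepted syntax only; it is not S4's actual parser, and assumes binary (1024-based) units.

```python
import re

def parse_size(text: str) -> int:
    """Parse sizes as described above: GB/G, MB/M, KB/K suffixes,
    or a bare number of bytes. Sketch only; assumes 1024-based units."""
    match = re.fullmatch(r"(\d+)\s*(GB|G|MB|M|KB|K)?", text.strip(), re.IGNORECASE)
    if not match:
        raise ValueError(f"invalid size: {text!r}")
    number = int(match.group(1))
    unit = (match.group(2) or "").upper()
    factor = {"": 1, "K": 1024, "KB": 1024,
              "M": 1024**2, "MB": 1024**2,
              "G": 1024**3, "GB": 1024**3}[unit]
    return number * factor

print(parse_size("10GB"))    # 10737418240
print(parse_size("1024KB"))  # 1048576
print(parse_size("512"))     # 512
```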
Example (HTTP):
export S4_ACCESS_KEY_ID=myaccesskey
export S4_SECRET_ACCESS_KEY=mysecretkey
export S4_DATA_DIR=/var/lib/s4
export S4_MAX_UPLOAD_SIZE=10GB
./target/release/s4-server
Using with AWS CLI
Configure AWS CLI to use S4:
aws configure set aws_access_key_id myaccesskey
aws configure set aws_secret_access_key mysecretkey
Basic operations:
# Create a bucket
aws --endpoint-url http://localhost:9000 s3 mb s3://mybucket
# Upload a file
aws --endpoint-url http://localhost:9000 s3 cp file.txt s3://mybucket/file.txt
# List objects
aws --endpoint-url http://localhost:9000 s3 ls s3://mybucket
# Download a file
aws --endpoint-url http://localhost:9000 s3 cp s3://mybucket/file.txt downloaded.txt
# Delete a file
aws --endpoint-url http://localhost:9000 s3 rm s3://mybucket/file.txt
# Delete a bucket
aws --endpoint-url http://localhost:9000 s3 rb s3://mybucket
Versioning
S4 supports S3-compatible object versioning to preserve, retrieve, and restore every version of every object.
# Enable versioning on bucket
aws s3api put-bucket-versioning \
--bucket mybucket \
--versioning-configuration Status=Enabled \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
# Upload file (version 1)
echo "version 1" | aws s3api put-object \
--bucket mybucket \
--key file.txt \
--body - \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
# Upload again (version 2)
echo "version 2" | aws s3api put-object \
--bucket mybucket \
--key file.txt \
--body - \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
# List all versions
aws s3api list-object-versions \
--bucket mybucket \
--prefix file.txt \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
# Get specific version
aws s3api get-object \
--bucket mybucket \
--key file.txt \
--version-id "ff495d34-c292-4af4-9d10-e186272010ed" \
first_version.txt \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
# Delete object (creates delete marker)
aws s3api delete-object \
--bucket mybucket \
--key file.txt \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
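The versioning semantics demonstrated above (every PUT appends a new version; DELETE writes a delete marker rather than erasing data) can be modeled in a few lines. This is a toy model with invented names, not S4's implementation.

```python
import uuid

class VersionedBucket:
    """Sketch of S3-style versioning: PUT appends a version,
    DELETE appends a delete marker. Illustrative only."""

    def __init__(self):
        self.versions = {}  # key -> list of (version_id, data or None)

    def put(self, key, data):
        vid = str(uuid.uuid4())
        self.versions.setdefault(key, []).append((vid, data))
        return vid

    def delete(self, key):
        # A delete marker is simply a version with no data.
        return self.put(key, None)

    def get(self, key, version_id=None):
        history = self.versions[key]
        if version_id is None:
            _vid, data = history[-1]
        else:
            _vid, data = next(v for v in history if v[0] == version_id)
        if data is None:
            raise KeyError(f"{key}: current version is a delete marker")
        return data

b = VersionedBucket()
v1 = b.put("file.txt", b"version 1")
b.put("file.txt", b"version 2")
b.delete("file.txt")          # adds a delete marker
print(b.get("file.txt", v1))  # b'version 1' is still retrievable
```

After the delete, a plain GET fails (the latest version is a marker), but every earlier version remains addressable by its version ID, mirroring the CLI session above.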
Lifecycle Policies
S4 supports automatic object expiration and cleanup based on lifecycle rules.
# Create lifecycle configuration file
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-logs",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "logs/"
      },
      "Expiration": {
        "Days": 30
      }
    },
    {
      "ID": "cleanup-old-versions",
      "Status": "Enabled",
      "Filter": {
        "Prefix": ""
      },
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 90
      }
    }
  ]
}
EOF
# Set lifecycle configuration
aws s3api put-bucket-lifecycle-configuration \
--bucket mybucket \
--lifecycle-configuration file://lifecycle.json \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
# Get lifecycle configuration
aws s3api get-bucket-lifecycle-configuration \
--bucket mybucket \
--endpoint-url https://127.0.0.1:9000 \
--no-verify-ssl
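The Expiration rule in the JSON above boils down to a simple check: a rule matches when an object's key starts with the rule's Prefix and the object is older than Days. The field names below follow the lifecycle JSON; the evaluation loop itself is an illustrative sketch, not S4's worker.

```python
from datetime import datetime, timedelta, timezone

def expired_keys(objects, rules, now=None):
    """Return keys matched by an enabled Expiration rule.
    objects: key -> last-modified datetime. Sketch only."""
    now = now or datetime.now(timezone.utc)
    doomed = []
    for key, last_modified in objects.items():
        for rule in rules:
            if rule.get("Status") != "Enabled":
                continue
            prefix = rule.get("Filter", {}).get("Prefix", "")
            days = rule.get("Expiration", {}).get("Days")
            if days is None or not key.startswith(prefix):
                continue
            if now - last_modified > timedelta(days=days):
                doomed.append(key)
                break
    return doomed

now = datetime.now(timezone.utc)
objects = {
    "logs/old.log": now - timedelta(days=45),   # matches expire-logs
    "logs/new.log": now - timedelta(days=5),    # too young
    "data/keep.bin": now - timedelta(days=400), # prefix does not match
}
rules = [{"ID": "expire-logs", "Status": "Enabled",
          "Filter": {"Prefix": "logs/"}, "Expiration": {"Days": 30}}]
print(expired_keys(objects, rules, now))  # ['logs/old.log']
```

In S4, this evaluation runs on the schedule set by S4_LIFECYCLE_INTERVAL_HOURS, and S4_LIFECYCLE_DRY_RUN logs the matches without deleting them.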