s3mini | Tiny & fast S3 client for Node and edge platforms.
s3mini is an ultra-lightweight TypeScript client (~20 KB minified, ≈15% more ops/s) for S3-compatible object storage. It runs on Node, Bun, Cloudflare Workers, and other edge platforms, and has been tested against Cloudflare R2, Backblaze B2, DigitalOcean Spaces, Ceph, Oracle, Garage, and MinIO. (No browser support!)
Features
- Light and fast: averages ≈15% more ops/s and weighs only ~20 KB (minified, not gzipped).
- Zero dependencies; supports AWS SigV4, pre-signed URLs, and SSE-C headers (tested on Cloudflare).
- Works on Cloudflare Workers, Node, and Bun; ideal for edge computing (no browser support).
- Only the essential S3 APIs: improved list, put, get, delete, and a few more.
- Supports multipart uploads.
- Tree-shakeable ES module.
- TypeScript support with type definitions.
- Documented with examples and tests, and widely exercised against various S3-compatible services. (Contributions welcome!)
- BYOS3: bring your own S3-compatible bucket (tested on Cloudflare R2, Backblaze B2, DigitalOcean Spaces, MinIO, Garage, Micro/Ceph, Oracle Object Storage, and Scaleway).
Contributions welcome!
<a href="https://github.com/good-lly/s3mini/issues/"> <img src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg" alt="Contributions welcome" /></a>
Table of Contents
- Installation
- Quick Start
- Configuration
- Uploading Objects
- Downloading Objects
- Listing Objects
- Deleting Objects
- Copy and Move
- Conditional Requests
- Pre-signed URLs
- Server-Side Encryption (SSE-C)
- API Reference
- Error Handling
- Cloudflare Workers
- Supported Operations
- Security Notes
- Contributions welcome!
- License
Installation
npm install s3mini
yarn add s3mini
pnpm add s3mini
Environment Variables
To use s3mini, set up environment variables for your provider credentials and S3 endpoint. Create a .env file in your project root directory; check out the example.env file for reference.
# macOS, Linux, or PowerShell
mv example.env .env
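A minimal .env might look like the following. The variable names S3_ACCESS_KEY and S3_SECRET_KEY match those read in the Quick Start below; the endpoint and region values are placeholders for your own provider:

```ini
# .env – credentials and endpoint for your S3-compatible provider
S3_ACCESS_KEY=your-access-key-id
S3_SECRET_KEY=your-secret-access-key
S3_ENDPOINT=https://account-id.r2.cloudflarestorage.com/bucket-name
S3_REGION=auto
```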
⚠️ Environment Support Notice
This library is designed to run in environments like Node.js, Bun, and Cloudflare Workers. It does not support browser environments due to the use of Node.js APIs and polyfills.
Quick Start
import { S3mini } from 's3mini';
const s3 = new S3mini({
accessKeyId: process.env.S3_ACCESS_KEY,
secretAccessKey: process.env.S3_SECRET_KEY,
  endpoint: 'https://account-id.r2.cloudflarestorage.com/bucket-name',
region: 'auto',
});
// Upload (auto-selects single PUT or multipart based on size)
await s3.putAnyObject('photos/vacation.jpg', fileBuffer, 'image/jpeg');
// Download
const data = await s3.getObject('photos/vacation.jpg');
// List
const objects = await s3.listObjects('/', 'photos/');
// Delete
await s3.deleteObject('photos/vacation.jpg');
Configuration
const s3 = new S3mini({
// Required
accessKeyId: string,
secretAccessKey: string,
endpoint: string, // Full URL: https://bucket.region.provider.com
// Optional
region: string, // Default: 'auto'
minPartSize: number, // Default: 8 MB; threshold for multipart
requestSizeInBytes: number, // Default: 8 MB; chunk size for range requests
requestAbortTimeout: number, // Timeout in ms (undefined = no timeout)
logger: Logger, // Custom logger with info/warn/error methods
fetch: typeof fetch, // Custom fetch implementation
});
Endpoint formats:
// Path-style (bucket in path)
'https://s3.us-east-1.amazonaws.com/my-bucket';
// Virtual-hosted-style (bucket in subdomain)
'https://my-bucket.s3.us-east-1.amazonaws.com';
// Provider-specific
'https://my-bucket.nyc3.digitaloceanspaces.com';
'https://account-id.r2.cloudflarestorage.com/my-bucket';
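The distinction matters because the bucket name is taken from a different part of the URL in each style. As a rough illustration (this helper is not part of s3mini's API), the two styles can be told apart by whether the URL carries a path segment:

```typescript
// Illustrative helper (not s3mini internals): classify an endpoint URL as
// path-style (bucket in the path) or virtual-hosted-style (bucket in the
// subdomain). A non-empty path segment means the bucket rides in the path.
function endpointStyle(endpoint: string): 'path' | 'virtual-hosted' {
  const url = new URL(endpoint);
  const path = url.pathname.replace(/\/+$/, ''); // drop trailing slashes
  return path.length > 0 ? 'path' : 'virtual-hosted';
}

endpointStyle('https://s3.us-east-1.amazonaws.com/my-bucket'); // 'path'
endpointStyle('https://my-bucket.s3.us-east-1.amazonaws.com'); // 'virtual-hosted'
```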
Uploading Objects
putObject – Simple Upload
Direct single-request upload. Use for small files or when you need fine control.
const response = await s3.putObject(
key: string, // Object key/path
data: string | Buffer | Uint8Array | Blob | File | ReadableStream,
contentType?: string, // Default: 'application/octet-stream'
ssecHeaders?: SSECHeaders, // Optional encryption headers
additionalHeaders?: AWSHeaders, // Optional x-amz-* headers
contentLength?: number, // Optional, auto-detected for most types
);
// Returns: Response object
const etag = response.headers.get('etag');
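Note that S3-compatible services return the ETag wrapped in double quotes (e.g. `"d41d8..."`), so strip the quotes before comparing it against a hex digest. A small helper for that (illustrative, not part of s3mini):

```typescript
// Illustrative helper (not s3mini's API): strip the surrounding quotes from
// an S3 ETag header value. Note that multipart-upload ETags carry a
// '-<partCount>' suffix and are not plain MD5 digests.
function normalizeEtag(etag: string | null): string | null {
  return etag === null ? null : etag.replace(/^"+|"+$/g, '');
}

normalizeEtag('"d41d8cd98f00b204e9800998ecf8427e"'); // 'd41d8cd98f00b204e9800998ecf8427e'
```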
Examples:
// String content
await s3.putObject('config.json', JSON.stringify({ key: 'value' }), 'application/json');
// Buffer/Uint8Array
const buffer = await fs.readFile('image.png');
await s3.putObject('images/photo.png', buffer, 'image/png');
// Blob (browser File API or Node 18+)
const blob = new Blob(['Hello'], { type: 'text/plain' });
await s3.putObject('hello.txt', blob, 'text/plain');
// With custom headers
await s3.putObject('data.bin', buffer, 'application/octet-stream', undefined, {
'x-amz-meta-author': 'john',
'x-amz-meta-version': '1.0',
});
putAnyObject – Smart Upload (Recommended)
Automatically chooses single PUT or multipart based on data size. This is the recommended method for most use cases.
const response = await s3.putAnyObject(
key: string,
data: string | Buffer | Uint8Array | Blob | File | ReadableStream,
contentType?: string,
ssecHeaders?: SSECHeaders,
additionalHeaders?: AWSHeaders,
contentLength?: number,
);
Behavior:
- ≤ minPartSize (8 MB default): single PUT request
- &gt; minPartSize: automatic multipart upload with:
  - parallel part uploads (4 concurrent by default)
  - automatic retries with exponential backoff (3 retries)
  - proper cleanup on failure (aborts incomplete uploads)
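The size-based branching can be sketched as a pure function. This is an illustration of the behavior described above (assuming the 8 MB default), not s3mini's actual implementation:

```typescript
// Sketch of putAnyObject's size-based branching (assumes the 8 MB default
// threshold; not s3mini internals). Data at or below the threshold goes out
// as a single PUT; anything larger is split into minPartSize chunks, with
// the last part allowed to be smaller.
const DEFAULT_MIN_PART_SIZE = 8 * 1024 * 1024;

function planUpload(
  totalBytes: number,
  minPartSize: number = DEFAULT_MIN_PART_SIZE,
): { multipart: boolean; parts: number } {
  if (totalBytes <= minPartSize) {
    return { multipart: false, parts: 1 };
  }
  return { multipart: true, parts: Math.ceil(totalBytes / minPartSize) };
}

planUpload(1024);              // { multipart: false, parts: 1 }
planUpload(500 * 1024 * 1024); // { multipart: true, parts: 63 }
```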
Examples:
// Small file β uses single PUT internally
await s3.putAnyObject('small.txt', 'Hello World');
// Large file β automatically uses multipart
const largeBuffer = await fs.readFile('video.mp4'); // 500MB
await s3.putAnyObject('videos/movie.mp4', largeBuffer, 'video/mp4');
// Blob (zero-copy slicing for memory efficiency)
const file = new File([largeArrayBuffer], 'data.bin');
await s3.putAnyObject('uploads/data.bin', file);
// ReadableStream (uploads as data arrives)
const stream = fs.createReadStream('huge-file.dat');
await s3.putAnyObject('backups/data.dat', Readable.toWeb(stream));
Memory efficiency with Blobs:
For large files, using Blob or File is more memory-efficient than Uint8Array:
// ❌ Loads the entire file into memory
const buffer = await fs.readFile('large-video.mp4');
await s3.putAnyObject('video.mp4', buffer);

// ✅ Zero-copy slicing: only reads data when uploading each part
const file = Bun.file('large-video.mp4'); // Bun
// or
const blob = await fs.openAsBlob('large-video.mp4'); // Node 19.8+
await s3.putAnyObject('video.mp4', file); // or blob
Manual Multipart Upload
Use the lower-level multipart APIs when you need advanced control over multipart uploads: progress tracking, resumable uploads, or custom concurrency.
// 1. Initialize upload
const uploadId = await s3.getMultipartUploadId(
key: string,
contentType?: string,
ssecHeaders?: SSECHeaders,
additionalHeaders?: AWSHeaders,
);