# VerifyFetch

Resumable, verified downloads for large browser files. Fail at 3.8GB, resume from 3.8GB.
## Install / Use
```sh
npm install verifyfetch
```
```js
import { verifyFetch } from 'verifyfetch';

const response = await verifyFetch('/model.bin', {
  sri: 'sha256-uU0nuZNNPgilLlLX2n2r+sSE7+N6U4DukIj3rOLvzek='
});
```
That's it. If the hash doesn't match, it throws. Your users are protected.
## Why VerifyFetch?

### The Problem

Loading large files in the browser is painful:
- **Memory explosion** - `crypto.subtle.digest()` buffers the entire file, so a 4GB AI model needs 4GB+ of RAM and crashes the browser.
- **No fail-fast** - Download 4GB, find corruption at the end, start over.
- **CDN compromises** - The polyfill.io supply-chain attack affected 100K+ sites.
### The Solution
| Feature | Native fetch | VerifyFetch |
|---------|---------------|-------------|
| Basic SRI verification | Yes | Yes |
| Constant memory | No (buffers all) | Yes (streaming WASM) |
| Fail-fast on corruption | No | Yes (chunked verification) |
| Progress callbacks | No | Yes |
| Multi-CDN failover | No | Yes |
| Service Worker mode | No | Yes |
## Quick Start

### Option 1: Direct Usage
```js
import { verifyFetch } from 'verifyfetch';

const response = await verifyFetch('/engine.wasm', {
  sri: 'sha256-uU0nuZNNPgilLlLX2n2r+sSE7+N6U4DukIj3rOLvzek='
});
```
### Option 2: Service Worker Mode (Zero-Code)

Add verification to every fetch without changing your app code:
```js
// sw.js (your Service Worker)
import { createVerifyWorker } from 'verifyfetch/worker';

createVerifyWorker({
  manifestUrl: '/vf.manifest.json',
  include: ['*.wasm', '*.bin', '*.onnx', '*.safetensors'],
  onFail: 'block'
});
```

```js
// app.js - No changes needed!
const model = await fetch('/model.bin'); // Automatically verified!
```
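The `include` patterns are shell-style globs. A rough sketch of how such a pattern can be matched against request paths (a hypothetical helper; the library's actual matching rules may differ):

```js
// Convert a simple glob like '*.wasm' into a RegExp that matches the end
// of a request path. Only '*' is treated as a wildcard here.
function globToRegExp(glob) {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '.*');                // '*' matches anything
  return new RegExp(`${escaped}$`);
}

console.log(globToRegExp('*.wasm').test('/engine.wasm')); // → true
console.log(globToRegExp('*.wasm').test('/config.json')); // → false
```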
### Option 3: Manifest Mode
```js
import { createVerifyFetcher } from 'verifyfetch';

const vf = await createVerifyFetcher({
  manifestUrl: '/vf.manifest.json'
});

const wasm = await vf.arrayBuffer('/engine.wasm'); // Hash looked up automatically
```
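As far as this README's examples imply, the manifest maps artifact paths to their hashes; the field names below beyond `artifacts`, `chunked`, and `root` are illustrative guesses, so inspect the output of `npx verifyfetch sign` for the real schema:

```json
{
  "artifacts": {
    "/engine.wasm": {
      "sri": "sha256-uU0nuZNNPgilLlLX2n2r+sSE7+N6U4DukIj3rOLvzek="
    },
    "/model.bin": {
      "chunked": {
        "root": "sha256-...",
        "chunkSize": 1048576,
        "chunks": ["sha256-...", "sha256-..."]
      }
    }
  }
}
```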
## For AI Models (WebLLM, Transformers.js, ONNX)
Loading multi-GB models in the browser? This is what VerifyFetch was built for.
The pain:

- Download a 4GB model, the network drops at 3.8GB, start over
- Native `crypto.subtle` needs 4GB of RAM just to verify a 4GB file
- No way to detect corruption until after downloading everything
The fix:

```js
import { verifyFetchResumable } from 'verifyfetch';

const model = await verifyFetchResumable('/phi-3-mini.gguf', {
  chunked: manifest.artifacts['/phi-3-mini.gguf'].chunked,
  persist: true, // Survives page reload
  onProgress: ({ percent, resumed }) => {
    console.log(`${percent}%${resumed ? ' (resumed)' : ''}`);
  }
});
```
- Memory: 2MB constant, not 4GB
- Resume: Network fails at 80%? Resume from 80%
- Fail-fast: Detect corruption immediately, not after downloading everything
WebLLM is considering native integrity support (#761). VerifyFetch works today.
## Generate Hashes

```sh
# Generate SHA-256 hashes
npx verifyfetch sign ./public/*.wasm ./models/*.bin

# With chunked verification (for large files - enables fail-fast)
npx verifyfetch sign --chunked --chunk-size 1048576 ./large-model.bin

# Output: vf.manifest.json
```
## Features

### Streaming Verification

For large files, process chunks as they download with constant memory:
```js
import { verifyFetchStream } from 'verifyfetch';

const { stream, verified } = await verifyFetchStream('/model.bin', {
  sri: 'sha256-...'
});

// Process chunks immediately - constant memory usage
for await (const chunk of stream) {
  await uploadToGPU(chunk);
}

// Verification completes after stream ends
await verified; // Throws IntegrityError if hash doesn't match
```
### Resumable Downloads (NEW in v1.0)

The killer feature: download fails at 3.8GB of 4GB? Resume from 3.8GB, not from zero.
```js
import { verifyFetchResumable } from 'verifyfetch';

// First attempt - fails at 80%
const result = await verifyFetchResumable('/model.safetensors', {
  chunked: manifest.artifacts['/model.safetensors'].chunked,
  onProgress: ({ chunksVerified, totalChunks, resumed }) => {
    console.log(`${chunksVerified}/${totalChunks} chunks${resumed ? ' (resumed)' : ''}`);
  }
});

// Page reload or network failure...

// Second attempt - automatically resumes from last verified chunk
const result2 = await verifyFetchResumable('/model.safetensors', {
  chunked: manifest.artifacts['/model.safetensors'].chunked,
  onResume: (state) => {
    console.log(`Resuming from chunk ${state.verifiedChunks}/${state.totalChunks}`);
  }
});
```
How it works:

- Each chunk is verified and stored in IndexedDB as it downloads
- On failure/reload, existing verified chunks are loaded from storage
- HTTP Range requests fetch only the remaining chunks
- Storage is cleaned up automatically on completion
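The Range arithmetic behind resuming is simple: with fixed-size chunks, the next byte to fetch is `verifiedChunks * chunkSize`. A sketch (hypothetical helper, not the library's API):

```js
// Compute the HTTP Range header for the bytes still missing, given how
// many fixed-size chunks have already been verified.
function resumeRange(verifiedChunks, chunkSize, totalBytes) {
  const start = verifiedChunks * chunkSize;
  if (start >= totalBytes) return null; // nothing left to fetch
  return `bytes=${start}-${totalBytes - 1}`;
}

// 3040 of 4000 1-MiB chunks verified → resume at byte 3187671040
console.log(resumeRange(3040, 1048576, 4000 * 1048576));
// → bytes=3187671040-4194303999
```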
### Chunked Verification (Fail-Fast)

Stop downloading immediately if corruption is detected:
```js
import { createChunkedVerifier, verifyFetchStream } from 'verifyfetch';

// Generate a manifest with chunked hashes first:
//   npx verifyfetch sign --chunked ./large-model.bin

// Verify chunk-by-chunk as data arrives
const chunked = manifest.artifacts['/model.bin'].chunked;
const verifier = createChunkedVerifier(chunked);
const { stream } = await verifyFetchStream('/model.bin', { sri: chunked.root });

for await (const chunk of stream) {
  const result = await verifier.verifyNextChunk(chunk);
  if (!result.valid) {
    // Don't download 4GB if byte 0 is wrong!
    throw new Error(`Chunk ${result.index} corrupt - stopping immediately`);
  }
  await processChunk(chunk);
}
```
How it works: Each chunk is hashed independently. If chunk 5 of 4000 is corrupt, you find out immediately - not after downloading the other 3995 chunks.
### Multi-CDN Failover

Automatically try backup sources if one fails:
```js
import { verifyFetchFromSources } from 'verifyfetch';

const response = await verifyFetchFromSources(
  'sha256-abc123...',
  '/model.bin',
  {
    sources: [
      'https://cdn1.example.com',
      'https://cdn2.example.com',
      'https://backup.example.com'
    ],
    strategy: 'race' // 'sequential' | 'race' | 'fastest'
  }
);
```
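The `'sequential'` strategy amounts to trying each source in order and surfacing the last error if all fail. A minimal sketch with an injectable fetch (hypothetical signature, for illustration only):

```js
// Try each base URL in order; return the first successful result.
async function fetchSequential(sources, path, fetchImpl) {
  let lastError;
  for (const base of sources) {
    try {
      return await fetchImpl(base + path);
    } catch (err) {
      lastError = err; // fall through to the next source
    }
  }
  throw lastError;
}

// Usage with a stubbed fetch: cdn1 is down, cdn2 answers.
const stub = async (url) => {
  if (url.includes('cdn1')) throw new Error('down');
  return `ok:${url}`;
};

fetchSequential(
  ['https://cdn1.example.com', 'https://cdn2.example.com'],
  '/model.bin',
  stub
).then((r) => console.log(r)); // → ok:https://cdn2.example.com/model.bin
```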
### Progress Tracking

```js
await verifyFetch('/large-model.bin', {
  sri: 'sha256-...',
  onProgress: (bytes, total) => {
    const percent = total ? Math.round(bytes / total * 100) : 0;
    console.log(`Downloading: ${percent}%`);
  }
});
```
### Fallback URLs

```js
await verifyFetch('/main.wasm', {
  sri: 'sha256-...',
  onFail: { fallbackUrl: '/backup.wasm' }
});
```
## CLI Commands

```sh
# Generate SRI hashes
npx verifyfetch sign <files...>

# Generate with chunked hashes (for large files)
npx verifyfetch sign --chunked --chunk-size 1048576 <files...>

# Verify files match manifest (for CI)
npx verifyfetch enforce --manifest ./vf.manifest.json

# Add to Next.js project
npx verifyfetch init --next
```
## API Reference

### `verifyFetch(url, options)`

Basic verified fetch.
```js
const response = await verifyFetch('/file.bin', {
  sri: 'sha256-...',          // Required: SRI hash
  onFail: 'block',            // 'block' | 'warn' | { fallbackUrl }
  onProgress: (bytes, total) => {},
  fetchImpl: fetch            // Custom fetch implementation
});
```
### `verifyFetchStream(url, options)`

Streaming verification with constant memory.
```js
const { stream, verified, totalBytes } = await verifyFetchStream('/file.bin', {
  sri: 'sha256-...',
  onProgress: (bytes, total) => {}
});

for await (const chunk of stream) {
  // Process immediately
}

await verified; // Throws if verification fails
```
### `verifyFetchFromSources(sri, path, options)`

Multi-CDN failover.
```js
const response = await verifyFetchFromSources(
  'sha256-...',
  '/file.bin',
  {
    sources: ['https://cdn1.com', 'https://cdn2.com'],
    strategy: 'sequential', // 'sequential' | 'race' | 'fastest'
    timeout: 30000,
    onSourceError: (source, error) => {}
  }
);
```
### `createVerifyFetcher(options)`

Manifest-aware fetcher.
```js
const vf = await createVerifyFetcher({
  manifestUrl: '/vf.manifest.json',
  baseUrl: 'https://cdn.example.com' // Optional
});

await vf.arrayBuffer('/file.wasm');
await vf.json('/config.json');
await vf.text('/data.txt');
```
### `createVerifyWorker(options)` (Service Worker)

Zero-code verification via Service Worker.
```js
// In sw.js
import { createVerifyWorker } from 'verifyfetch/worker';

createVerifyWorker({
  manifestUrl: '/vf.manifest.json',
  include: ['*.wasm', '*.bin', '*.onnx'],
  exclude: ['*.json'],
  onFail: 'block'
});
```
