Hypercore

See the full API docs at docs.pears.com

Hypercore is a secure, distributed append-only log.

Built for sharing large datasets and streams of real-time data.

Features

  • Sparse replication. Only download the data you are interested in.
  • Realtime. Get the latest updates to the log fast and securely.
  • Performant. Uses a simple flat file structure to maximize I/O performance.
  • Secure. Uses signed merkle trees to verify log integrity in real time.
  • Modular. Hypercore aims to do one thing and one thing well - distributing a stream of data.

Note that the latest release is Hypercore 10, which adds support for truncate and many other things. Version 10 is not compatible with version 9 and earlier, but is considered LTS, meaning the storage format and wire protocol are forward compatible with future versions.

Install

npm install hypercore

API

const core = new Hypercore(storage, [key], [options])

Make a new Hypercore instance.

storage should be set to a directory where you want to store the data and core metadata.

const core = new Hypercore('./directory') // store data in ./directory

Alternatively you can pass a Hypercore Storage or use a Corestore if you want to make many Hypercores efficiently. Note that random-access-storage is no longer supported.

key can be set to a Hypercore key, which is a hash of the Hypercore's internal auth manifest describing how to validate the Hypercore. If you do not set this, it will be loaded from storage. If nothing is stored yet, a new auth manifest will be generated, giving you local write access to the core.

options include:

{
  createIfMissing: true, // create a new Hypercore key pair if none was present in storage
  overwrite: false, // overwrite any old Hypercore that might already exist
  force: false, // Advanced option. Will force overwrite even if the header's key & the passed key don't match
  valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to binary
  encodeBatch: batch => { ... }, // optionally apply an encoding to complete batches
  keyPair: kp, // optionally pass the public key and secret key as a key pair
  encryption: { key: buffer }, // the block encryption key
  onwait: () => {}, // hook that is called if gets are waiting for download
  timeout: 0, // wait at max some milliseconds (0 means no timeout)
  writable: true, // set to false to disable appends and truncates
  inflightRange: null, // Advanced option. Set to [minInflight, maxInflight] to change the min and max inflight blocks per peer when downloading.
  ongc: (session) => { ... }, // A callback called when the session is garbage collected
  onseq: (index, core) => { ... }, // A callback called when core.get(index) is called.
  notDownloadingLinger: 20000, // How many milliseconds to keep the connection open after downloading finishes. Defaults to a random value between 20-40s
  allowFork: true, // Enables updating core when it forks
  userData: { foo: 'bar' }, // An object to assign to the local User Data storage
  manifest: undefined, // Advanced option. Set the manifest when creating the hypercore. See Manifest section for more info
  preload: undefined, // Advanced option. A promise that returns constructor options overrides before the core is opened
  storage: undefined, // An alternative to passing storage as a dedicated argument
  key: null, // An alternative to passing key as a dedicated argument
}

You can also set valueEncoding to any compact-encoding instance.

valueEncodings will be applied to individual blocks, even if you append batches. If you want to control encoding at the batch-level, you can use the encodeBatch option, which is a function that takes a batch and returns a binary-encoded batch. If you provide a custom valueEncoding, it will not be applied prior to encodeBatch.

The user may provide a custom encryption module as opts.encryption, which should satisfy the HypercoreEncryption interface.

User Data

User Data is a key-value store that is persisted locally and is not replicated with the Hypercore. This is useful as a quick store for data only used by the current peer. For example, autobase uses User Data to store information such as encryption keys and connections between a peer's local writer and the Autobase's bootstrap core.

Keys are always strings and values can be strings or buffers.

See core.setUserData(key, value) and core.getUserData(key) for updating User Data.

Manifest

The manifest is metadata describing how a hypercore is authenticated, including the signers (only one by default) and the prologue. It has the following structure:

{
  version: 1,                       // Version of the manifest format
  hash: 'blake2b',                  // Only Blake2b is supported currently
  allowPatch: false,                // Whether the hypercore can be "patched" to change the signers
  quorum: (signers.length / 2) + 1, // How many signers needed to verify a block
  signers,                          // Array of signers for the core
  prologue: null,                   // The tree hash and length of the core
  linked: null,                     // Array of associated core keys. Only supported in versions >= 2
  userData: null                    // Arbitrary buffer for User Data integral to the core. Only supported in versions >= 2
}

The linked property in the manifest is used to reference other hypercores that are associated with the current core. For example in autobase the encryption view is loaded from the linked property in the system view core. Note, as with everything in the manifest, changing the linked property changes the core's key.

Signers are an array of objects with the following structure:

{
  signature: 'ed25519',               // The signature method
  namespace: caps.DEFAULT_NAMESPACE,  // A cryptographic namespace for the signature
  publicKey: Buffer                   // Signer's public key
}
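To make the shape concrete, a hypothetical single-signer manifest could be written as follows (the namespace and publicKey values here are placeholders, not working defaults):

```javascript
// illustrative only: these field values are assumptions, not library defaults
const manifest = {
  version: 1,
  hash: 'blake2b',
  allowPatch: false,
  quorum: 1, // a single signer, so one signature verifies a block
  signers: [{
    signature: 'ed25519',
    namespace: Buffer.alloc(32), // hypothetical 32-byte namespace
    publicKey: Buffer.alloc(32)  // replace with a real ed25519 public key
  }],
  prologue: null
}

// a manifest like this can be passed via the constructor's `manifest` option
```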

const { length, byteLength } = await core.append(block, options = {})

Append a block of data (or an array of blocks) to the core. Returns the new length and byte length of the core.

// simple call append with a new block of data
await core.append(Buffer.from('I am a block of data'))

// pass an array to append multiple blocks as a batch
await core.append([Buffer.from('batch block 1'), Buffer.from('batch block 2')])

options include:

{
  writable: false, // when enabled, skips the local writable check; does not make a non-writable core writable
  maxLength: undefined, // the maximum resulting length of the core after appending
  keyPair: core.keyPair, // key pair used to sign the block(s)
  signature: null // signature for the block(s)
}

const block = await core.get(index, [options])

Get a block of data. If the data is not available locally this method will prioritize and wait for the data to be downloaded.

// get block #42
const block = await core.get(42)

// get block #43, but only wait 5s
const blockIfFast = await core.get(43, { timeout: 5000 })

// get block #44, but only if we have it locally
const blockLocal = await core.get(44, { wait: false })

options include:

{
  wait: true, // wait for block to be downloaded
  onwait: () => {}, // hook that is called if the get is waiting for download
  timeout: 0, // wait at max some milliseconds (0 means no timeout)
  activeRequests: undefined, // Advanced option. Pass BlockRequest for replicating the block
  valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to the core's valueEncoding
  decrypt: true, // automatically decrypts the block if encrypted
  raw: false, // set to true to return the block without decoding
}

const has = await core.has(start, [end])

Check if the core has all blocks between start and end.

const updated = await core.update([options])

Waits for initial proof of the new core length until all findingPeers calls have finished.

const updated = await core.update()

console.log('core was updated?', updated, 'length is', core.length)

options include:

{
  wait: false,
  activeRequests: undefined, // Advanced option. Pass requests for replicating blocks
  force: false, // Force an update even if core is writable.
}

Use core.findingPeers() or { wait: true } to make await core.update() blocking.

const [index, relativeOffset] = await core.seek(byteOffset, [options])

Seek to a byte offset.

Returns [index, relativeOffset], where index is the data block the byteOffset is contained in and relativeOffset is the relative byte offset in the data block.

await core.append([Buffer.from('abc'), Buffer.from('d'), Buffer.from('efg')])

const first = await core.seek(1) // returns [0, 1]
const second = await core.seek(3) // returns [1, 0]
const third = await core.seek(5) // returns [2, 1]
options include:

{
  wait: true, // wait for data to be downloaded
  timeout: 0, // wait at max some milliseconds (0 means no timeout)
  activeRequests: undefined // Advanced option. Pass requests for replicating blocks
}

const stream = core.createReadStream([options])

Make a read stream to read a range of data out at once.

// read the full core
const fullStream = core.createReadStream()

// read from block 10-14
const partialStream = core.createReadStream({ start: 10, end: 15 })

// pipe the stream somewhere using the .pipe method on Node.js or consume it as
// an async iterator

for await (const data of fullStream) {
  console.log('data:', data)
}

options include:

{
  start: 0,
  end: core.length,
  wait: true, // wait for data to be downloaded
  timeout: 0 // wait at max some milliseconds (0 means no timeout)
}