HDF5 Go Library

Pure Go implementation of the HDF5 file format - No CGo required


A modern, pure Go library for reading and writing HDF5 files without CGo dependencies. Compatible with HDF5 2.0.0, with production-ready read and write support.


✨ Features

  • Pure Go - No CGo, no C dependencies, cross-platform
  • Modern Design - Built with Go 1.25+ best practices
  • HDF5 2.0.0 Compatibility - Read/Write: v0, v2, v3 superblocks | Format Spec v4.0 with checksum validation
  • Full Dataset Reading - Compact, contiguous, chunked layouts with GZIP
  • Rich Datatypes - Integers, floats, strings (fixed/variable), compounds
  • Memory Efficient - Buffer pooling and smart memory management
  • Production Ready - Read support feature-complete
  • ✍️ Comprehensive Write Support - Datasets, groups, attributes + Smart Rebalancing!

🚀 Quick Start

Installation

go get github.com/scigolib/hdf5

Basic Usage

package main

import (
    "fmt"
    "log"
    "github.com/scigolib/hdf5"
)

func main() {
    // Open HDF5 file
    file, err := hdf5.Open("data.h5")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    // Walk through file structure
    file.Walk(func(path string, obj hdf5.Object) {
        switch v := obj.(type) {
        case *hdf5.Group:
            fmt.Printf("📁 %s (%d children)\n", path, len(v.Children()))
        case *hdf5.Dataset:
            fmt.Printf("📊 %s\n", path)
        }
    })
}

Output:

📁 / (2 children)
📊 /temperature
📁 /experiments/ (3 children)

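Beyond walking the tree, the usual next step is pulling values out of a dataset. The sketch below reuses the confirmed `Open`/`Walk` API to locate a dataset; the `Data()` accessor on `*hdf5.Dataset` is an illustrative assumption, not a confirmed method name - check the GoDoc for the actual read API.

```go
package main

import (
	"fmt"
	"log"

	"github.com/scigolib/hdf5"
)

func main() {
	file, err := hdf5.Open("data.h5")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	// Walk the tree and read the first dataset we encounter.
	// Data() is a hypothetical accessor; the real API may differ.
	file.Walk(func(path string, obj hdf5.Object) {
		if ds, ok := obj.(*hdf5.Dataset); ok {
			values, err := ds.Data()
			if err != nil {
				log.Printf("read %s: %v", path, err)
				return
			}
			fmt.Printf("%s -> %v\n", path, values)
		}
	})
}
```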


📚 Documentation

  • Getting Started
  • Reference
  • Advanced


⚡ Performance Tuning

When many attributes are deleted, B-trees can become sparse, wasting disk space and slowing searches. This library offers four rebalancing strategies:

1. Default (No Rebalancing)

Fast deletions, but the B-tree may become sparse

// No options = no rebalancing (like HDF5 C library)
fw, err := hdf5.CreateForWrite("data.h5", hdf5.CreateTruncate)

Use for: Append-only workloads, small files (<100MB)


2. Lazy Rebalancing (10-100x faster than immediate rebalancing)

Batch processing: rebalances when a threshold is reached

fw, err := hdf5.CreateForWrite("data.h5", hdf5.CreateTruncate,
    hdf5.WithLazyRebalancing(
        hdf5.LazyThreshold(0.05),         // Trigger at 5% underflow
        hdf5.LazyMaxDelay(5*time.Minute), // Force rebalance after 5 min
    ),
)

Use for: Batch deletion workloads, medium/large files (100-500MB)

Performance: ~2% overhead, occasional 100-500ms pauses


3. Incremental Rebalancing (ZERO pause)

Background processing: rebalances in a background goroutine

fw, err := hdf5.CreateForWrite("data.h5", hdf5.CreateTruncate,
    hdf5.WithLazyRebalancing(),  // Prerequisite!
    hdf5.WithIncrementalRebalancing(
        hdf5.IncrementalBudget(100*time.Millisecond),
        hdf5.IncrementalInterval(5*time.Second),
    ),
)
defer fw.Close()  // Stops background goroutine

Use for: Large files (>500MB), continuous operations, TB-scale data

Performance: ~4% overhead, zero user-visible pause


4. Smart Rebalancing (Auto-Pilot)

Auto-tuning: the library detects the workload and selects the optimal mode

fw, err := hdf5.CreateForWrite("data.h5", hdf5.CreateTruncate,
    hdf5.WithSmartRebalancing(
        hdf5.SmartAutoDetect(true),
        hdf5.SmartAutoSwitch(true),
    ),
)

Use for: Unknown workloads, mixed operations, research environments

Performance: ~6% overhead, adapts automatically


Performance Comparison

| Mode | Deletion Speed | Pause Time | Use Case |
|------|----------------|------------|----------|
| Default | 100% (baseline) | None | Append-only, small files |
| Lazy | 95% (10-100x faster than immediate!) | 100-500ms batches | Batch deletions |
| Incremental | 92% | None (background) | Large files, continuous ops |
| Smart | 88% | Varies | Unknown workloads |



🎯 Current Status

HDF5 2.0.0 Ready with 88%+ library coverage! 🎉

✅ Fully Implemented

  • File Structure:

    • Superblock parsing (v0, v2, v3) with checksum validation (CRC32)
    • Object headers v1 (legacy HDF5 < 1.8) with continuations
    • Object headers v2 (modern HDF5 >= 1.8) with continuations
    • Groups (traditional symbol tables + modern object headers)
    • B-trees (leaf + non-leaf nodes for large files)
    • Local heaps (string storage)
    • Global Heap (variable-length data)
    • Fractal heap (direct blocks for dense attributes) ✨ NEW
  • Dataset Reading:

    • Compact layout (data in object header)
    • Contiguous layout (sequential storage)
    • Chunked layout with B-tree indexing
    • GZIP/Deflate compression
    • LZF compression (h5py/PyTables compatible) ✨ NEW
    • Filter pipeline for compressed data
  • Datatypes (Read + Write):

    • Basic types: int8-64, uint8-64, float32/64
    • AI/ML types: FP8 (E4M3, E5M2), bfloat16 - IEEE 754 compliant ✨ NEW
    • Strings: Fixed-length (null/space/null-padded), variable-length (via Global Heap)
    • Advanced types: Arrays, Enums, References (object/region), Opaque
    • Compound types: Struct-like with nested members
  • Attributes:

    • Compact attributes (in object header) ✨ NEW
    • Dense attributes (fractal heap foundation) ✨ NEW
    • Attribute reading for groups and datasets ✨ NEW
    • Full attribute API (Group.Attributes(), Dataset.Attributes()) ✨ NEW
  • Navigation: Full file tree traversal via Walk()

  • Code Quality:

    • Test coverage: 88%+ library packages (target: >70%) ✅
    • Lint issues: 0 (34+ linters) ✅
    • TODO items: 0 (all resolved) ✅
    • Official HDF5 test suite: 433 files, 100% pass rate ✅
  • Security ✨ NEW:

    • 4 CVEs fixed (CVE-2025-7067, CVE-2025-6269, CVE-2025-2926, CVE-2025-44905) ✅
    • Overflow protection throughout (SafeMultiply, buffer validation) ✅
    • Security limits: 1GB chunks, 64MB attributes, 16MB strings ✅
    • 39 security test cases, all passing ✅
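The attribute API named above (`Group.Attributes()`, `Dataset.Attributes()`) can be combined with `Walk()` to dump every attribute in a file. Those two method names come from the feature list; their return shape (a map of name to value here) is an assumption for illustration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/scigolib/hdf5"
)

func main() {
	file, err := hdf5.Open("data.h5")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	// List every attribute on every dataset in the file.
	// The map return type of Attributes() is assumed, not confirmed.
	file.Walk(func(path string, obj hdf5.Object) {
		if ds, ok := obj.(*hdf5.Dataset); ok {
			for name, value := range ds.Attributes() {
				fmt.Printf("%s @%s = %v\n", path, name, value)
			}
		}
	})
}
```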

✍️ Write Support - Feature Complete!

Production-ready write support with all features!

Dataset Operations:

  • ✅ Create datasets (all layouts: contiguous, chunked, compact)
  • ✅ Write data (all datatypes including compound)
  • ✅ Dataset resizing with unlimited dimensions
  • ✅ Variable-length datatypes: strings, ragged arrays
  • ✅ Compression (GZIP, Shuffle, Fletcher32)
  • ✅ Array and enum datatypes
  • ✅ References and opaque types
  • ✅ Attribute writing (dense & compact storage)
  • ✅ Attribute modification/deletion
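A minimal write sketch tying these operations together. `CreateForWrite` and `CreateTruncate` appear elsewhere in this README; the `CreateDataset` and `SetAttribute` calls are hypothetical names standing in for whatever the real creation API is - consult the GoDoc before copying this.

```go
package main

import (
	"log"

	"github.com/scigolib/hdf5"
)

func main() {
	// Confirmed by this README: CreateForWrite + CreateTruncate.
	fw, err := hdf5.CreateForWrite("out.h5", hdf5.CreateTruncate)
	if err != nil {
		log.Fatal(err)
	}
	defer fw.Close()

	// Hypothetical calls: create a dataset, then attach an attribute.
	ds, err := fw.CreateDataset("/temperature", []float64{21.5, 22.1, 21.9})
	if err != nil {
		log.Fatal(err)
	}
	if err := ds.SetAttribute("units", "celsius"); err != nil {
		log.Fatal(err)
	}
}
```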

Links:

  • ✅ Hard links (full support)
  • ✅ Soft links (symbolic references - full support)
  • ✅ External links (cross-file references - full support)
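The README confirms hard, soft, and external links are supported, but not the method names; the calls below are purely illustrative guesses at how link creation might look on a write handle.

```go
// Hypothetical link-creation calls on a write handle fw
// (obtained via hdf5.CreateForWrite); names are assumptions.
fw.CreateHardLink("/data", "/alias")               // second name for same object
fw.CreateSoftLink("/data", "/soft")                // symbolic path reference
fw.CreateExternalLink("other.h5", "/remote", "/ext") // cross-file reference
```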

Read Enhancements:

  • ✅ Hyperslab selection (data slicing) - 10-250x faster!
  • ✅ Efficient partial dataset reading
  • ✅ Stride and block support
  • ✅ Chunk-aware reading (reads ONLY needed chunks)
  • ChunkIterator API - Memory-efficient iteration over large datasets
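In HDF5 terms, a hyperslab selection is a start/count (optionally stride/block) region of a dataset; chunk-aware reading means only the chunks overlapping that region are decompressed. A sketch of what partial reading might look like here, with every identifier except `hdf5.Open` being an assumed name:

```go
// Hypothetical hyperslab read from a 2-D dataset ds:
// start at row 100, column 0, and take a 10x50 block.
// NewHyperslab/ReadSelection are illustrative names only.
sel := hdf5.NewHyperslab(
	[]uint64{100, 0}, // start offset per dimension
	[]uint64{10, 50}, // element count per dimension
)
part, err := ds.ReadSelection(sel) // touches only overlapping chunks
if err != nil {
	log.Fatal(err)
}
_ = part
```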

Validation:

  • ✅ Official HDF5 Test Suite: 100% pass rate (378/378 files)
  • ✅ Production quality confirmed

Filter Support & Future Enhancements:

  • ✅ LZF filter (read + write, Pure Go) ✨ NEW
  • ✅ BZIP2 filter (read only, stdlib)
  • ⚠️ SZIP filter (stub - requires libaec)
  • ⚠️ Thread-safety with mutexes + SWMR mode
  • ⚠️ Parallel I/O
