# GoHotPool

A PostgreSQL-inspired byte buffer pool for Go with advanced eviction strategies, a pin/usage-count mechanism, and dirty-buffer tracking.

> Keep your hot data hot, sweep the cold away.
## Features

### 🎯 PostgreSQL-Inspired Design

- **Clock sweep eviction**: an eviction algorithm that keeps frequently used buffers in memory
- **Pin/usage-count mechanism**: prevents active buffers from being evicted and tracks access patterns
- **Dirty buffer tracking**: monitors which buffers have been modified
- **Ring buffers**: isolated buffer pools for bulk operations, preventing cache pollution

### 📊 Rich Statistics

- Cache hit/miss rates
- Eviction counts and clock sweep operations
- Per-P cache hit tracking
- Dirty and pinned buffer counts

### ⚡ Performance (v2.0, optimized)

- Per-P (processor-local) caching: like `sync.Pool`, near-zero contention
- Lock-free fast path using atomic operations
- Sharded pool to reduce mutex contention
- Zero allocations on buffer reuse
- Near-`sync.Pool` performance in single-goroutine scenarios
## Installation

```sh
go get github.com/MYK12397/gohotpool
```
## Quick Start

```go
package main

import (
	"fmt"

	"github.com/MYK12397/gohotpool"
)

func main() {
	// Use the default pool
	buf := gohotpool.Get()
	defer gohotpool.Put(buf)

	buf.WriteString("Hello, World!")
	fmt.Println(buf.String())
}
```
## Advanced Usage

### Custom Pool Configuration

```go
config := gohotpool.Config{
	PoolSize:          1024,  // Number of buffers
	ShardCount:        16,    // Number of shards (power of 2)
	EnableRingBuffer:  true,  // Enable ring buffer for bulk ops
	RingBufferSize:    32,    // Ring buffer size
	DefaultBufferSize: 4096,  // Initial buffer capacity (4 KB)
	TrackStats:        false, // Disable for maximum performance
}
pool := gohotpool.NewPool(config)
```
### Clock Sweep Eviction

The pool uses PostgreSQL's clock sweep algorithm to evict cold buffers while protecting hot ones:

```go
pool := gohotpool.NewPool(gohotpool.DefaultConfig())

// Frequently accessed buffers accumulate higher usage counts.
for i := 0; i < 10; i++ {
	buf := pool.Get() // Same buffer likely returned; its usage count increases
	buf.WriteString("hot data")
	pool.Put(buf)
}
// Less frequently accessed buffers are evicted first.
```

How it works:

- Each buffer has a usage count (0–5)
- On access, the usage count increases (capped at 5)
- During eviction, a clock hand sweeps through the buffers
- Unpinned buffers with usage count > 0 have their count decremented
- Buffers with usage count 0 are evicted
### Pin/Unpin Protection

Protect buffers from eviction while they are in use:

```go
buf := pool.Get()

// Prevent eviction during a critical operation
pool.Pin(buf)
defer pool.Unpin(buf)

// Do long-running work...
performSlowOperation(buf)

pool.Put(buf)
```
### Dirty Buffer Tracking

Track which buffers have been modified:

```go
buf := pool.Get()

buf.WriteString("data")
fmt.Println(buf.IsDirty()) // true

buf.Reset()
fmt.Println(buf.IsDirty()) // false

pool.Put(buf)
```
### Ring Buffers for Bulk Operations

Prevent bulk operations from polluting the main cache:

```go
config := gohotpool.Config{
	PoolSize:         1000,
	EnableRingBuffer: true,
	RingBufferSize:   50, // Small ring reserved for bulk ops
}
pool := gohotpool.NewPool(config)

// Bulk insert: uses the ring buffer
for i := 0; i < 10000; i++ {
	buf := pool.GetCold()
	buf.WriteString(fmt.Sprintf("record %d", i))
	// Write to disk/network...
	pool.PutCold(buf) // Return to the ring buffer
}

// The main cache stays hot for regular queries
stats := pool.GetStats()
fmt.Printf("Main pool evictions: %d\n", stats.Evictions) // Very low
```
## Real-World Examples

### HTTP Response Buffering

```go
var pool = gohotpool.NewPool(gohotpool.DefaultConfig())

func handleRequest(w http.ResponseWriter, r *http.Request) {
	buf := pool.Get()
	defer pool.Put(buf)

	// Build the response body in a pooled buffer.
	// (http.ResponseWriter emits the status line and headers itself,
	// so only the body belongs in the buffer.)
	if err := json.NewEncoder(buf).Encode(responseData); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write(buf.Bytes())
}
```
### CSV Processing

```go
func processLargeCSV(filename string) error {
	pool := gohotpool.NewPool(gohotpool.Config{
		EnableRingBuffer: true,
		RingBufferSize:   100,
	})

	file, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		// Use the ring buffer for bulk processing
		buf := pool.GetCold()
		buf.WriteString(scanner.Text())
		// Process the row...
		processRow(buf.Bytes())
		pool.PutCold(buf)
	}
	return scanner.Err()
}
```
### Log Aggregation

```go
type LogAggregator struct {
	pool    *gohotpool.Pool
	storage io.Writer
}

func (la *LogAggregator) AddLog(message string) {
	buf := la.pool.Get()
	defer la.pool.Put(buf)

	// Format the log entry
	buf.WriteString(time.Now().Format(time.RFC3339))
	buf.WriteString(" ")
	buf.WriteString(message)

	// Frequently accessed recent logs stay in the cache
	la.storage.Write(buf.Bytes())
}
```
## Statistics and Monitoring

```go
config := gohotpool.DefaultConfig()
config.TrackStats = true // Stats are disabled by default for performance
pool := gohotpool.NewPool(config)

// ... use pool ...

stats := pool.GetStats()
if total := stats.Hits + stats.Misses; total > 0 {
	fmt.Printf("Cache hit rate: %.2f%%\n", float64(stats.Hits)/float64(total)*100)
}
fmt.Printf("Per-P cache hits: %d\n", stats.PerPHits)
fmt.Printf("Evictions: %d\n", stats.Evictions)
fmt.Printf("Clock sweeps: %d\n", stats.ClockSweeps)
fmt.Printf("Ring buffer uses: %d\n", stats.RingBufferUses)
```
## Performance Comparison

Benchmark results (Apple M4 Pro, Go 1.23.1):

```
goos: darwin
goarch: arm64
cpu: Apple M4 Pro

Benchmark                               ns/op   allocs/op   vs sync.Pool
------------------------------------------------------------------------
BenchmarkOptimized_NoStats_Parallel      5.2        0       ~4x slower
BenchmarkOptimized_PerPCache_Parallel    4.5        0       ~4x slower
BenchmarkSyncPool_Parallel               1.2        0       baseline
BenchmarkOptimized_SingleGoroutine       7.7        0       same speed ✓
BenchmarkSyncPool_SingleGoroutine        7.5        0       baseline
BenchmarkOptimized_HTTPResponse          7.9        0       ~4x slower
BenchmarkSyncPool_HTTPResponse           2.0        0       baseline
BenchmarkOptimized_BulkCSV               4.9        0       faster ✓
BenchmarkSyncPool_BulkCSV                8.5        0       baseline
BenchmarkOptimized_HighContention        4.3        0       ~4x slower
BenchmarkOptimized_LowContention         5.3        0       ~4x slower
BenchmarkOptimized_ManyShards            6.2        0       ~5x slower
```
### Performance Highlights 🚀

| Scenario | gohotpool | sync.Pool | Winner |
|----------|-----------|-----------|--------|
| Single-goroutine | 7.7 ns | 7.5 ns | Tie |
| Bulk CSV | 4.9 ns | 8.5 ns | gohotpool |
| High contention | 4.3 ns | ~1 ns | sync.Pool |
| Parallel (no stats) | 5.2 ns | 1.2 ns | sync.Pool |
### 📊 When Each Pool Makes Sense

| Use Case | Best Choice | Why |
|----------|-------------|-----|
| Single-goroutine workloads | Either | Same performance |
| Bulk/batch operations | gohotpool | Ring buffer; faster in benchmarks |
| Hot-data workloads | gohotpool | Keeps frequently accessed buffers |
| Need statistics | gohotpool | Built-in metrics; sync.Pool has none |
| Predictable memory | gohotpool | Fixed pool size; sync.Pool varies with GC |
| Maximum parallel throughput | sync.Pool | ~4x faster under heavy contention |
### ✅ When to Use GoHotPool

Use gohotpool when you need:

- **Observable pooling**: statistics, monitoring, debugging
- **Ring buffer isolation**: bulk operations that would otherwise pollute the cache
- **Predictable memory**: a fixed pool size, not a GC-dependent one
- **Hot/cold separation**: `Get()` for hot paths, `GetCold()` for bulk work
- **Pin protection**: prevent eviction during critical operations
- **Single-goroutine workloads**: the same speed as `sync.Pool`

Use `sync.Pool` when:

- Maximum parallel throughput is critical
- Simple pooling is sufficient
- A GC-based lifecycle is acceptable
- You don't need statistics or smart eviction
### 🎯 The Honest Trade-off

```go
// sync.Pool: fast in parallel, but opaque
pool := &sync.Pool{New: func() interface{} { return &bytes.Buffer{} }}
buf := pool.Get().(*bytes.Buffer) // ~1 ns parallel, ~7.5 ns single
defer pool.Put(buf)

// gohotpool: same speed single-threaded, plus intelligent features
buf2 := gohotpool.Get() // ~5 ns parallel, ~7.7 ns single
defer gohotpool.Put(buf2)
stats := gohotpool.GetStats() // Know what's happening
```
## What's New in v2.0

This version includes major performance improvements:

- Per-P (processor-local) caching: a lock-free fast path like `sync.Pool`'s
- Atomic swap/CAS operations: race-free per-P cache access
- Removed `time.Now()` calls: saved ~30 ns per operation
- Stats disabled by default: a 34x performance improvement
- Fixed data races and other bugs
## API Reference

### Pool Methods

- `Get() *Buffer`: get a buffer from the pool (uses the per-P cache fast path)
- `Put(buf *Buffer)`: return a buffer to the pool
- `GetCold() *Buffer`: get a buffer from the ring buffer (for bulk operations)
- `PutCold(buf *Buffer)`: return a buffer to the ring buffer
- `Pin(buf *Buffer)`: prevent buffer eviction
- `Unpin(buf *Buffer)`: allow buffer eviction
- `MarkDirty(buf *Buffer)`: mark a buffer as modified
- `GetStats() PoolStats`: get pool statistics

### Buffer Methods

- `Write(p []byte) (int, error)`: append bytes
- `WriteString(s string) (int, error)`: append a string
- `WriteByte(c byte) error`: append a single byte
- `Bytes() []byte`: get the buffer contents
- `String() string`: get the contents as a string
- `Reset()`: clear the buffer contents
- `IsDirty() bool`: report whether the buffer has been modified