minicache
Distributed cache implemented in Go. Like Redis but simpler. Features include:
- Client-side consistent hashing to support fault tolerance by minimizing the number of key re-mappings required in the event of node failure
- Dynamic node discovery enabling arbitrary cluster sizes
- Distributed leader election via the Bully algorithm, plus leader heartbeat monitors, ensuring no single point of failure
- Both HTTP/gRPC interfaces for gets/puts
- Supports mTLS secured communication
- Dockerfiles to support containerized deployments
Features
Thread-safe LRU cache with O(1) operations
- Least-recently-used eviction policy with a configurable cache capacity ensures a low cache-miss rate
- Get/Put operations and eviction all run in O(1) time
- LRU cache implementation is made thread-safe by use of Go synchronization primitives
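The structure described above can be sketched as follows. This is a minimal illustration, not minicache's actual implementation: a doubly linked list (`container/list`) tracks recency, a map gives O(1) lookup, and a mutex guards both so Get/Put are safe from multiple goroutines.

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type entry struct {
	key, value string
}

// LRU is a thread-safe least-recently-used cache with O(1) Get/Put/eviction.
type LRU struct {
	mu       sync.Mutex
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> list element holding *entry
}

func NewLRU(capacity int) *LRU {
	return &LRU{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (c *LRU) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el) // O(1) recency update
	return el.Value.(*entry).value, true
}

func (c *LRU) Put(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back() // evict the least recently used entry
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
}

func main() {
	c := NewLRU(2)
	c.Put("a", "1")
	c.Put("b", "2")
	c.Get("a")      // touch "a" so "b" becomes least recently used
	c.Put("c", "3") // evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false: "b" was evicted
}
```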
Consistent Hashing
- Client uses consistent hashing to uniformly distribute requests and minimize required re-mappings when servers join/leave the cluster
- Client automatically monitors the cluster state stored on the leader node for any changes and updates its consistent hashing ring accordingly
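The ring idea can be sketched as below. This is an illustration rather than minicache's actual ring code (the hash function and virtual-node count are assumptions): each node is hashed to several virtual points on a ring of uint32 values, and a key routes to the first node point clockwise from the key's hash, so removing a node only remaps keys that landed on that node's points.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// Ring is a minimal consistent hashing ring.
type Ring struct {
	replicas int               // virtual nodes per physical node
	hashes   []uint32          // sorted hash values on the ring
	nodes    map[uint32]string // ring point -> node name
}

func NewRing(replicas int) *Ring {
	return &Ring{replicas: replicas, nodes: make(map[uint32]string)}
}

func (r *Ring) Add(node string) {
	for i := 0; i < r.replicas; i++ {
		h := crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", node, i)))
		r.hashes = append(r.hashes, h)
		r.nodes[h] = node
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
}

// Get returns the node responsible for key: the first ring point >= the
// key's hash, wrapping around to the start of the ring.
func (r *Ring) Get(key string) string {
	if len(r.hashes) == 0 {
		return ""
	}
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0
	}
	return r.nodes[r.hashes[i]]
}

func main() {
	ring := NewRing(50)
	ring.Add("cache-server-1")
	ring.Add("cache-server-2")
	ring.Add("cache-server-3")
	fmt.Println(ring.Get("user:42")) // the same key always routes to the same node
}
```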
Distributed leader election algorithm
- The Bully election algorithm is used to elect a leader node for the cluster. The leader monitors the state of the cluster's nodes and provides it to clients, so they can maintain a consistent hashing ring and route requests to the correct nodes
- Follower nodes monitor heartbeat of leader and run a new election if it goes down
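The core of the Bully rule can be shown with a toy, in-memory sketch (minicache's real implementation exchanges messages between nodes over the network): a node that detects the leader's failure challenges every node with a higher ID, and the highest-ID node still alive wins the election.

```go
package main

import "fmt"

// Node is a toy stand-in for a cluster member.
type Node struct {
	ID    int
	Alive bool
}

// Elect returns the ID of the new leader as seen by the initiating node:
// the highest-ID node that is still alive (the "bully" rule).
func Elect(initiator int, nodes []Node) int {
	leader := initiator
	for _, n := range nodes {
		if n.Alive && n.ID > leader {
			leader = n.ID // a live higher-ID node displaces the current candidate
		}
	}
	return leader
}

func main() {
	nodes := []Node{
		{ID: 1, Alive: true},
		{ID: 2, Alive: true},
		{ID: 3, Alive: false}, // old leader, heartbeat stopped
	}
	// Node 1 notices the missing heartbeat and starts an election.
	fmt.Println(Elect(1, nodes)) // 2: the highest-ID node still alive
}
```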
Dynamic node discovery
- New nodes join the cluster when they come online by first registering themselves with the cluster: the new node sends identifying information (hostname, port, etc.) to each of the cluster's original "genesis" nodes (i.e. nodes defined in the config file used at runtime) until one returns a successful response.
- When an existing node receives this registration request from the new node, it will add the new node to its in-memory list of nodes and send this updated list to all other nodes.
- The leader node monitors heartbeats of all nodes in the cluster, keeping a list of active reachable nodes in the cluster updated in real-time.
- Clients monitor the leader's cluster config for changes and update their consistent hashing ring accordingly. The time to update cluster state after a node joins/leaves the cluster is <= 1 second.
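The join flow above can be sketched as follows. The `/register` endpoint name and query parameters here are assumptions for illustration, not minicache's actual API; an `httptest` server stands in for a reachable genesis node.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// register walks the configured genesis nodes and POSTs this node's identity
// to each until one responds successfully; that node then propagates the
// updated node list to the rest of the cluster.
func register(genesisNodes []string, self string) (string, error) {
	for _, addr := range genesisNodes {
		resp, err := http.Post(addr+"/register?host="+self, "text/plain", nil)
		if err != nil {
			continue // genesis node unreachable: try the next one
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return addr, nil
		}
	}
	return "", fmt.Errorf("no genesis node reachable")
}

func main() {
	// Fake genesis node that accepts registrations.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Println("registered:", r.URL.Query().Get("host"))
	}))
	defer srv.Close()

	genesis := []string{"http://127.0.0.1:1", srv.URL} // first entry is unreachable
	if _, err := register(genesis, "new-node:8080"); err != nil {
		fmt.Println(err)
	}
}
```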
No single point of failure
- The distributed election algorithm allows nodes to join/leave the cluster at any time, and there is always guaranteed to be a leader tracking the state of the cluster's nodes to provide to clients for consistent hashing.
Supports both REST API and gRPC
- Make gets/puts with the simple, familiar interfaces of HTTP and gRPC
mTLS for maximum security
- minicache uses mutual TLS (mTLS), in which client and server authenticate each other, for maximum security.
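Standard Go mTLS setup looks like the sketch below (the certificate file paths are placeholders, not minicache's actual layout): the server requires and verifies a client certificate signed by the shared CA, which is what makes the TLS *mutual*.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// serverTLSConfig builds a tls.Config that both presents the server's
// certificate and demands a valid client certificate signed by the CA.
func serverTLSConfig(caFile, certFile, keyFile string) (*tls.Config, error) {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("invalid CA certificate")
	}
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		ClientAuth:   tls.RequireAndVerifyClientCert, // require mutual authentication
	}, nil
}

func main() {
	// Placeholder paths; with real certs this config would be handed to
	// http.Server.TLSConfig or to gRPC via credentials.NewTLS.
	if _, err := serverTLSConfig("ca.pem", "server.pem", "server-key.pem"); err != nil {
		fmt.Println("expected without real certs:", err)
	}
}
```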
Performance
Test environment:
- 2013 MacBook Pro
- Processor: 2.4 GHz Intel Core i5
- Memory: 8 GB 1600 MHz DDR3
Performance test output
$ GIN_MODE=release sudo go test -v main_test.go
=== RUN Test10kGrpcPuts
main_test.go:60: Time to complete 10k puts via gRPC API: 521.057551ms
main_test.go:61: Cache misses: 0/10,000 (0.000000%)
--- PASS: Test10kGrpcPuts (3.85s)
=== RUN Test10kRestApiPuts
main_test.go:118: Time to complete 10k puts via REST: 2.596501161s
main_test.go:119: Cache misses: 0/10,000 (0.000000%)
--- PASS: Test10kRestApiPuts (3.11s)
=== RUN Test10kRestApiPutsInsecure
main_test.go:175: Time to complete 10k puts via REST: 7.675285188s
main_test.go:176: Cache misses: 0/10,000 (0.000000%)
--- PASS: Test10kRestApiPutsInsecure (10.72s)
1. LRU Cache implementation run directly by a test program:
Test: 10 million puts calling an LRU cache with a capacity of 10,000 directly in memory:
$ go test -v ./lru
=== RUN TestCacheWriteThroughput
lru_test.go:19: Time to complete 10M puts: 3.869112083s
lru_test.go:20: LRU Cache write throughput: 2584572.321887 puts/second
Result: 2.58 million puts/second
2. Distributed cache running locally, with storage via gRPC calls over the local network
Test: 10,000 items stored in cache via gRPC calls when running 4 cache servers on localhost with capacity of 100 items each, when all servers stay online throughout the test
$ go test -v main_test.go
...
main_test.go:114: Time to complete 10k puts via gRPC: 588.774872ms
Result: ~17,000 puts/second
3. Distributed cache running in Docker containers, with storage via gRPC calls:
Test: 10,000 items stored in cache via gRPC calls when running 4 cache servers on localhost with capacity of 100 items each, when all servers stay online throughout the test
# docker run --network minicache_default cacheclient
...
client_docker_test.go:95: Time to complete 10k puts via REST API: 8.6985474s
Result: 1150 puts/second
Testing
1. Unit tests
- LRU Cache implementation has exhaustive unit tests for correctness in all possible scenarios (see lru_test.go)
- Run the unit tests with the command
go test -v ./lru
- Consistent hashing ring has exhaustive unit tests for various scenarios (see ring_test.go)
- Run these unit tests with the command
go test -v ./ring
2. Integration tests (local)
Run the integration tests with the command go test -v main_test.go, which performs the following steps:
- Spins up multiple cache server instances locally on different ports (see nodes-local-with-mTLS.json config file)
- Creates cache client
- Runs 10 goroutines which each send 1000 requests to put items in the distributed cache via REST API endpoint
- Runs 10 goroutines which each send 1000 requests to put items in the distributed cache via gRPC calls
- After each test, displays the % of cache misses (which, in this case, means the client was simply unable to store an item in the distributed cache)
- Repeats steps 1-5 using the nodes-local-insecure.json config file, to test the pure HTTP implementation (no TLS or mTLS).
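The fan-out pattern the integration tests use (10 goroutines, each issuing 1,000 puts) can be sketched as below; the put function here is a stand-in for the real REST/gRPC client call, and failed puts are counted as misses.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runPuts launches `goroutines` workers that each issue `perGoroutine` puts
// and returns the number of failed puts, tracked with an atomic counter.
func runPuts(goroutines, perGoroutine int, put func(key, value string) error) int64 {
	var misses int64
	var wg sync.WaitGroup
	for g := 0; g < goroutines; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			for i := 0; i < perGoroutine; i++ {
				key := fmt.Sprintf("key-%d-%d", g, i)
				if err := put(key, "value"); err != nil {
					atomic.AddInt64(&misses, 1) // a failed put counts as a miss
				}
			}
		}(g)
	}
	wg.Wait()
	return misses
}

func main() {
	put := func(key, value string) error { return nil } // stub: always succeeds
	misses := runPuts(10, 1000, put)
	fmt.Printf("Cache misses: %d/10,000\n", misses) // prints 0/10,000
}
```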
3. Integration tests (Docker)
If you have Docker and Docker Compose installed, you can run the integration tests in containers with the test script ./docker-test.sh.
