GoRL - High-Performance Rate Limiter Library
GoRL is a high-performance, extensible rate limiter library for Go. It supports multiple algorithms, pluggable storage backends, a metrics collector abstraction, and minimal dependencies, making it ideal for both single-instance and distributed systems.
Table of Contents
- Features
- Installation
- Quick Start
- Usage Examples
- Observability
- Benchmarks
- Storage Backends
- Extending GoRL
- Contributing
- License
Features
- Algorithms: Fixed Window, Sliding Window, Token Bucket, Leaky Bucket
- Storage: In-memory, Redis, or any custom store (via the `Storage` interface)
- Fail-Open / Fail-Close: Configurable policy on backend errors
- Key Extraction: Built-in strategies (IP, API key) or custom
- Metrics Collector: Optional abstraction for counters and histograms, zero-cost when unused
- Minimal Dependencies: Zero external requirements for in-memory mode
- Middleware Support: Built-in middleware for `net/http`, Fiber, Gin, and Echo
Installation
```sh
go get github.com/AliRizaAynaci/gorl/v2
```
Quick Start
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/AliRizaAynaci/gorl/v2"
	"github.com/AliRizaAynaci/gorl/v2/core"
)

func main() {
	limiter, err := gorl.New(core.Config{
		Strategy: core.SlidingWindow,
		Limit:    5,
		Window:   1 * time.Minute,
	})
	if err != nil {
		panic(err)
	}
	defer limiter.Close()

	ctx := context.Background()
	for i := 1; i <= 10; i++ {
		res, _ := limiter.Allow(ctx, "user-123")
		fmt.Printf("Request #%d: allowed=%v, remaining=%d\n", i, res.Allowed, res.Remaining)
	}
}
```
Usage Examples
HTTP Middleware (Built-in)
GoRL ships with a ready-to-use `net/http` middleware under `middleware/http`.
Basic Usage (handler wrapping):
```go
import (
	"net/http"
	"time"

	"github.com/AliRizaAynaci/gorl/v2"
	"github.com/AliRizaAynaci/gorl/v2/core"
	mw "github.com/AliRizaAynaci/gorl/v2/middleware/http"
)

limiter, _ := gorl.New(core.Config{
	Strategy: core.SlidingWindow,
	Limit:    10,
	Window:   1 * time.Minute,
})

mux := http.NewServeMux()
mux.Handle("/api/", mw.RateLimit(limiter, mw.Options{
	KeyFunc: mw.KeyByIP(),
}, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("OK"))
})))
http.ListenAndServe(":8080", mux)
```
Middleware Chaining:
```go
rl := mw.NewMiddleware(limiter, mw.Options{
	KeyFunc: mw.KeyByHeader("X-API-Key"),
})
mux.Handle("/api/", rl(myHandler))
```
The middleware automatically sets standard rate-limit headers on every response:
`RateLimit-Limit`, `RateLimit-Remaining`, `RateLimit-Reset`, and `Retry-After`.
Available Key Extractors:
- `mw.KeyByIP()` – client IP (supports `X-Forwarded-For` and `X-Real-Ip`)
- `mw.KeyByHeader("X-API-Key")` – any request header
- `mw.KeyByPath()` – IP + request path (per-endpoint limiting)
Fiber
```go
import (
	"time"

	"github.com/gofiber/fiber/v2"

	"github.com/AliRizaAynaci/gorl/v2"
	"github.com/AliRizaAynaci/gorl/v2/core"
	fibermw "github.com/AliRizaAynaci/gorl/v2/middleware/fiber"
)

limiter, _ := gorl.New(core.Config{
	Strategy: core.FixedWindow, Limit: 100, Window: time.Minute,
})

app := fiber.New()
app.Use(fibermw.RateLimit(limiter)) // key defaults to c.IP()
app.Listen(":3000")
```
Gin
```go
import (
	"time"

	"github.com/gin-gonic/gin"

	"github.com/AliRizaAynaci/gorl/v2"
	"github.com/AliRizaAynaci/gorl/v2/core"
	ginmw "github.com/AliRizaAynaci/gorl/v2/middleware/gin"
)

limiter, _ := gorl.New(core.Config{
	Strategy: core.SlidingWindow, Limit: 100, Window: time.Minute,
})

r := gin.Default()
r.Use(ginmw.RateLimit(limiter)) // key defaults to c.ClientIP()
r.Run(":8080")
```
Echo
```go
import (
	"time"

	"github.com/labstack/echo/v4"

	"github.com/AliRizaAynaci/gorl/v2"
	"github.com/AliRizaAynaci/gorl/v2/core"
	echomw "github.com/AliRizaAynaci/gorl/v2/middleware/echo"
)

limiter, _ := gorl.New(core.Config{
	Strategy: core.TokenBucket, Limit: 100, Window: time.Minute,
})

e := echo.New()
e.Use(echomw.RateLimit(limiter)) // key defaults to c.RealIP()
e.Start(":8080")
```
All framework middlewares automatically set the `RateLimit-*` and `Retry-After` headers. Pass a custom `Config{KeyFunc: ...}` to override the default key extraction.
Docker & Redis Backend
```sh
docker run --name redis-limiter -p 6379:6379 -d redis
```
```go
limiter, err := gorl.New(core.Config{
	Strategy: core.TokenBucket,
	KeyBy:    core.KeyByIP,
	Limit:    100,
	Window:   1 * time.Minute,
	RedisURL: "redis://localhost:6379/0",
})
if err != nil {
	panic(err)
}
```
Observability
GoRL provides an optional metrics collector abstraction. Below is an example integrating Prometheus:
```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/AliRizaAynaci/gorl/v2"
	"github.com/AliRizaAynaci/gorl/v2/core"
	"github.com/AliRizaAynaci/gorl/v2/metrics"
	mw "github.com/AliRizaAynaci/gorl/v2/middleware/http"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Create and register the Prometheus collector.
	pm := metrics.NewPrometheusCollector("gorl", "sliding_window")
	metrics.RegisterPrometheusCollectors(pm)

	// Initialize the limiter with metrics enabled.
	limiter, err := gorl.New(core.Config{
		Strategy: core.SlidingWindow,
		Limit:    5,
		Window:   1 * time.Minute,
		RedisURL: "redis://localhost:6379/0",
		Metrics:  pm,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer limiter.Close()

	// Expose the Prometheus metrics endpoint.
	http.Handle("/metrics", promhttp.Handler())

	// Application handler with rate limiting middleware.
	http.Handle("/api", mw.RateLimitFunc(limiter, mw.Options{
		KeyFunc: mw.KeyByHeader("X-API-Key"),
	}, func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("OK"))
	}))

	log.Println("Listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
Benchmarks
Benchmarks run on AMD Ryzen 7 4800H.
| Algorithm | Single Key (ns/op, B/op, allocs/op) | Multi Key (ns/op, B/op, allocs/op) |
| -------------- | ---------------------------------- | ---------------------------------- |
| Fixed Window | 103.8 ns/op, 16 B/op, 1 allocs/op | 232.5 ns/op, 30 B/op, 2 allocs/op |
| Sliding Window | 372.3 ns/op, 64 B/op, 3 allocs/op | 625.4 ns/op, 86 B/op, 4 allocs/op |
| Token Bucket | 634.4 ns/op, 208 B/op, 8 allocs/op | 955.8 ns/op, 222 B/op, 9 allocs/op |
| Leaky Bucket | 515.2 ns/op, 208 B/op, 8 allocs/op | 916.0 ns/op, 222 B/op, 9 allocs/op |
Storage Backends
GoRL's storage layer uses a minimal key-value interface.
```go
package storage

import (
	"context"
	"time"
)

type Storage interface {
	// Incr atomically increments the value at key by 1, initializing to 1 if missing or expired.
	Incr(ctx context.Context, key string, ttl time.Duration) (float64, error)

	// Get retrieves the numeric value at key, returning 0 if missing or expired.
	Get(ctx context.Context, key string) (float64, error)

	// Set stores the numeric value at key with the specified TTL.
	Set(ctx context.Context, key string, val float64, ttl time.Duration) error

	// Close releases any resources held by the storage backend.
	Close() error
}
```
In-Memory Store
Lock-free implementation using `sync.Map` and `sync/atomic`:

```go
store := inmem.NewInMemoryStore()
```
- Use case: single-instance and unit tests
- Expiration: TTL on each write, background GC cleanup
- Concurrency: lock-free via atomic CAS operations
Redis Store
Scalable store leveraging Redis commands:

```go
store := redis.NewRedisStore("redis://localhost:6379/0")
```

- Counter: `INCR` + `EXPIRE`
- TTL Management: reset expiry on each write
- Use case: distributed services
Custom Storage Backend
By default, `gorl.New(cfg core.Config)` wires up:

- Redis (if `cfg.RedisURL` is set)
- In-memory (otherwise)
To add any other storage backend (JetStream, DynamoDB, etc.) without forking the repo, follow these steps:
1. Create a sub-package `github.com/AliRizaAynaci/gorl/v2/storage/yourmodule` and implement the `storage.Storage` interface:

```go
// github.com/AliRizaAynaci/gorl/v2/storage/yourmodule/store.go
package yourmodule

import (
	"context"
	"time"

	"github.com/AliRizaAynaci/gorl/v2/storage"
)

// YourModuleStore holds your connection fields.
type YourModuleStore struct {
	// e.g. client, context
}

// Compile-time check that the interface is satisfied.
var _ storage.Storage = (*YourModuleStore)(nil)

// NewYourModuleStore constructs your store with any parameters.
func NewYourModuleStore( /* params */ ) *YourModuleStore {
	return &YourModuleStore{ /* initialize fields */ }
}

func (s *YourModuleStore) Incr(ctx context.Context, key string, ttl time.Duration) (float64, error) {
	// increment logic
}

func (s *YourModuleStore) Get(ctx context.Context, key string) (float64, error) {
	// get logic
}

func (s *YourModuleStore) Set(ctx context.Context, key string, val float64, ttl time.Duration) error {
	// set logic
}

func (s *YourModuleStore) Close() error {
	// cleanup logic
}
```

2. Extend `core.Config` in `gorl/core/config.go`:

```go
type Config struct {
	Strategy      StrategyType
	Limit         float64
	Window        time.Duration
	RedisURL      string
	YourModuleURL string // ← new field
	Metrics       Metrics
}
```

3. Wire your store in `gorl/limiter.go`:

```go
func New(cfg core.Config) (core.Limiter, error) {
	if cfg.Metrics == nil {
		cfg.Metrics = &core.NoopMetrics{}
	}
	var store storage.Storage
	switch {
	case cfg.YourModuleURL != "":
		store = yourmodule.NewYourModuleStore(cfg.YourModuleURL)
	case cfg.RedisURL != "":
		store = redis.NewRedisStore(cfg.RedisURL)
	default:
		store = inmem.NewInMemoryStore()
	}
	// ... construct and return the limiter using the chosen store
}
```
