Bloom
:cherry_blossom: HTTP REST API caching middleware, to be used between load balancers and REST API workers.
Bloom is a REST API caching middleware, acting as a reverse proxy between your load balancers and your REST API workers.
It is completely agnostic of your API implementation, and requires minimal changes to your existing API code to work.
Bloom relies on redis, configured as a cache to store cached data. It is built in Rust and focuses on stability, performance and low resource usage.
Important: Bloom works great if your API implements REST conventions. Your API needs to use HTTP read methods (namely GET, HEAD, OPTIONS) solely as read methods (ie. do not use HTTP GET parameters as a way to update data).
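For instance (hypothetical routes), this is the convention Bloom expects:

```
GET  /articles/42          read: safe for Bloom to cache
POST /articles/42          write: proxied directly, never cached
GET  /articles/42?delete   anti-pattern: a write hidden behind a read
                           method, which Bloom would cache
```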
Tested at Rust version: rustc 1.91.1 (ed61e7d7e 2025-11-07)
🇫🇷 Crafted in Brest, France.
:newspaper: The Bloom project was initially announced in a post on my personal journal.

Who uses it?
<table> <tr> <td align="center"><a href="https://crisp.chat/"><img src="https://valeriansaliou.github.io/bloom/images/crisp.png" width="64" /></a></td> </tr> <tr> <td align="center">Crisp</td> </tr> </table>

👋 You use Bloom and you want to be listed there? Contact me.
Features
- The same Bloom server can be used for different API workers at once, using the HTTP header `Bloom-Request-Shard` (eg. Main API uses shard `0`, Search API uses shard `1`).
- Cache stored on buckets, specified in your REST API responses using the HTTP header `Bloom-Response-Buckets`.
- Cache clustered by authentication token, so no cache leak across users is possible, using the standard `Authorization` HTTP header.
- Cache can be expired directly from your REST API workers, via a control channel.
- Configurable per-request caching strategy, using `Bloom-Request-*` HTTP headers in the requests your Load Balancers forward to Bloom.
  - Specify the caching shard for an API system with `Bloom-Request-Shard` (default shard is `0`, maximum value is `15`).
- Configurable per-response caching strategy, using `Bloom-Response-*` HTTP headers in your API responses to Bloom.
  - Disable all cache for an API route with `Bloom-Response-Ignore` (with value `1`).
  - Specify caching buckets for an API route with `Bloom-Response-Buckets` (comma-separated if multiple buckets).
  - Specify the caching TTL in seconds for an API route with `Bloom-Response-TTL` (other than the default TTL, number in seconds).
- Serve `304 Not Modified` to non-modified route contents, lowering bandwidth usage and speeding up requests to your users.
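As an illustration, a worker response opting into these headers could look as follows (the route, bucket names and TTL value are hypothetical; the `Bloom-Response-*` header names are the documented ones):

```http
HTTP/1.1 200 OK
Content-Type: application/json
Bloom-Response-Buckets: user:8459,messages
Bloom-Response-TTL: 60

{"messages": []}
```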
The Bloom Approach
Bloom can be hot-plugged to sit between your existing Load Balancers (eg. NGINX) and your API workers (eg. NodeJS). It was initially built to reduce the workload and drastically reduce CPU usage in case of API traffic spikes or DoS / DDoS attacks.
A simpler caching approach could have been to enable caching at the Load Balancer level for HTTP read methods (GET, HEAD, OPTIONS). Although simple, such a solution would not work with a REST API: REST APIs serve dynamic content by nature and rely heavily on the `Authorization` header. Also, any cache needs to be purged at some point, once the content in cache becomes stale due to data updates in some database.
NGINX Lua scripts could do that job just fine, you say! Well, I firmly believe Load Balancers should stay simple and be driven by configuration only, without scripting. As Load Balancers are the entry point to all your HTTP / WebSocket services, you'd want to avoid frequent deployments and custom code there, and hand off that caching complexity to a dedicated middleware component.
How does it work?
Bloom is installed on the same server as each of your API workers. As seen from your Load Balancers, there is one Bloom instance per API worker, so your Load Balancing setup (eg. Round-Robin with health checks) is not broken. Each Bloom instance can listen on its own LAN IP that your Load Balancers point to, while the Bloom instances themselves point to your API worker listeners on the local loopback.
Bloom acts as a Reverse Proxy of its own, and caches read HTTP methods (GET, HEAD, OPTIONS), while directly proxying HTTP write methods (POST, PATCH, PUT and others). All Bloom instances share the same cache storage on a common redis instance available on the LAN.
Bloom is built in Rust for memory safety, code elegance and especially performance. Bloom can be compiled to native code for your server architecture.
Bloom has minimal static configuration, and relies on HTTP response headers served by your API workers to configure caching on a per-response basis. Those headers, formatted as `Bloom-Response-*`, are intercepted by Bloom and stripped from the responses served to your Load Balancers. Upon serving a response to your Load Balancers, Bloom sets a cache status header, namely `Bloom-Status`, which can be seen publicly in HTTP responses (either with value `HIT`, `MISS` or `DIRECT`; it helps debug your cache configuration).
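For example, a response served from cache, as seen from your Load Balancers, might look like this (route and body are hypothetical): the `Bloom-Response-*` headers set by the worker are gone, and only `Bloom-Status` remains.

```http
HTTP/1.1 200 OK
Content-Type: application/json
Bloom-Status: HIT

{"messages": []}
```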

How to use it?
Installation
Bloom is built in Rust. To install it, either download a version from the Bloom releases page, use cargo install or pull the source code from master.
Install from source:
If you pulled the source code from Git, you can build it using cargo:
cargo build --release
You can find the built binaries in the ./target/release directory.
Install from Cargo:
You can install Bloom directly with cargo install:
cargo install bloom-server
Ensure that your `$PATH` is properly configured to include Cargo-installed binaries, and then run Bloom using the `bloom` command.
Install from packages:
Debian & Ubuntu packages are also available. Refer to the How to install it on Debian & Ubuntu? section.
Install from Docker Hub:
You might find it convenient to run Bloom via Docker. You can find the pre-built Bloom image on Docker Hub as valeriansaliou/bloom.
First, pull the valeriansaliou/bloom image:
docker pull valeriansaliou/bloom:v1.36.0
Then, seed it a configuration file and run it (replace /path/to/your/bloom/config.cfg with the path to your configuration file):
docker run -p 8080:8080 -p 8811:8811 -v /path/to/your/bloom/config.cfg:/etc/bloom.cfg valeriansaliou/bloom:v1.36.0
In the configuration file, ensure that:
- `server.inet` is set to `0.0.0.0:8080` (this lets Bloom be reached from outside the container)
- `control.inet` is set to `0.0.0.0:8811` (this lets Bloom Control be reached from outside the container)
Bloom will be reachable from http://localhost:8080, and Bloom Control will be reachable from tcp://localhost:8811.
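Assuming the TOML-style syntax of the sample config.cfg, the relevant fragment for the Docker setup above would be (all other options omitted):

```toml
[server]
inet = "0.0.0.0:8080"

[control]
inet = "0.0.0.0:8811"
```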
Configuration
Use the sample config.cfg configuration file and adjust it to your own environment.
Make sure to properly configure the [proxy] section so that Bloom points to your API worker host and port.
Available options
Available configuration options are commented below, with allowed values:
[server]
- `log_level` (type: string, allowed: `debug`, `info`, `warn`, `error`, default: `error`) — Verbosity of logging, set it to `error` in production
- `inet` (type: string, allowed: IPv4 / IPv6 + port, default: `[::1]:8080`) — Host and TCP port the Bloom server should listen on
[control]
- `inet` (type: string, allowed: IPv4 / IPv6 + port, default: `[::1]:8811`) — Host and TCP port Bloom Control should listen on
- `tcp_timeout` (type: integer, allowed: seconds, default: `300`) — Timeout of idle/dead client connections to Bloom Control
[proxy]
- `shard_default` (type: integer, allowed: `0` to `15`, default: `0`) — Default shard index to use when no shard is specified in proxied HTTP requests
[[proxy.shard]]
- `shard` (type: integer, allowed: `0` to `15`, default: `0`) — Shard index (routed using `Bloom-Request-Shard` in requests to Bloom)
- `host` (type: string, allowed: hostname, IPv4, IPv6, default: `localhost`) — Target host to proxy to for this shard (ie. where the API listens)
- `port` (type: integer, allowed: TCP port, default: `3000`) — Target TCP port to proxy to for this shard (ie. where the API listens)
[cache]
- `ttl_default` (type: integer, allowed: seconds, default: `600`) — Default cache TTL in seconds, when no `Bloom-Response-TTL` is provided
- `executor_pool` (type: integer, allowed: `0` to `(2^16)-1`, default: `16`) — Cache executor pool size (how many cache requests can execute at the same time)
- `disable_read` (type: boolean, allowed: `true`, `false`, default: `false`) — Whether to disable cache reads (useful for testing)
- `disable_write` (type: boolean, allowed: `true`, `false`, default: `false`) — Whether to disable cache writes (useful for testing)
- `compress_body` (type: boolean, allowed: `true`, `false`, default: `true`) — Whether to compress body upon store (using Brotli)
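Putting the options above together, a minimal config.cfg sketch using the documented default values could read as follows. Note this covers only the options listed above; the settings connecting Bloom to its redis instance, which Bloom also requires, are not part of that list.

```toml
[server]
log_level = "error"
inet = "[::1]:8080"

[control]
inet = "[::1]:8811"
tcp_timeout = 300

[proxy]
shard_default = 0

[[proxy.shard]]
shard = 0
host = "localhost"
port = 3000

[cache]
ttl_default = 600
executor_pool = 16
disable_read = false
disable_write = false
compress_body = true
```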