# Moka

A high performance concurrent caching library for Rust.
> [!NOTE]
> If you have any questions about Moka's APIs or internal design, you can ask the AI chatbot at DeepWiki in natural language: https://deepwiki.com/moka-rs/moka

> [!NOTE]
> v0.12.0 had major breaking changes in the API and internal behavior. Please read the MIGRATION-GUIDE.md for the details.
Moka is a fast, concurrent cache library for Rust. Moka is inspired by the Caffeine library for Java.
Moka provides cache implementations on top of hash maps. They support full concurrency of retrievals and a high expected concurrency for updates.
All caches perform a best-effort bounding of a hash map using an entry replacement algorithm to determine which entries to evict when the capacity is exceeded.
## Features
Moka provides a rich and flexible feature set while maintaining a high hit ratio and a high level of concurrency under concurrent access.
- Thread-safe, highly concurrent in-memory cache implementations:
  - Synchronous caches that can be shared across OS threads.
  - An asynchronous (futures aware) cache.
- A cache can be bounded by one of the following:
  - The maximum number of entries.
  - The total weighted size of entries. (Size aware eviction)
- Maintains a near optimal hit ratio by using entry replacement algorithms inspired by Caffeine:
  - Admission to the cache is controlled by the Least Frequently Used (LFU) policy.
  - Eviction from the cache is controlled by the Least Recently Used (LRU) policy.
  - More details and some benchmark results are available here.
- Supports expiration policies:
  - Time to live.
  - Time to idle.
  - Per-entry variable expiration.
- Supports an eviction listener, a callback function that is called when an entry is removed from the cache.
## Choosing the right cache for your use case

No cache implementation is perfect for every use case. Moka is complex software and can be overkill for yours. Sometimes a simpler cache such as Mini Moka or Quick Cache might be a better fit.
The following table shows the trade-offs between the different cache implementations:
| Feature | Moka v0.12 | Mini Moka v0.10 | Quick Cache v0.6 |
|:------- |:---- |:--------- |:----------- |
| Thread-safe, sync cache | ✅ | ✅ | ✅ |
| Thread-safe, async cache | ✅ | ❌ | ✅ |
| Non-concurrent cache | ❌ | ✅ | ✅ |
| Bounded by the maximum number of entries | ✅ | ✅ | ✅ |
| Bounded by the total weighted size of entries | ✅ | ✅ | ✅ |
| Near optimal hit ratio | ✅ TinyLFU | ✅ TinyLFU | ✅ S3-FIFO |
| Per-key, atomic insertion. (e.g. get_with method) | ✅ | ❌ | ✅ |
| Cache-level expiration policies (time-to-live and time-to-idle) | ✅ | ✅ | ❌ |
| Per-entry variable expiration | ✅ | ❌ | ❌ |
| Eviction listener | ✅ | ❌ | ✅ (via lifecycle hook) |
| Lock-free, concurrent iterator | ✅ | ❌ | ❌ |
| Lock-per-shard, concurrent iterator | ❌ | ✅ | ✅ |
| Performance, etc. | Moka v0.12 | Mini Moka v0.10 | Quick Cache v0.6 |
|:------- |:---- |:--------- |:----------- |
| Small overhead compared to a concurrent hash table | ❌ | ❌ | ✅ |
| Does not use background threads | ❌ → ✅ Removed from v0.12 | ✅ | ✅ |
| Small dependency tree | ❌ | ✅ | ✅ |
## Moka in Production
Moka is powering production services as well as embedded Linux devices like home routers. Here are some highlights:
- crates.io: The official crate registry has been using Moka in its API service to reduce the load on PostgreSQL. Moka has been maintaining a cache hit rate of ~85% for the high-traffic download endpoint. (Moka used: Nov 2021 — present)
- aliyundrive-webdav: This WebDAV gateway for a cloud drive may have been deployed in hundreds of home Wi-Fi routers, including inexpensive models with 32-bit MIPS or ARMv5TE-based SoCs. Moka is used to cache the metadata of remote files. (Moka used: Aug 2021 — present)
## Recent Changes
## Table of Contents

- Features
- Moka in Production
- Change Log
- Supported Platforms
- Usage
- Examples (Part 1)
- Avoiding to clone the value at `get`
- Example (Part 2)
- Expiration Policies
- Minimum Supported Rust Versions
- Troubleshooting
- Developing Moka
- Road Map
- About the Name
- Credits
- License
## Supported Platforms

Moka should work on most 64-bit and 32-bit platforms if the Rust std library is available with threading support. However, WebAssembly (Wasm) and WASI targets are not supported.
The following platforms are tested on CI:
- Linux 64-bit (x86_64, aarch64)
- Linux 32-bit (i686, armv7, armv5, mips)
- If you get compile errors on 32-bit platforms, see troubleshooting.
The following platforms are not tested on CI but should work:
- macOS (arm64)
- Windows (x86_64 msvc and gnu)
- iOS (arm64)
The following platforms are not supported:

- WebAssembly (Wasm) and WASI targets. (See this project task)
- `no_std` environments (platforms without the `std` library).
- 16-bit platforms.
## Usage
To add Moka to your dependencies, run `cargo add` as follows:

```console
# To use the synchronous cache:
cargo add moka --features sync

# To use the asynchronous cache:
cargo add moka --features future
```
If you want to use the cache under an async runtime such as tokio or async-std, you should specify the future feature. Otherwise, specify the sync feature.
### Example: Synchronous Cache
The thread-safe, synchronous caches are defined in the sync module.
Cache entries are manually added using the insert or get_with methods, and
are stored in the cache until either evicted or manually invalidated.
Here's an example of reading and updating a cache by using multiple threads:
```rust
// Use the synchronous cache.
use moka::sync::Cache;
use std::thread;

fn value(n: usize) -> String {
    format!("value {n}")
}

fn main() {
    const NUM_THREADS: usize = 16;
    const NUM_KEYS_PER_THREAD: usize = 64;

    // Create a cache that can store up to 10,000 entries.
    let cache = Cache::new(10_000);

    // Spawn threads and read and update the cache simultaneously.
    let threads: Vec<_> = (0..NUM_THREADS)
        .map(|i| {
            // To share the same cache across the threads, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_THREAD;
            let end = (i + 1) * NUM_KEYS_PER_THREAD;

            thread::spawn(move || {
                // Insert 64 entries. (NUM_KEYS_PER_THREAD = 64)
                for key in start..end {
                    my_cache.insert(key, value(key));
                    // get() returns Option<String>, a clone of the stored value.
                    assert_eq!(my_cache.get(&key), Some(value(key)));
                }

                // Invalidate every 4th element of the inserted entries.
                for key in (start..end).step_by(4) {
                    my_cache.invalidate(&key);
                }
            })
        })
        .collect();

    // Wait for all threads to complete.
    threads.into_iter().for_each(|t| t.join().expect("Failed"));

    // Verify the result.
    for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
        if key % 4 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}
```