
GCache

Fine-grained caching framework

<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="assets/logo-wordmark.png" />
    <source media="(prefers-color-scheme: light)" srcset="assets/logo-wordmark-light.png" />
    <img src="assets/logo-wordmark.png" alt="GCache" width="520" />
  </picture>
</p>
<p align="center">
  <a href="https://badge.fury.io/py/gcache"><img src="https://badge.fury.io/py/gcache.svg" alt="PyPI version" /></a>
  <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT" /></a>
  <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="Python 3.10+" /></a>
  <a href="https://codecov.io/gh/rungalileo/gcache"><img src="https://codecov.io/gh/rungalileo/gcache/graph/badge.svg" alt="codecov" /></a>
</p>

A caching library built for moving fast without breaking things. GCache lets you rapidly add new caching use cases while maintaining structure and runtime control guardrails—so you can ramp up gradually, kill a bad cache instantly, and have full observability into what's cached across your system.

Why GCache?

Most caching libraries give you a key-value store and leave the rest to you. GCache takes a different approach:

  • Opinionated structure — Enforced key format (key_type + ID + use case, e.g., urn:gcache:user_id:123#GetUser) keeps your caching organized and enables the features below
  • Runtime controls — Enable/disable caching per request, ramp from 0-100% per use case, adjust configuration without redeploying
  • Targeted invalidation — Invalidate all cache entries for a key_type + ID (e.g., all caches for a specific user, org, or project) with one call
  • Full observability — Prometheus metrics out of the box, broken down by use case and key_type

Installation

pip install gcache

Requires Python 3.10+

Quick Start

from gcache import GCache, GCacheConfig, GCacheKeyConfig, CacheLayer

# Create the cache instance (singleton)
gcache = GCache(GCacheConfig())

# Decorate your function
@gcache.cached(
    key_type="user_id",
    id_arg="user_id",
    use_case="GetUser",
    default_config=GCacheKeyConfig(
        ttl_sec={CacheLayer.LOCAL: 60, CacheLayer.REMOTE: 300},
        ramp={CacheLayer.LOCAL: 100, CacheLayer.REMOTE: 100},
    ),
)
async def get_user(user_id: str) -> dict:
    return await db.fetch_user(user_id)  # Your expensive operation

# Use it — caching only happens inside enable() blocks
with gcache.enable():
    user = await get_user("123")  # Cache key: urn:gcache:user_id:123#GetUser

That's it. The function works normally outside enable() blocks, and caches results inside them.

How It Works

Cache Layers

GCache uses a multi-layer read-through cache:

Request
   │
   ▼
┌─────────────────┐
│  LOCAL CACHE    │ ◄─── Hit? Return immediately
│  (in-memory)    │
└────────┬────────┘
         │ Miss
         ▼
┌─────────────────┐
│  REDIS CACHE    │ ◄─── Hit? Store in local, return
│  (distributed)  │
└────────┬────────┘
         │ Miss
         ▼
┌─────────────────┐
│  YOUR FUNCTION  │ ◄─── Execute, store in both caches, return
└─────────────────┘

Local cache is fast but per-instance. Redis is shared across your fleet. Use both for best performance, or just local if you don't need Redis.
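
In pseudocode, the read path boils down to something like this (a simplified sketch of the flow above, not GCache's internals; fetch stands in for your decorated function, and values are assumed to be bytes):

from collections.abc import Awaitable, Callable
from redis.asyncio import Redis

async def read_through(
    key: str,
    fetch: Callable[[], Awaitable[bytes]],
    local: dict[str, bytes],
    redis_client: Redis,
) -> bytes:
    if key in local:                     # LOCAL hit: return immediately
        return local[key]
    value = await redis_client.get(key)  # LOCAL miss: try REMOTE (Redis)
    if value is not None:
        local[key] = value               # backfill the local layer
        return value
    value = await fetch()                # miss everywhere: run the function
    local[key] = value                   # store in both layers
    await redis_client.set(key, value)
    return value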

Key Format

GCache constructs structured cache keys in URN format:

urn:prefix:key_type:id?arg1=val1&arg2=val2#use_case

For example: urn:gcache:user_id:123?page=1#GetUserPosts
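
As an illustration of how the pieces compose, a hypothetical build_key helper could produce the key above (GCache assembles keys for you; this is just a sketch):

from urllib.parse import urlencode

def build_key(prefix: str, key_type: str, entity_id: str,
              args: dict, use_case: str) -> str:
    query = f"?{urlencode(args)}" if args else ""
    return f"urn:{prefix}:{key_type}:{entity_id}{query}#{use_case}"

build_key("gcache", "user_id", "123", {"page": 1}, "GetUserPosts")
# -> 'urn:gcache:user_id:123?page=1#GetUserPosts'

Everything before the ? and # is the shared key_type:id prefix that grouping and targeted invalidation rely on.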

This structure is useful for:

  • Debugging — Keys are human-readable when inspecting Redis
  • Grouping — All caches for a key_type:id pair share a common prefix, making it easy to find related entries
  • Targeted invalidation — The structure enables invalidating all entries for a specific key_type + ID

Runtime Controls

Caching doesn't happen automatically—you control when it's active:

  • enable() context — Caching only happens inside with gcache.enable(): blocks. Outside of them, your function runs normally. This lets you disable caching during write operations to avoid stale reads.

  • ramp percentage — Each cache layer has a ramp from 0-100%. At 50%, half the requests use the cache, half go straight to the source. Start at 0% when adding a new use case, then ramp up as you gain confidence (see the sketch after this list).

  • Dynamic config — The config provider runs on each request, so you can adjust TTLs or ramp percentages without redeploying.
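
The ramp check is conceptually a weighted coin flip per request (an illustrative sketch, not GCache's actual code):

import random

def ramp_allows(ramp_percent: float) -> bool:
    # ramp=0 never uses the cache layer, ramp=100 always does,
    # ramp=50 uses it for roughly half of requests
    return random.uniform(0, 100) < ramp_percent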

Why Explicit enable()?

GCache requires you to explicitly enable caching via with gcache.enable(): blocks. This is intentional.

Caching in write paths can cause subtle bugs—a stale read might get cached right before a write, leading to inconsistent data. By requiring explicit opt-in, GCache forces you to consciously decide where caching is safe:

# Read path — caching is safe
with gcache.enable():
    user = await get_user(user_id)

# Write path — no caching, function runs normally
await update_user(user_id, new_data)
await gcache.ainvalidate("user_id", user_id)

This design prevents accidental caching in dangerous places.

Runtime Configuration

For dynamic control, provide a config provider when creating GCache. This lets you adjust caching behavior without redeploying:

from gcache import GCache, GCacheConfig, GCacheKeyConfig, GCacheKey, CacheLayer

async def config_provider(key: GCacheKey) -> GCacheKeyConfig | None:
    # Fetch from your config source: LaunchDarkly, database, config file, etc.
    config = await config_service.get_cache_config(key.use_case)

    if config is None:
        return None  # Fall back to default_config on the decorator

    return GCacheKeyConfig(
        ttl_sec={CacheLayer.LOCAL: config.local_ttl, CacheLayer.REMOTE: config.remote_ttl},
        ramp={CacheLayer.LOCAL: config.local_ramp, CacheLayer.REMOTE: config.remote_ramp},
    )

gcache = GCache(GCacheConfig(cache_config_provider=config_provider))

This enables:

  • Kill switches — Set ramp to 0% to instantly disable a problematic cache (see the sketch after this list)
  • Gradual rollout — Start at 10%, monitor metrics, increase to 100%
  • Per-use-case tuning — Different TTLs and ramp percentages for different use cases
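
For instance, a kill switch can live entirely in the config provider. A minimal sketch, reusing the provider shape above with a hypothetical DISABLED_USE_CASES deny-list:

from gcache import GCacheKey, GCacheKeyConfig, CacheLayer

DISABLED_USE_CASES = {"GetUserPosts"}  # hypothetical deny-list, e.g. fed by a flag service

async def config_provider(key: GCacheKey) -> GCacheKeyConfig | None:
    if key.use_case in DISABLED_USE_CASES:
        # 0% ramp on every layer: requests bypass this cache entirely
        return GCacheKeyConfig(
            ttl_sec={CacheLayer.LOCAL: 60, CacheLayer.REMOTE: 300},
            ramp={CacheLayer.LOCAL: 0, CacheLayer.REMOTE: 0},
        )
    return None  # other use cases fall back to their decorator defaults

Because the provider runs on each request (see Dynamic config above), the switch takes effect immediately, with no redeploy.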

The @cached Decorator

The decorator handles both sync and async functions automatically.

Basic Usage

@gcache.cached(
    key_type="user_id",           # What kind of entity is this?
    id_arg="user_id",             # Which argument contains the ID?
    use_case="GetUserProfile",    # Identifies this specific caching use case
)
async def get_user_profile(user_id: str) -> dict:
    ...

Tip: Always define use_case explicitly. It identifies the specific caching scenario (e.g., GetUserProfile, ListOrgProjects) and appears in cache keys, metrics, and logs. It defaults to module.function_name, but an explicit name ensures consistency if you refactor your code.

Working with Complex Arguments

Options for mapping function arguments to cache keys.

id_arg (required)

Specifies which argument contains the entity ID for the cache key.

String form — use when the argument itself is the ID:

id_arg="user_id"  # user_id argument is the ID

Tuple form — use when the ID needs to be extracted from an object:

id_arg=("user", lambda u: u.id)  # Extract ID from User object

arg_adapters

Converts complex arguments to strings for the cache key. Only needed for non-primitive types.

arg_adapters={
    "filters": lambda f: f.to_cache_key(),  # Complex object
    "page": str,                             # Simple conversion
}

ignore_args

Excludes arguments that don't affect the cached result.

ignore_args=["db_session", "logger"]

Example

@gcache.cached(
    key_type="user_id",
    id_arg=("user", lambda u: u.id),
    arg_adapters={"filters": lambda f: f.to_cache_key()},
    ignore_args=["db_session", "logger"],
)
async def search_user_posts(
    user: User,
    filters: SearchFilters,
    page: int,
    db_session: Session,
    logger: Logger,
) -> list[Post]:
    ...

# Cache key: urn:gcache:user_id:123?filters=active&page=2#SearchUserPosts

The id_arg becomes :123, arg_adapters produce ?filters=active&page=2, and ignore_args are excluded.

Sync Functions Work Too

@gcache.cached(key_type="org_id", id_arg="org_id", use_case="GetOrgSettings")
def get_org_settings(org_id: str) -> dict:  # No async needed
    return db.query(...)

Under the hood, sync functions run through a thread pool to avoid blocking the event loop. This adds some overhead, so prefer async functions when possible for better performance.
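
The bridging is conceptually similar to asyncio.to_thread (a sketch of the general pattern, not necessarily GCache's exact mechanism):

import asyncio

def blocking_query(org_id: str) -> dict:
    ...  # synchronous DB call

async def handler() -> None:
    # The sync call runs in a worker thread, so the event loop
    # stays free to serve other tasks while it blocks.
    settings = await asyncio.to_thread(blocking_query, "org-42")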

Redis Configuration

No Redis (Local Only)

gcache = GCache(GCacheConfig())

With Redis

from gcache import RedisConfig

gcache = GCache(
    GCacheConfig(
        redis_config=RedisConfig(
            host="redis.example.com",
            port=6379,
            password="secret",
        ),
    )
)

Custom Redis Factory

For dynamic credentials, token refresh, or connection pooling:

import threading
from redis.asyncio import Redis

def make_redis_factory():
    local = threading.local()

    def factory() -> Redis:
        if not hasattr(local, "client"):
            token = fetch_token_from_vault()
            local.client = Redis.from_url(f"redis://:{token}@redis:6379")
        return local.client

    return factory

gcache = GCache(
    GCacheConfig(
        redis_client_factory=make_redis_factory(),
    )
)

Important: Custom factories must use thread-local storage. Each thread needs its own client.

Invalidation

When data changes, you need to invalidate the cache. GCache makes this easy with targeted invalidation.

Basic Invalidation

# Mark the function for invalidation tracking
@gcache.cached(
    key_type="user_id",
    id_arg="user_id",
    use_case="GetUser",
)
async def get_user(user_id: str) -> dict:
    ...

# Invalidate every cached entry for this user in one call
await gcache.ainvalidate("user_id", user_id)

This drops all cache entries for that key_type + ID pair, across every use case.
No findings