Caching

Ushka provides a simple yet effective caching framework to help speed up your application by storing the results of expensive operations. The caching feature is pluggable and supports multiple backends.

Enabling and Configuring Caching

The caching system is disabled by default. To enable it, add a [cache] section to your ushka.toml file.

# in ushka.toml

[cache]
enabled = true
backend = "in_memory"

Backends

Ushka supports two caching backends out of the box:

  1. in_memory (default): This is a simple, process-local memory cache. It's very fast but the cache is lost whenever your application restarts. It's suitable for development and single-process production deployments.

  2. redis: This uses a Redis server as the cache backend. This is the recommended choice for production, especially if you are running multiple worker processes, as it provides a shared cache for all processes.

To use the redis backend, you must also provide a redis_url:

[cache]
enabled = true
backend = "redis"
redis_url = "redis://localhost:6379/0"

You will also need to install the redis client library:

pip install redis
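To make the key/TTL semantics described above concrete, here is a minimal, illustrative process-local cache. This is only a sketch of how an in_memory backend could behave, not Ushka's actual implementation; the class name InMemoryCache and its internals are assumptions for illustration.

```python
import asyncio
import time
from typing import Any, Optional


class InMemoryCache:
    """Illustrative process-local cache with per-key TTL expiry.

    A sketch of the documented semantics, not Ushka's real backend.
    """

    def __init__(self) -> None:
        # key -> (value, expiry timestamp, or None for "never expires")
        self._store: dict[str, tuple[Any, Optional[float]]] = {}

    async def get(self, key: str) -> Any:
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            # Lazily evict expired entries on read.
            del self._store[key]
            return None
        return value

    async def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None:
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._store[key] = (value, expires_at)

    async def delete(self, key: str) -> None:
        self._store.pop(key, None)

    async def clear(self) -> None:
        self._store.clear()

    async def close(self) -> None:
        # Nothing to do for a purely in-process store.
        pass


async def demo() -> None:
    cache = InMemoryCache()
    await cache.set("greeting", "hello", ttl=60)
    print(await cache.get("greeting"))  # hello
    await cache.delete("greeting")
    print(await cache.get("greeting"))  # None


asyncio.run(demo())
```

Because the store is an ordinary dict in the worker's own memory, each process keeps its own copy, which is exactly why the redis backend is recommended for multi-process deployments.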

Using the Cache

The caching framework is exposed as a service that can be injected into your views or other services. The service provides a simple key-value store interface.

Type-hint your dependency with the ushka.contrib.caching.cache.BaseCache abstract class to receive the configured cache instance.

Example

Here's how you can cache the results of a database query or a complex calculation.

# in a services.py or views.py file
import asyncio
from datetime import datetime

from ushka.contrib.caching.cache import BaseCache

class ReportService:
    def __init__(self, cache: BaseCache):
        self.cache = cache

    async def generate_complex_report(self):
        # 1. Check if the report is already in the cache
        cached_report = await self.cache.get("complex_report")
        if cached_report is not None:
            print("Returning report from CACHE")
            return cached_report

        # 2. If not in cache, perform the expensive operation
        print("Generating report from SCRATCH")
        # ... pretend this is a slow, expensive operation ...
        await asyncio.sleep(5)
        report_data = {"generated_at": str(datetime.now()), "data": [1, 2, 3]}

        # 3. Store the result in the cache for next time.
        #    Set a TTL (Time-To-Live) of 60 seconds.
        await self.cache.set("complex_report", report_data, ttl=60)

        return report_data

Available Methods

The BaseCache service provides the following async methods:

  • get(key: str): Retrieves an item from the cache. Returns None if the key is not found or has expired.
  • set(key: str, value: Any, ttl: Optional[int] = None): Stores an item in the cache. ttl is the time-to-live in seconds. If ttl is not provided, the item will not expire (unless the backend has a default eviction policy).
  • delete(key: str): Removes an item from the cache.
  • clear(): Clears all items from the cache.
  • close(): Closes any open connections to the cache backend.
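The get/set pair above lends itself to a small "get or compute" helper for the cache-aside pattern used in the report example. The helper get_or_set and the dict-backed DictCache stub below are hypothetical illustrations, not part of Ushka's API; any object with the documented get and set methods would work in place of the stub.

```python
import asyncio
from typing import Any, Awaitable, Callable, Optional


async def get_or_set(cache, key: str,
                     factory: Callable[[], Awaitable[Any]],
                     ttl: Optional[int] = None) -> Any:
    """Return the cached value for key, computing and storing it on a miss."""
    value = await cache.get(key)
    if value is not None:
        return value
    value = await factory()
    await cache.set(key, value, ttl=ttl)
    return value


class DictCache:
    """Tiny stand-in exposing the documented get/set interface."""

    def __init__(self) -> None:
        self._d: dict[str, Any] = {}

    async def get(self, key: str) -> Any:
        return self._d.get(key)

    async def set(self, key: str, value: Any,
                  ttl: Optional[int] = None) -> None:
        self._d[key] = value  # TTL ignored in this stub


async def demo() -> None:
    cache = DictCache()
    calls = 0

    async def expensive() -> str:
        nonlocal calls
        calls += 1
        return "report"

    first = await get_or_set(cache, "report", expensive, ttl=60)
    second = await get_or_set(cache, "report", expensive, ttl=60)
    print(first, second, calls)  # report report 1 -- second call hit the cache


asyncio.run(demo())
```

Note the `is not None` check: testing truthiness instead would re-run the factory whenever a falsy value (0, "", an empty list) had been cached.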