Cache-aside (lazy loading)
The app checks the cache first; on a miss it reads from the DB and writes the result back into the cache. The most common pattern: simple to reason about, and only keys that are actually read get cached. Trade-off: the first read of each key pays the miss penalty, and entries can be stale until the TTL expires.
def get_user(user_id):
    key = f"user:{user_id}"
    cached = redis.get(key)
    if cached:
        return json.loads(cached)
    user = db.query("SELECT * FROM users WHERE id=%s", user_id)
    if user is not None:
        # Cache for 5 minutes; skip caching misses so absent rows
        # don't linger (or cache a sentinel with a short TTL).
        redis.setex(key, 300, json.dumps(user))
    return user
Write-through
On every write, update the DB and the cache together in the same write path, so the cache is always warm for any key that has been written. Cost: every write pays the cache update, even for cold keys that may never be read, and the two writes must stay consistent (e.g. update the cache only after the DB write commits).
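A minimal write-through sketch; plain dicts stand in for the DB and the cache, and the names (`save_user`, `db`, `cache`) are illustrative, not from any real API:

```python
db = {}     # stand-in for the database table
cache = {}  # stand-in for Redis

def save_user(user_id, user):
    # Write-through: the write path updates both stores before returning,
    # so the next read of this key always finds a warm cache entry.
    db[user_id] = user
    cache[f"user:{user_id}"] = user
    return user

save_user(1, {"name": "Ada"})
```

In a real system the cache update would follow the DB commit, and a failed cache write needs a policy: retry, or delete the key so cache-aside repopulates it on the next read.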
TTL strategy
- Short TTL (30–300s) for frequently changing data where slight staleness is acceptable.
- Long TTL + explicit invalidation for stable data (user profiles, product catalogues).
- No TTL only for computed values that are explicitly busted whenever the underlying data changes; without that invalidation hook, a missing TTL means the entry can stay stale forever.
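The first two tiers can live in one place next to the cache calls. A sketch with illustrative values; the jitter is an assumption layered on top (it keeps keys written in the same burst from expiring together):

```python
import random

TTL_SECONDS = {
    "feed": 60,         # short TTL: changes often, slight staleness OK
    "profile": 86_400,  # long TTL: stable, also invalidated explicitly on update
}

def ttl_for(kind):
    # Add +/-10% jitter so entries written together don't all expire at once.
    base = TTL_SECONDS[kind]
    return int(base * random.uniform(0.9, 1.1))
```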
Cache stampede prevention
When a high-traffic key expires, many concurrent requests miss at once and hammer the DB. Two common fixes: probabilistic early expiry (each request may treat the entry as expired slightly before its real TTL, so one of them rebuilds it ahead of the herd) or a distributed lock on the first miss (e.g. Redis SET with the NX option and a timeout) so only one request rebuilds the cache while the others wait briefly or serve the stale value.
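A sketch of the probabilistic variant, assuming `recompute_cost` is roughly how long the rebuild takes and `beta` tunes how eagerly requests refresh (both names are illustrative):

```python
import math
import random
import time

def should_refresh_early(expiry_ts, recompute_cost, beta=1.0):
    # Probabilistic early expiry: occasionally treat a live entry as
    # already expired, with probability rising as real expiry nears.
    # recompute_cost: seconds the rebuild takes; beta > 1 refreshes
    # earlier, beta < 1 later.
    r = random.random() or 1e-12   # guard against log(0)
    return time.time() - recompute_cost * beta * math.log(r) >= expiry_ts
```

Each request calls this on a cache hit: almost all get False until the entry nears expiry, at which point one request gets True, rebuilds, and pushes the expiry out before the others ever miss.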