Use Redis for short-lived application caching
- Status: proposed
- Deciders: <team or role>
- Technical story: <issue link> · Date: <YYYY-MM-DD>
Context and Problem Statement
Repeated reads hammer the database on hot API paths. We need to accelerate those reads without blurring the boundary between the fast path and the system of record.
Decision Drivers
- Latency and load on primary storage
- Correctness when cache misses or evicts
- Operational familiarity: the team already runs Redis in this environment
Considered Options
- No cache — simplest; still slow and expensive at peak
- In-process only — uneven under horizontal scale
- Redis TTL cache — shared, measurable, not authoritative
Decision Outcome
Adopt Redis as a read-through, TTL-bounded cache for approved routes; the database remains the source of truth.
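The read-through pattern above can be sketched as follows. This is a minimal illustration, not the team's implementation: `FakeRedis` is an in-memory stand-in mimicking the Redis `GET`/`SETEX` semantics so the sketch runs without a server, and `load_from_db` is a hypothetical loader callback.

```python
import time

class FakeRedis:
    """In-memory stand-in for a Redis client (get/setex only), for illustration."""
    def __init__(self):
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value alongside its absolute expiry time.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # TTL elapsed: treat as a miss
            return None
        return value

def read_through(cache, key, ttl_seconds, load_from_db):
    """Serve from cache if present; otherwise load from the DB and cache with a TTL.

    The database remains the source of truth; the cache only bounds staleness
    to ttl_seconds.
    """
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = load_from_db()
    cache.setex(key, ttl_seconds, value)
    return value
```

With a real client, `redis.Redis.get` and `redis.Redis.setex` have the same shape; the TTL bound is what keeps a stale entry from outliving its usefulness without explicit invalidation.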
Consequences
- Fewer DB reads on hot paths; explicit invalidation and key design become review topics
- Redis unavailability or slowness must degrade safely (otherwise a cache outage spikes load onto the DB)
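The safe-degradation requirement can be sketched as "treat any cache error as a miss": the request is served from the DB and the cache write-back is best-effort. This is a sketch, not the production code; `FailingCache` is a hypothetical stand-in for a Redis client that is down.

```python
class FailingCache:
    """Stand-in for an unavailable Redis client, for illustration."""
    def get(self, key):
        raise ConnectionError("redis unavailable")

    def setex(self, key, ttl_seconds, value):
        raise ConnectionError("redis unavailable")

def read_with_fallback(cache, key, ttl_seconds, load_from_db):
    """Never let a cache failure fail the request: fall back to the DB."""
    try:
        cached = cache.get(key)
        if cached is not None:
            return cached
    except Exception:
        pass  # degrade: treat the error as a cache miss
    value = load_from_db()
    try:
        cache.setex(key, ttl_seconds, value)
    except Exception:
        pass  # best-effort write-back; the DB already served the request
    return value
```

Note that this fallback alone does not cap DB load during an outage; a real deployment would pair it with short client timeouts and some form of load shedding or circuit breaking, which this ADR leaves to review.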
Pros and Cons of the Options (summary)
Redis fits existing operational expertise; the tradeoffs are cache-correctness risk and one more runtime dependency.
Confirmation
Cache owners and SRE sign off in review before any new cached route is enabled in production.