Redis

An open-source, in-memory data structure store used as a database, cache, message broker, and streaming engine, leveraging a single-threaded event loop for extreme performance.

Cheat Sheet

Prime Use Case

When sub-millisecond latency is required for high-frequency read/write operations on structured data that fits within RAM.

Critical Tradeoffs

  • Memory cost vs. Disk cost
  • Single-threaded simplicity vs. Multi-core utilization
  • Consistency vs. Availability during partitions (CAP theorem)
  • Persistence overhead vs. Data durability

Killer Senior Insight

Redis is not just a 'key-value' store; it is a 'Data Structure Server.' Its power lies in performing atomic, server-side operations on complex types (Sets, Hashes, Sorted Sets) without the round-trip overhead of 'read-modify-write' cycles.
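As a sketch of that idea, here is a minimal in-memory model of server-side atomic operations (pure Python; `MiniRedis` and its method names mirror Redis commands but are hypothetical stand-ins, not a client library). The client asks the server to increment or re-rank; it never reads a value, modifies it, and writes it back.

```python
# Hypothetical toy model of a "Data Structure Server": each method runs
# to completion inside the (single-threaded) server, so there is no
# read-modify-write window for another client to interleave into.
class MiniRedis:
    def __init__(self):
        self._data = {}

    def incr(self, key, amount=1):
        # Server-side equivalent of INCR: atomic by construction.
        self._data[key] = self._data.get(key, 0) + amount
        return self._data[key]

    def zincrby(self, key, increment, member):
        # Server-side equivalent of ZINCRBY on a sorted set.
        zset = self._data.setdefault(key, {})
        zset[member] = zset.get(member, 0) + increment
        return zset[member]

    def zrevrange(self, key, start, stop):
        # ZREVRANGE: members ordered by score, highest first (stop inclusive).
        zset = self._data.get(key, {})
        ranked = sorted(zset, key=zset.get, reverse=True)
        return ranked[start:stop + 1]

r = MiniRedis()
r.incr("page:views")                        # → 1
r.zincrby("leaderboard", 50, "alice")
r.zincrby("leaderboard", 30, "bob")
r.zincrby("leaderboard", 25, "alice")       # alice now at 75
top = r.zrevrange("leaderboard", 0, 1)      # → ["alice", "bob"]
```

The point of the sketch: the leaderboard update is one command, not a GET, a client-side sort, and a SET racing against other writers.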

Recognition

Common Interview Phrases

  • Need for sub-millisecond response times
  • Requirement for atomic counters or rate limiting
  • Real-time leaderboards or ranking systems
  • Distributed locking or synchronization across microservices
  • High-throughput session management

Common Scenarios

  • Caching frequently accessed database queries
  • Implementing a sliding-window rate limiter
  • Managing real-time pub/sub for chat applications
  • Storing geospatial data for proximity searches
  • Deduplication in stream processing
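The sliding-window rate limiter above is typically built on a sorted set keyed by timestamp (ZREMRANGEBYSCORE to expire old entries, ZCARD to count, ZADD to record, wrapped in MULTI/EXEC or a Lua script so the steps are atomic). A hedged pure-Python sketch of the algorithm, with the sorted set modeled as a plain list of timestamps:

```python
import time

# Toy sliding-window limiter. In real Redis the per-key "hits" structure
# would be a ZSET and the three steps below would run atomically.
class SlidingWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = {}  # key -> list of timestamps (stand-in for a ZSET)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        window_start = now - self.window
        # Step 1 (ZREMRANGEBYSCORE): drop entries older than the window.
        timestamps = [t for t in self.hits.get(key, []) if t > window_start]
        # Step 2 (ZCARD): reject if the window is already full.
        if len(timestamps) >= self.limit:
            self.hits[key] = timestamps
            return False
        # Step 3 (ZADD): record this request.
        timestamps.append(now)
        self.hits[key] = timestamps
        return True

limiter = SlidingWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow("user:42", now=100 + i) for i in range(4)]
# First three requests within the window pass, the fourth is rejected.
```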

Anti-patterns to Avoid

  • Using Redis as a primary relational database for complex joins
  • Storing multi-gigabyte large objects (LOBs) in a single key
  • Using it for cold data storage where disk-based DBs are more cost-effective

The Problem

The Fundamental Issue

The 'Disk I/O Wall': disk seeks and the locking overhead of traditional RDBMSs cap throughput and inflate latency for high-velocity data.

What breaks without it

  • Database saturation under high read/write loads
  • Increased application latency due to disk seeks
  • Race conditions in distributed systems without atomic primitives
  • Inability to scale real-time features like live counters

Why alternatives fail

  • Memcached lacks complex data structures and persistence options
  • RDBMS locking (pessimistic or optimistic) is too slow for in-memory-speed operations
  • Local application memory doesn't scale across multiple server instances

Mental Model

The Intuition

Imagine a master chef (the single-threaded event loop) who is incredibly fast. Instead of having many slow chefs bumping into each other in a small kitchen (locking/contention), this one chef handles every order one-by-one from a perfectly organized counter (RAM). Because the chef never has to leave the kitchen to get ingredients from the basement (Disk), they can serve thousands of customers per second.

Key Mechanics

1. Non-blocking I/O multiplexing (epoll/kqueue)
2. Single-threaded execution of commands to avoid context switching and locks
3. Asynchronous persistence (RDB snapshots and AOF logs)
4. In-memory hash table for O(1) lookups
5. Redis Sentinel for high availability and Redis Cluster for horizontal sharding
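Mechanics 1 and 2 can be seen in miniature with Python's `selectors` module, which wraps the same epoll/kqueue facilities: one thread multiplexes all connections and runs each command to completion before touching the next. This is a toy illustration of the pattern, not Redis's actual implementation:

```python
import selectors
import socket

# One thread, many sockets: readiness notification instead of blocking.
sel = selectors.DefaultSelector()
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client_side.send(b"PING")

# The "event loop": wait for any ready socket, then execute the command
# fully. No locks are needed because nothing else runs concurrently.
for key, _ in sel.select(timeout=1):
    command = key.fileobj.recv(64)
    if command == b"PING":
        key.fileobj.send(b"+PONG")

reply = client_side.recv(64)
sel.unregister(server_side)
server_side.close()
client_side.close()
```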

Framework

When it's the best choice

  • When the dataset fits in RAM
  • When you need atomic operations on lists or sets
  • When low latency is the primary non-functional requirement

When to avoid

  • When data integrity requires strict ACID compliance across multiple keys (MULTI/EXEC and Lua scripts provide atomicity, but only within a single instance)
  • When the dataset is significantly larger than available RAM
  • When you need complex ad-hoc querying capabilities

Fast Heuristics

If you need simple strings only: Memcached might suffice, but Redis is usually preferred for its feature set.
If you need persistence and complex types: Redis is the clear winner over Memcached.
If you need multi-master writes: Consider DynamoDB or Cassandra instead of standard Redis.

Tradeoffs

Strengths

  • Extremely high throughput (100k+ ops/sec per core)
  • Rich set of data structures (Bitmaps, HyperLogLogs, Streams)
  • Atomic operations reduce application-side complexity
  • Simple to operate and highly mature ecosystem

Weaknesses

  • Data loss risk during failover (asynchronous replication)
  • Memory is expensive compared to SSD/HDD storage
  • Single-threaded nature means one 'heavy' command (like KEYS *) can block the entire server
  • Cold starts can be slow if loading a massive RDB file into memory

Alternatives

Memcached

When it wins

For very simple key-value caching where memory management (LRU) needs to be extremely aggressive and a multi-threaded server is preferred.

Key Difference

Multi-threaded, but lacks data structures and persistence.

Aerospike

When it wins

When you need Redis-like performance but the dataset is in the multi-terabyte range (Hybrid Memory Architecture).

Key Difference

Optimized for Flash/SSD storage, not just RAM.

Amazon DynamoDB (with DAX)

When it wins

When operating in AWS and requiring a fully managed, serverless caching layer with seamless integration.

Key Difference

Write-through caching tightly coupled with a NoSQL database.

KeyDB

When it wins

When you need a multi-threaded version of Redis to fully utilize multi-core CPUs on a single instance.

Key Difference

A multi-threaded fork of Redis.

Execution

Must-hit talking points

  • Explain the Event Loop and why single-threading avoids lock contention.
  • Discuss Eviction Policies (LRU, LFU, Volatile-TTL) and why they matter for cache stability.
  • Mention Pipelining to reduce RTT (round-trip time).
  • Distinguish between RDB (point-in-time snapshots) and AOF (append-only logs) persistence.
  • Explain Redis Cluster's hash slot mechanism for sharding.
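The hash-slot mechanism is concrete enough to sketch: Redis Cluster assigns each key to one of 16384 slots via CRC16 (XMODEM variant) modulo 16384, hashing only the substring inside the first `{...}` hash tag if one is present, which is how multi-key operations are kept on one shard:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16/XMODEM: polynomial 0x1021, initial value 0, no reflection.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # If the key contains a non-empty {tag}, only the tag is hashed,
    # so keys sharing a tag always land on the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

assert crc16_xmodem(b"123456789") == 0x31C3  # standard CRC16/XMODEM check value
same = hash_slot("{user:1000}.following") == hash_slot("{user:1000}.followers")
```

Because both keys share the `{user:1000}` tag, a multi-key command over them is legal in cluster mode.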

Anticipate follow-ups

  • Q: How do you handle 'hot keys' in a Redis Cluster?
  • Q: What happens to the system during a 'Cache Stampede' or 'Thundering Herd'?
  • Q: How does Redis replication work (asynchronous replication vs. the WAIT command)?
  • Q: How would you implement a distributed lock using Redlock?
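For the Redlock follow-up, the single-node building block is `SET key token NX PX ttl` to acquire, plus a compare-and-delete on release so only the owner can unlock (in real Redis the release must be a Lua script to make the compare-and-delete atomic). A hedged pure-Python model (`MiniLock` is a hypothetical stand-in, not a client library):

```python
import time
import uuid

class MiniLock:
    def __init__(self):
        self._store = {}  # key -> (token, expiry timestamp)

    def acquire(self, key, ttl_ms, now=None):
        # Models SET key token NX PX ttl_ms.
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry and entry[1] > now:
            return None                       # NX: lock already held
        token = uuid.uuid4().hex              # random token identifies owner
        self._store[key] = (token, now + ttl_ms / 1000)
        return token

    def release(self, key, token):
        # Delete only if the token matches, so a client whose lock
        # expired cannot delete a lock now held by someone else.
        entry = self._store.get(key)
        if entry and entry[0] == token:
            del self._store[key]
            return True
        return False

lock = MiniLock()
t1 = lock.acquire("resource", 30_000, now=0)   # acquired
t2 = lock.acquire("resource", 30_000, now=1)   # None: held by t1
released = lock.release("resource", t1)
t3 = lock.acquire("resource", 30_000, now=2)   # acquired again
```

Redlock proper runs this protocol against a majority of independent Redis nodes; the per-node mechanics are what is sketched here.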

Red Flags

Running O(N) commands like 'KEYS *' or 'HGETALL' on large datasets in production.

Why it fails: Since Redis is single-threaded, these commands block all other requests, causing a total system hang until the command completes.
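The production-safe alternative is cursor-based iteration with SCAN, which does a small, bounded amount of work per call so the event loop is never blocked for long. A toy model of the cursor contract (real SCAN cursors are hash-table positions, not list offsets, and a cursor of 0 signals completion):

```python
def scan(keys, cursor, count=2):
    # Hypothetical stand-in for SCAN: return up to `count` keys starting
    # at `cursor`, plus the next cursor (0 once the keyspace is exhausted).
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), batch

keys = ["user:1", "user:2", "user:3", "user:4", "user:5"]
cursor, seen = 0, []
while True:
    cursor, batch = scan(keys, cursor)
    seen.extend(batch)   # each call is a short, bounded unit of work
    if cursor == 0:
        break
```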

Ignoring the 'Maxmemory' setting and eviction policies.

Why it fails: The server may crash with an Out-of-Memory (OOM) error or start swapping to disk, which destroys performance.
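Conceptually, `maxmemory-policy allkeys-lru` behaves like the classic LRU cache sketched below; note that real Redis approximates LRU by sampling a few keys per eviction rather than maintaining an exact recency order:

```python
from collections import OrderedDict

# Exact-LRU sketch: Redis's sampled LRU has the same effect in spirit.
class LRUCache:
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self._data = OrderedDict()

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.max_keys:
            self._data.popitem(last=False)    # evict least-recently-used key

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)           # a read refreshes recency
        return self._data[key]

cache = LRUCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")          # "a" is now most recently used
cache.set("c", 3)       # capacity exceeded: evicts "b", not "a"
```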

Assuming Redis is a 100% durable database by default.

Why it fails: Default persistence settings often favor performance over durability; a crash can lead to several seconds of data loss.
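Tightening durability is a configuration decision. For example, the standard redis.conf directives below enable the append-only file with a once-per-second fsync, bounding crash loss to roughly one second (at the cost of write throughput; `appendfsync always` fsyncs per command and is slower still):

```conf
# RDB: snapshot to disk if at least 1 key changed in 900 seconds
save 900 1

# AOF: log every write command; fsync the log once per second
appendonly yes
appendfsync everysec
```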