System Design

Master System Design with 50 free flashcards. Study using spaced repetition and focus mode for effective learning in Programming.

🎓 50 cards ⏱️ ~25 min Advanced

🎯 What You'll Learn

Preview Questions (12 of 50 shown)

What is horizontal scaling?

Horizontal scaling (or scaling out) means adding more machines to your resource pool to handle increased load. For example, going from 1 server to 10 servers behind a load balancer.

Advantages:
- Near-infinite scalability
- Better fault tolerance — one server failing doesn't take down the system
- Can use commodity hardware

Disadvantage: adds complexity in data consistency and distributed coordination.

What is vertical scaling?

Vertical scaling (or scaling up) means adding more power (CPU, RAM, disk) to an existing machine.

Advantages:
- Simpler architecture — no distributed coordination needed
- No code changes required

Disadvantages:
- Hardware limits — there's a ceiling to how much you can upgrade
- Single point of failure
- Downtime during upgrades

What is a load balancer?

A load balancer distributes incoming network traffic across multiple backend servers to ensure no single server is overwhelmed.

It sits between clients and servers, improving:
- Availability — routes around failed servers
- Throughput — parallel processing across servers
- Latency — directs requests to the least-loaded server

Common examples: NGINX, HAProxy, AWS ALB/NLB.

What are common load balancing algorithms?

Common algorithms:
- Round Robin — requests are distributed sequentially across servers
- Weighted Round Robin — servers with higher capacity get more requests
- Least Connections — routes to the server with the fewest active connections
- IP Hash — routes based on client IP, ensuring session stickiness
- Random — randomly selects a server
- Least Response Time — routes to the fastest-responding server
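Three of these strategies can be sketched in a few lines of Python. This is an illustrative sketch, not a real load balancer's implementation: the class names are made up, and CRC32 stands in for whatever hash a given product uses (CRC32 is chosen here only because it is stable across runs, unlike Python's built-in `hash` on strings).

```python
import zlib
from itertools import cycle


class RoundRobin:
    """Hand each request to the next server in a fixed rotation."""
    def __init__(self, servers):
        self._ring = cycle(servers)

    def pick(self):
        return next(self._ring)


class LeastConnections:
    """Route to the server currently holding the fewest active connections."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1  # call when the connection closes


def ip_hash(servers, client_ip):
    """Sticky routing: the same client IP maps to the same server
    as long as the server list is unchanged."""
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]
```

Note that with plain modulo IP hashing, adding or removing a server remaps most clients; consistent hashing is the usual fix for that.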

What are Layer 4 vs Layer 7 load balancers?

Layer 4 (Transport) load balancers route based on IP address and TCP/UDP port without inspecting packet contents. They are faster but less flexible.

Layer 7 (Application) load balancers inspect HTTP headers, cookies, and URLs to make smarter routing decisions. They can do content-based routing, SSL termination, and request modification.

Example: HAProxy supports both; AWS NLB is L4, ALB is L7.

How do load balancer health checks work?

Health checks are periodic probes sent by the load balancer to backend servers to verify they are operational.

Types:
- Active — the LB sends HTTP/TCP requests at intervals (e.g., GET /health every 10s)
- Passive — the LB monitors real traffic for errors or timeouts

Configuration typically includes: interval, timeout, unhealthy threshold (failures before removal), and healthy threshold (successes before re-adding).
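The threshold logic can be sketched as a small state machine in Python. The class and parameter names are illustrative; a real load balancer would run the probes on a timer and feed the results into logic like this:

```python
class HealthTracker:
    """Tracks per-server health from probe results: a server is marked
    unhealthy after N consecutive failures and healthy again after
    M consecutive successes."""
    def __init__(self, servers, unhealthy_threshold=3, healthy_threshold=2):
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy_threshold = healthy_threshold
        self.fail_streak = {s: 0 for s in servers}
        self.ok_streak = {s: 0 for s in servers}
        self.healthy = {s: True for s in servers}

    def record(self, server, probe_ok):
        if probe_ok:
            self.fail_streak[server] = 0
            self.ok_streak[server] += 1
            if self.ok_streak[server] >= self.healthy_threshold:
                self.healthy[server] = True   # re-add after enough successes
        else:
            self.ok_streak[server] = 0
            self.fail_streak[server] += 1
            if self.fail_streak[server] >= self.unhealthy_threshold:
                self.healthy[server] = False  # remove after enough failures

    def in_rotation(self):
        """Servers the LB should currently send traffic to."""
        return [s for s, ok in self.healthy.items() if ok]
```

Requiring consecutive failures (rather than a single one) keeps one dropped probe from ejecting a healthy server.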

What is Redis?

Redis (Remote Dictionary Server) is an open-source, in-memory key-value data store used as a cache, message broker, and database.

Key features:
- Sub-millisecond latency
- Rich data structures: strings, hashes, lists, sets, sorted sets, streams
- Persistence options: RDB snapshots and AOF (Append Only File)
- Built-in replication, Lua scripting, pub/sub
- Cluster mode for horizontal scaling

What is Memcached and how does it differ from Redis?

Memcached is a distributed in-memory caching system designed for simplicity and speed.

Differences from Redis:
- Memcached: only simple key-value strings; Redis: rich data structures
- Memcached: no persistence; Redis: optional persistence
- Memcached: multi-threaded; Redis: single-threaded (with I/O threads in v6+)
- Memcached: no replication; Redis: built-in replication
- Memcached: better for simple caching; Redis: better for complex use cases

What is CDN caching?

A CDN (Content Delivery Network) caches static and dynamic content at edge servers geographically distributed around the world.

How it works:
1. User requests content → routed to the nearest edge server
2. If cached (cache hit), the content is served directly with low latency
3. If not cached (cache miss), it is fetched from the origin, cached, then served

Benefits: reduced latency, lower origin server load, DDoS protection.
Examples: CloudFlare, AWS CloudFront, Akamai.

What are cache invalidation strategies?

Cache invalidation ensures stale data is removed or updated. Main strategies:
- TTL (Time-To-Live) — cache entries expire after a set duration
- Event-based — invalidate on write/update events
- Write-through — data is written to cache and DB simultaneously
- Write-behind — data is written to cache first, DB updated asynchronously
- Manual purge — explicitly delete specific cache keys

"There are only two hard things in CS: cache invalidation and naming things."
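TTL expiry and manual purge, the two simplest strategies here, can be sketched in Python. The class is illustrative (not a library API); the injectable clock exists only to make expiry deterministic in tests.

```python
import time


class TTLCache:
    """Minimal TTL-based invalidation: entries expire `ttl` seconds
    after being set, and are lazily evicted on read."""
    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock        # injectable for testing
        self._store = {}          # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def purge(self, key):
        """Manual purge: explicitly drop one key."""
        self._store.pop(key, None)
```

Production caches (Redis `EXPIRE`, Memcached item TTLs) combine lazy eviction like this with background sweeps so expired keys don't linger unread.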

What is the cache-aside pattern?

In the cache-aside (lazy-loading) pattern, the application manages the cache directly:

Read path:
1. Check the cache for the data
2. On a cache miss, read from the database
3. Store the result in the cache and return it to the caller

Write path:
1. Write to the database
2. Invalidate (delete) the cache entry

Pros: only requested data is cached; resilient to cache failures.
Cons: cache-miss penalty (an extra DB call); possible stale data between write and invalidation.
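Both paths can be sketched over an in-memory stand-in for the database; the class and method names are illustrative, not any library's API.

```python
class CacheAside:
    """Cache-aside (lazy loading): the application checks the cache,
    falls back to the DB on a miss, and invalidates on writes."""
    def __init__(self, db):
        self.db = db          # stand-in for a real database
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:           # 1. check the cache
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.db.get(key)        # 2. on miss, read the DB
        if value is not None:
            self.cache[key] = value     # 3. populate the cache
        return value

    def write(self, key, value):
        self.db[key] = value            # 1. write to the DB
        self.cache.pop(key, None)       # 2. invalidate the cache entry
```

Deleting on write (rather than updating the cache) avoids racing writers leaving the cache with the older of two values.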

What is write-through caching?

In write-through caching, every write goes to both the cache and the database synchronously.

App → write to Cache → write to DB

Pros:
- Cache is always consistent with the DB
- No stale reads

Cons:
- Higher write latency (two writes per operation)
- Cache may hold data that is never read

Write-through is often combined with TTL to evict unused entries.
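A minimal write-through sketch, again over an in-memory stand-in DB with illustrative names (a real deployment would write to Redis or Memcached and an actual database):

```python
class WriteThroughCache:
    """Every write goes to the cache and the DB in the same operation,
    so reads never see the cache and DB disagree."""
    def __init__(self, db):
        self.db = db         # stand-in for a real database
        self.cache = {}

    def write(self, key, value):
        self.cache[key] = value   # write to the cache...
        self.db[key] = value      # ...and to the DB synchronously

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.db.get(key)  # warm the cache on a miss
        if value is not None:
            self.cache[key] = value
        return value
```

Compare this with the cache-aside write path above: cache-aside deletes the cached entry on write, while write-through keeps it current at the cost of a slower write.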
