Public Solution
Cake Shop 2 - Scaling Up
This Cake Shop 2 - Scaling Up solution gives a production-minded baseline for the prompt: a concise requirements recap, a component-by-component architecture breakdown, explicit tradeoffs across latency, availability, cost, and complexity, plus failure mitigations and scoring rationale so you can benchmark your own design quickly.
Requirements Recap
| Requirement | Target |
|---|---|
| Daily active users | ~500,000 |
| Peak concurrent users | ~25,000 |
| Read : Write ratio | 100 : 1 |
| Image catalog | ~2,000 high-res photos |
| Page load target | < 300 ms |
| Availability target | 99.9% |
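The targets above translate into fairly modest raw load, which is worth confirming with a quick back-of-the-envelope pass. In the sketch below, the per-user request count and the 3x peak factor are illustrative assumptions, not figures from the prompt:

```python
# Back-of-the-envelope load estimate from the requirements table.
DAU = 500_000
REQS_PER_USER_PER_DAY = 20      # assumed average activity per user
SECONDS_PER_DAY = 86_400
READ_WRITE_RATIO = 100          # 100 reads for every write

total_requests = DAU * REQS_PER_USER_PER_DAY      # 10M requests/day
avg_qps = total_requests / SECONDS_PER_DAY        # ~116 QPS average
peak_qps = avg_qps * 3                            # ~350 QPS, assuming a 3x peak
write_qps = avg_qps / (READ_WRITE_RATIO + 1)      # ~1 write/s on average

print(f"avg {avg_qps:.0f} qps | peak ~{peak_qps:.0f} qps | writes {write_qps:.1f}/s")
```

Even at the assumed peak, this is hundreds of QPS, not tens of thousands, which is why the architecture below can stay small: a cache plus a single relational primary is comfortably within range.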
Architecture Breakdown (Component-by-Component)
1. Web Browser
Generates user traffic and receives responses.
It is the entry point of the request flow: every request in the system originates here.
2. CDN
Serves cacheable and static content from edge locations.
On a cache miss it falls back to the origin; the ~2,000 high-res catalog photos are ideal CDN candidates and keep image bytes off the API servers entirely.
3. Load Balancer
Distributes requests across healthy backend instances.
Fans a single incoming flow out across the pool of API server instances.
4. API Servers
Runs core business logic and orchestrates downstream calls.
Fronts two downstream dependencies: the Redis cache for hot reads and PostgreSQL for authoritative data.
5. Redis Cache
Stores hot data to reduce origin read latency.
It is a lossy accelerator, not a system of record: every cached value must be rebuildable from PostgreSQL.
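A minimal cache-aside read for this role can be sketched as follows. The `FakeCache` stand-in, key format, and TTL values are illustrative assumptions, not the challenge's API; a real deployment would use a Redis client's `get`/`setex` with the same shape:

```python
import json
import random
import time

class FakeCache:
    """In-memory stand-in for a Redis client (get/setex only)."""
    def __init__(self):
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]
        return None

    def setex(self, key, ttl, value):
        self.store[key] = (value, time.time() + ttl)

def get_cake(cake_id, cache, fetch_from_db):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"cake:{cake_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # hit: no origin round-trip
    row = fetch_from_db(cake_id)               # miss: read from PostgreSQL
    ttl = 300 + random.randint(0, 60)          # jittered TTL spreads expiries
    cache.setex(key, ttl, json.dumps(row))
    return row

# Demo: two reads of the same cake produce one database call.
cache = FakeCache()
db_calls = []
def fetch_from_db(cake_id):
    db_calls.append(cake_id)
    return {"id": cake_id, "name": "red velvet"}

first = get_cake(7, cache, fetch_from_db)      # miss -> hits the database
second = get_cake(7, cache, fetch_from_db)     # hit  -> served from cache
```

With the stated 100:1 read-to-write ratio, even a modest hit rate on this path removes the vast majority of origin reads.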
6. PostgreSQL
Persists relational data with transactional guarantees.
It is the system of record: the terminal, authoritative store at the end of the write path.
Tradeoffs (Latency / Availability / Cost / Complexity)
| Decision | Latency | Availability | Cost | Complexity |
|---|---|---|---|---|
| Keep the request path focused on core business operations | Shorter synchronous path keeps average response time stable | Fewer inline dependencies reduce immediate failure blast radius | Avoids unnecessary infrastructure in the first rollout | Lower coordination overhead for small teams |
| Keep a clear system of record for transactional writes | Predictable read/write behavior with indexed access | Strong correctness with managed backup and recovery | Storage and IOPS spend grows with write volume | Schema evolution and query tuning required |
| Cache hot reads in front of the primary data store | Lower median and tail latency on repeated reads | Absorbs origin pressure during read spikes | Adds cache infra spend but reduces database scaling pressure | Requires TTL and invalidation discipline |
| Distribute traffic across multiple app instances | Stable p95 by reducing overloaded nodes | Removes single-instance failure risk | Higher compute footprint than single-server design | Needs health checks and rollout-aware routing |
Failure Modes and Mitigations
Failure mode: Primary datastore saturation drives up latency and causes write timeouts
Mitigation: Tune indexes, add read offload where valid, and cap expensive query classes.
Failure mode: Cache stampede after hot-key expiry overloads the database
Mitigation: Use request coalescing, jittered TTLs, and stale-while-revalidate for hot keys.
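Request coalescing can be sketched as a single-flight guard: when many requests miss on the same expired key at once, only one "leader" hits the database and the rest wait for its result. The class and key names below are illustrative; production systems often reach for a library or a Redis-level lock instead:

```python
import threading
import time

class SingleFlight:
    """Coalesce concurrent misses for one key into a single origin call;
    followers block until the leader's result is available."""
    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}   # key -> Event, set when the leader finishes
        self._results = {}

    def do(self, key, fetch):
        with self._lock:
            event = self._inflight.get(key)
            leader = event is None
            if leader:
                event = threading.Event()
                self._inflight[key] = event
        if leader:
            try:
                self._results[key] = fetch()   # only the leader hits the origin
            finally:
                event.set()
                with self._lock:
                    del self._inflight[key]
        else:
            event.wait()                       # followers reuse the result
        return self._results[key]

# Demo: 8 concurrent misses for the same key produce one origin fetch.
sf = SingleFlight()
origin_calls = []
def slow_fetch():
    origin_calls.append(1)
    time.sleep(0.2)                            # simulate a slow database read
    return {"id": 7, "name": "red velvet"}

results = []
threads = [threading.Thread(target=lambda: results.append(sf.do("cake:7", slow_fetch)))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Combined with the jittered TTLs above, this bounds database load on a hot-key expiry to one query per key rather than one per in-flight request.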
Failure mode: Unhealthy instances continue receiving traffic during partial failure
Mitigation: Use active health checks, low-fail thresholds, and connection draining on rollout.
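The health-check state machine behind this mitigation can be sketched as follows. Thresholds and method names are illustrative assumptions; real load balancers run these probes on a timer per backend:

```python
class HealthChecker:
    """Active health checks with low fail/recover thresholds: eject an
    instance after a few consecutive failures, readmit it on recovery."""
    def __init__(self, fail_threshold=2, recover_threshold=1):
        self.fail_threshold = fail_threshold
        self.recover_threshold = recover_threshold
        self._fails = {}
        self._oks = {}
        self.healthy = {}

    def record_probe(self, instance, ok):
        """Record one probe result; return whether the instance stays in rotation."""
        if ok:
            self._oks[instance] = self._oks.get(instance, 0) + 1
            self._fails[instance] = 0
            if self._oks[instance] >= self.recover_threshold:
                self.healthy[instance] = True
        else:
            self._fails[instance] = self._fails.get(instance, 0) + 1
            self._oks[instance] = 0
            if self._fails[instance] >= self.fail_threshold:
                self.healthy[instance] = False
        return self.healthy.get(instance, True)   # unknown instances start healthy

# Demo: one failure tolerated, two failures eject, one success readmits.
checker = HealthChecker(fail_threshold=2)
after_one_fail = checker.record_probe("app-1", ok=False)   # still in rotation
after_two_fails = checker.record_probe("app-1", ok=False)  # ejected
after_recovery = checker.record_probe("app-1", ok=True)    # back in rotation
```

The low fail threshold keeps detection fast during partial failures, while the recover threshold prevents a flapping instance from being readmitted on a single lucky probe.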
Why This Scores Well
- Availability (35%): Redundant routing and data paths reduce single points of failure under burst traffic.
- Latency (20%): The design keeps hot reads close to users and reduces expensive origin round-trips.
- Resilience (25%): Clear role separation and bounded dependencies reduce cascading-failure risk.
- Cost Efficiency (10%) + Simplicity (10%): The architecture stays right-sized for the stated constraints, avoiding premature infra sprawl.
Next Steps
Validate this architecture by solving the prompt yourself, then practice the highest-leverage component in a guided lab and topic hub.
FAQ
What should I change first if traffic doubles?
Profile the bottleneck first, then scale the hot path component (usually compute, cache, or read path) before adding new system layers.
Why are databases emphasized in this solution?
They are the highest-leverage topic for this challenge's constraints and directly improve score-impacting metrics such as latency, availability, and resilience.
How do I validate this architecture quickly?
Run the same challenge in the simulator, compare score breakdown metrics, and then test one tradeoff change at a time.
Related Reading
Back-of-the-Envelope Estimation for System Design Interviews
A step-by-step framework for capacity estimation: QPS, storage, bandwidth, and memory calculations that interviewers actually expect.
Database Scaling Strategies: Replication, Sharding, and Partitioning
A practical guide to scaling databases in system design: when to replicate, when to shard, and how partitioning strategies affect your architecture.