Easy · Cake Shop · Part 2

Cake Shop 2 - Scaling Up

Databases · Caching · Load Balancing · CDN

This challenge builds on Cake Shop 1 - Going Online. Complete it first for the best experience.

Problem Statement

Sweet Crumbs Bakery went viral after a celebrity posted about their red velvet cake. Traffic has grown 50× overnight - from 10,000 to 500,000 daily visitors. The single-server setup from Challenge 1 is buckling under load.

Maya has secured funding and hired two more developers. She needs:

  • The catalog to remain fast even under heavy read traffic.
  • Orders to be reliably accepted without data loss, even during surges.
  • Static assets (high-res cake photos) to load quickly for users everywhere.

Redesign the architecture to handle this new scale while keeping costs reasonable.

What You'll Learn

The bakery goes viral on social media. Handle 500k daily visitors with caching and load balancing. Build this architecture under realistic production constraints, then validate tradeoffs in the design lab simulation.


Constraints

Daily active users: ~500,000
Peak concurrent users: ~25,000
Read : write ratio: 100 : 1
Image catalog: ~2,000 high-res photos
Page load target: < 300 ms
Availability target: 99.9%

How to Approach

Clarify requirements: Traffic is growing from 10k to 500k daily visitors. The feature set is unchanged, but peak traffic (roughly 3× average) now overwhelms a single server.

Estimate scale: 500k DAU / 86,400 s ≈ 5.8 RPS average. 3× peak ≈ 17 RPS. With a 100 : 1 read : write ratio, almost all of that is catalog reads, so a cache and additional API servers are warranted.
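The back-of-envelope arithmetic can be checked against the constraints table with a quick script; the 3× peak factor is a rule of thumb, not a measured value:

```python
# Back-of-envelope traffic estimate from the constraints above.
DAU = 500_000            # daily active users
SECONDS_PER_DAY = 86_400
PEAK_FACTOR = 3          # assumed peak-to-average multiplier

avg_rps = DAU / SECONDS_PER_DAY   # ~5.8 requests/second on average
peak_rps = avg_rps * PEAK_FACTOR  # ~17 requests/second at peak

# With a 100:1 read:write ratio, nearly all peak load is reads --
# exactly the traffic a catalog cache can absorb.
read_rps = peak_rps * 100 / 101
write_rps = peak_rps * 1 / 101

print(f"avg {avg_rps:.1f} rps, peak {peak_rps:.1f} rps")
print(f"peak reads {read_rps:.1f} rps, peak writes {write_rps:.2f} rps")
```

The absolute numbers are small; the point of the exercise is that reads dominate by two orders of magnitude, which shapes the whole design.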

Pick components:

  • Add Redis cache in front of PostgreSQL for product catalog reads (80%+ of traffic)
  • Add a load balancer to distribute across 2-3 API servers
  • Keep single PostgreSQL -- at this scale it handles writes fine
  • CDN still handles static assets
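One common way to wire Redis in front of PostgreSQL for catalog reads is the cache-aside pattern. The sketch below is a minimal illustration, not production code: a plain dict with expiry timestamps stands in for Redis, and the hypothetical `db_fetch_product` stands in for a real PostgreSQL query.

```python
import time

# Stand-in for Redis: key -> (value, expiry_timestamp).
cache = {}
CACHE_TTL_SECONDS = 60

def db_fetch_product(product_id):
    """Hypothetical stand-in for a PostgreSQL query."""
    return {"id": product_id, "name": "Red Velvet Cake", "price_cents": 4500}

def get_product(product_id):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"product:{product_id}"
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value                  # cache hit
        del cache[key]                    # entry expired: drop it
    value = db_fetch_product(product_id)  # cache miss: go to the DB
    cache[key] = (value, time.time() + CACHE_TTL_SECONDS)
    return value
```

With a 100 : 1 read : write ratio, nearly every catalog request after the first becomes a cache hit, which is why this single component absorbs most of the new load.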

Key tradeoffs to discuss:

  • Cache invalidation: when a product price changes, how does the cache update? (TTL vs explicit invalidation)
  • Session storage: with multiple API servers, sessions must be stored in Redis (not in-memory)
  • Connection pooling: each API instance holds DB connections -- 3 instances x 20 connections = 60 total
  • Reads dominate writes (browsing >> purchasing), so a read cache has high impact


Reference Solution

At 500k DAU, add a load balancer to distribute across multiple API servers and a Redis cache to absorb product catalog reads. The load balancer also enables zero-downtime deploys. Redis must store sessions so any API instance can serve any user. The CDN continues to handle static assets. PostgreSQL remains the single source of truth.
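As a toy illustration of what the load balancer does, round-robin distribution over the API instances can be sketched as follows (the server names are made up; real balancers also do health checks and connection draining):

```python
import itertools

# Hypothetical pool of API instances behind the load balancer.
servers = ["api-1", "api-2", "api-3"]
rotation = itertools.cycle(servers)

def route(request_id):
    """Round-robin: hand each incoming request to the next server in turn."""
    return next(rotation)

# Six requests land evenly across the three instances.
assignments = [route(i) for i in range(6)]
print(assignments)  # ['api-1', 'api-2', 'api-3', 'api-1', 'api-2', 'api-3']
```

Because any instance can serve any request, sessions cannot live in a single server's memory - which is exactly why they move into Redis.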