Public Solution

Design WhatsApp

This Design WhatsApp solution gives a production-minded baseline for the prompt. You get a concise requirements recap, a component-by-component architecture breakdown, explicit tradeoffs across latency, availability, cost, and complexity, plus failure mitigations and scoring rationale so you can benchmark your own design quickly.

Difficulty: Hard · Topics: WebSockets, Databases, Auth, Replication

Requirements Recap

| Requirement | Target |
| --- | --- |
| Monthly active users | 2,000,000,000 |
| Messages per day | ~100,000,000,000 |
| Peak messages/second | ~5,000,000 |
| Max group size | 1,024 members |
| Message delivery (online) | < 500 ms |
| Offline message retention | 30 days |
| E2E encryption | Mandatory |
| Availability target | 99.999% |
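
These targets imply a sizable gap between average and peak load, plus a large retention footprint. A quick back-of-envelope check (the 100-byte average message size is an assumed figure, not part of the prompt):

```python
# Back-of-envelope capacity math from the requirements table.
MESSAGES_PER_DAY = 100_000_000_000
SECONDS_PER_DAY = 86_400
PEAK_MPS = 5_000_000
AVG_MSG_BYTES = 100            # assumption for illustration only
RETENTION_DAYS = 30

avg_mps = MESSAGES_PER_DAY / SECONDS_PER_DAY
peak_to_avg = PEAK_MPS / avg_mps
daily_bytes = MESSAGES_PER_DAY * AVG_MSG_BYTES
retention_bytes = daily_bytes * RETENTION_DAYS

print(f"average msgs/sec: {avg_mps:,.0f}")                    # 1,157,407
print(f"peak-to-average ratio: {peak_to_avg:.1f}x")           # 4.3x
print(f"storage/day: {daily_bytes / 1e12:.0f} TB")            # 10 TB
print(f"30-day retention: {retention_bytes / 1e12:.0f} TB")   # 300 TB
```

The ~4.3x peak-to-average ratio is why the realtime path must scale elastically rather than being provisioned for the mean.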

Architecture Breakdown (Component-by-Component)

  1. Mobile Clients

    Represents mobile user traffic and request patterns.

    Acts as an entry layer that routes traffic into the rest of the system.

  2. DNS

    Resolves domain names to reachable service endpoints.

    Bridges 1 incoming flow to 1 downstream dependency.

  3. Load Balancer

    Distributes requests across healthy backend instances.

    Bridges 1 incoming flow to 1 downstream dependency.
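
One way the load balancer can keep a user's long-lived connection pinned to a stable chat server across reconnects is consistent hashing. A minimal sketch, assuming illustrative node names and a virtual-node count of 100:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps user IDs to chat-server nodes; adding or removing a node
    only remaps the keys adjacent to it on the ring."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    @staticmethod
    def _hash(key):
        # md5 used for uniform placement only, not for security
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, user_id):
        h = self._hash(user_id)
        # first ring entry clockwise from the user's hash, wrapping around
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["ws-1", "ws-2", "ws-3"])
print(ring.node_for("user-42"))  # the same user always routes to the same node
```

Sticky routing matters here because a WebSocket session holds per-connection state that is expensive to rebuild elsewhere.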

  4. API Gateway

    Validates sessions with the Auth Service, applies rate limits, and routes requests to the Core Service.

    Bridges 1 incoming flow to 2 downstream dependencies.

  5. Auth Service

    Verifies identity, sessions, and authorization decisions.

    Acts as a sink or system-of-record endpoint in the architecture flow.

  6. Core Service

    Implements the messaging logic: accepts sends, resolves recipients and group membership, and hands each message to the realtime bus and the message store.

    Bridges 1 incoming flow to 2 downstream dependencies.
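
The core delivery decision reduces to: push over the live connection when the recipient is online, otherwise queue within the 30-day retention window from the requirements. A simplified in-memory sketch (the session registry and offline queue would be distributed stores in practice):

```python
import time

ONLINE_SESSIONS = {}   # user_id -> send callable for a live connection
OFFLINE_QUEUE = {}     # user_id -> list of (expires_at, message)
RETENTION_S = 30 * 24 * 3600  # 30-day offline retention from the requirements

def deliver(to_user, message, now=None):
    """Push immediately if the recipient is connected; otherwise queue
    the message with a 30-day expiry."""
    now = time.time() if now is None else now
    send = ONLINE_SESSIONS.get(to_user)
    if send is not None:
        send(message)   # realtime path, must fit the <500 ms budget
        return "pushed"
    OFFLINE_QUEUE.setdefault(to_user, []).append((now + RETENTION_S, message))
    return "queued"

def drain_on_connect(user_id, now=None):
    """On reconnect, return every queued message that has not expired."""
    now = time.time() if now is None else now
    return [m for exp, m in OFFLINE_QUEUE.pop(user_id, []) if exp > now]
```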

  7. Realtime Bus

    Fans messages out over publish/subscribe channels so online recipients receive them with low latency.

    Acts as a sink or system-of-record endpoint in the architecture flow.
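
Pub/sub fan-out is what turns one send into up to 1,024 deliveries for a group. A minimal in-process sketch of the topic/subscriber model (a production bus would be a distributed broker, not a dictionary):

```python
from collections import defaultdict

class RealtimeBus:
    """Minimal topic bus: each conversation is a topic and each
    connected device registers a callback as a subscriber."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Group fan-out: one publish reaches every subscriber on the topic
        for callback in self.subscribers[topic]:
            callback(message)

bus = RealtimeBus()
received = []
bus.subscribe("chat:group-7", received.append)
bus.publish("chat:group-7", {"from": "alice", "text": "hi"})
```

With the 1,024-member cap, worst-case fan-out per publish stays bounded, which is what makes the sub-500 ms online delivery target tractable.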

  8. Primary NoSQL DB

    Stores high-scale data with flexible schema and throughput.

    Bridges 1 incoming flow to 1 downstream dependency.

  9. Replica SQL DB

    Persists relational data with transactional guarantees.

    Acts as a sink or system-of-record endpoint in the architecture flow.
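
For message storage at this write rate, a common wide-column pattern is to partition by conversation and cluster by time. A sketch of that key design (the `conv#` prefix and uuid suffix are illustrative choices, not a fixed schema):

```python
import time
import uuid

def message_key(conversation_id, ts_ms=None):
    """Composite key sketch for a wide-column store: partition by
    conversation so one chat's history is co-located on a shard, and use
    a zero-padded, time-ordered clustering key so "latest N messages"
    is a single range scan. The uuid suffix keeps same-millisecond
    writes unique."""
    ts_ms = int(time.time() * 1000) if ts_ms is None else ts_ms
    clustering = f"{ts_ms:013d}-{uuid.uuid4().hex[:8]}"
    return f"conv#{conversation_id}", clustering

partition_key, clustering_key = message_key("group-7")
```

Partitioning by conversation also localizes the hot path: a burst in one group chat saturates one partition's replicas instead of the whole cluster.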

Tradeoffs (Latency / Availability / Cost / Complexity)

| Decision | Latency | Availability | Cost | Complexity |
| --- | --- | --- | --- | --- |
| Keep the request path focused on core business operations | Shorter synchronous path keeps average response time stable | Fewer inline dependencies reduce immediate failure blast radius | Avoids unnecessary infrastructure in the first rollout | Lower coordination overhead for small teams |
| Keep a clear system of record for transactional writes | Predictable read/write behavior with indexed access | Strong correctness with managed backup and recovery | Storage and IOPS spend grows with write volume | Schema evolution and query tuning required |

Failure Modes and Mitigations

  • Failure mode: Primary datastore saturation increases latency and write timeouts

    Mitigation: Tune indexes, add read offload where valid, and cap expensive query classes.
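
The read-offload part of this mitigation is often a cache-aside layer in front of the primary store. A minimal sketch (eviction, TTLs, and invalidation are omitted for brevity):

```python
class CacheAside:
    """Read-offload sketch: serve hot reads from an in-memory cache and
    only hit the primary store on a miss, shedding load from the
    saturated datastore."""

    def __init__(self, backing_read):
        self.cache = {}
        self.backing_read = backing_read  # expensive primary-store read
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backing_read(key)
        self.cache[key] = value
        return value
```

Tracking the hit ratio is what tells you whether the offload is actually relieving the primary or just adding a hop.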

Why This Scores Well

  • Availability (35%): Redundant routing and data paths reduce single points of failure under burst traffic.
  • Latency (20%): Critical-path components are intentionally minimal to keep average latency stable.
  • Resilience (25%): Clear role separation and bounded dependencies reduce cascading-failure risk.
  • Cost Efficiency (10%) + Simplicity (10%): Higher complexity is scoped to requirements that actually demand scale or stronger fault tolerance.

Next Steps

Validate this architecture by solving the prompt yourself, then practice the highest-leverage component in a guided lab and topic hub.

FAQ

  • What should I change first if traffic doubles?

    Profile the bottleneck first, then scale the hot path component (usually compute, cache, or read path) before adding new system layers.

  • Why is WebSockets emphasized in this solution?

    Realtime delivery depends on persistent connections, making WebSockets the highest-leverage topic for this challenge's constraints; getting them right directly improves score-impacting metrics like latency, availability, and resilience.

  • How do I validate this architecture quickly?

    Run the same challenge in the simulator, compare score breakdown metrics, and then test one tradeoff change at a time.