Medium · RideShare · Part 2

RideShare 2 - Multi-City & Surge Pricing

Databases · Caching · Message Queues · Analytics · Sharding

This challenge builds on RideShare 1 - City Launch. Complete it first for the best experience.

Problem Statement

ZipRide has proven product-market fit and is expanding to 20 cities. The engineering challenges multiply:

- Surge pricing: dynamically adjust prices based on the supply/demand ratio per geographic zone, recalculated every 60 seconds.
- Data growth: ride history, driver trajectories, and payment records are growing fast. The single database is becoming a bottleneck.
- Analytics pipeline: the business team needs real-time dashboards showing rides/hour, revenue, and driver utilization per city.
- Peak handling: Friday/Saturday nights see 5× average load.
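The surge-pricing requirement above can be sketched as a small pure function. Everything here is illustrative: the function name `surge_multiplier`, the linear ramp, and the 3.0× cap are assumptions, not part of the spec; only the per-zone supply/demand ratio and the 60-second recalculation cadence come from the problem statement.

```python
def surge_multiplier(open_requests: int, available_drivers: int,
                     base: float = 1.0, cap: float = 3.0) -> float:
    """Price multiplier derived from one zone's demand/supply ratio.

    Clamped to [base, cap] so fares never drop below the base price
    or spike unboundedly. Thresholds are illustrative.
    """
    if available_drivers == 0:
        return cap  # no supply at all: charge the maximum multiplier
    ratio = open_requests / available_drivers
    # Linear ramp: 1.0x at ratio <= 1, then +0.5x per unit of excess demand.
    multiplier = base + 0.5 * max(0.0, ratio - 1.0)
    return min(round(multiplier, 2), cap)
```

A recalculation job would call this once per zone every 60 seconds and write the result into a cache with a TTL slightly above 60 s, so a missed run degrades gracefully to the last known multiplier rather than to no price at all.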

Design a system that scales horizontally across cities while keeping the matching fast and pricing accurate.

What You'll Learn

Expand to 20 cities, add surge pricing, and handle 1M rides/day. Build this architecture under realistic production constraints, then validate the tradeoffs in the design lab simulation.


Constraints

- Cities: 20
- Daily rides: ~1,000,000
- Surge recalculation: every 60 seconds per zone
- Active drivers (concurrent): ~60,000
- Peak multiplier: 5× average
- Analytics latency: < 5 minutes
- Availability target: 99.95%

How to Approach

Clarify requirements: ZipRide is expanding to 20 cities. Demand peaks at commute hours and on weekend nights. Driver dispatch is becoming slow because matching scans too many drivers.

Estimate scale: 1M rides/day across 20 cities. ~60k concurrent drivers. Peak at 5x average on Fri/Sat nights.
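The scale estimate above reduces to a quick back-of-envelope calculation (the even-per-city split is a simplifying assumption; real traffic will be skewed toward the largest cities):

```python
DAILY_RIDES = 1_000_000
SECONDS_PER_DAY = 86_400
PEAK_MULTIPLIER = 5
CITIES = 20

avg_rps = DAILY_RIDES / SECONDS_PER_DAY        # ~11.6 ride requests/sec
peak_rps = avg_rps * PEAK_MULTIPLIER           # ~58 requests/sec on Fri/Sat nights
rides_per_city_per_day = DAILY_RIDES / CITIES  # 50,000, assuming an even spread
```

Even at peak, ~58 requests/sec is modest for a load-balanced API tier; the hard part is the per-request geospatial matching work, which is why it gets its own service below.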

Pick components:

  • Add a load balancer for horizontal API scaling
  • Message queue for async trip dispatch (decouple matching from API response)
  • Separate matching service that handles geospatial queries
  • Add monitoring to track dispatch latency
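The queue-based dispatch in the component list can be sketched with an in-process queue as a stand-in for a real broker (Kafka, SQS, etc.). The names `handle_ride_request` and `matching_worker` are illustrative; the point is only the shape: the API enqueues and returns immediately, and a separate consumer does the slow matching.

```python
import queue

# Stand-in for a durable message broker; a real system would not
# use an in-memory queue for dispatch.
dispatch_queue: "queue.Queue[dict]" = queue.Queue()

def handle_ride_request(rider_id: str, pickup: tuple) -> dict:
    """API handler: enqueue the match job and respond immediately."""
    dispatch_queue.put({"rider_id": rider_id, "pickup": pickup})
    return {"status": "looking_for_driver"}  # rider gets instant feedback

def matching_worker() -> None:
    """Background consumer: pops jobs and runs the (slow) geospatial match."""
    while True:
        job = dispatch_queue.get()
        if job is None:      # sentinel for shutdown
            break
        # match_driver(job) would do the radius search + ETA ranking here.
        dispatch_queue.task_done()
```

The decoupling is what keeps API latency flat during Friday-night peaks: enqueue time is constant even when matching backs up, and the queue depth itself becomes a useful dispatch-latency alarm.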

Key tradeoffs to discuss:

  • Matching is CPU-intensive (geospatial search across thousands of drivers) -- isolate it
  • Async dispatch: respond to rider immediately (looking for driver), match in background
  • City-based data partitioning: drivers and trips in NYC do not interact with LA
  • Fan-out: one trip request -> query all available drivers in radius -> rank by ETA -> assign
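The fan-out step in the last bullet can be sketched as a radius filter plus an ETA ranking. This is a minimal version under stated assumptions: `match_driver` and `haversine_km` are illustrative names, ETA is straight-line distance at a flat average speed, and the 3 km radius is arbitrary; production systems use a geospatial index (geohash/S2 cells) instead of scanning every driver, and road-network ETAs instead of great-circle distance.

```python
import math

def haversine_km(a: tuple, b: tuple) -> float:
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def match_driver(pickup: tuple, drivers: dict,
                 radius_km: float = 3.0, speed_kmh: float = 30.0):
    """Fan-out: filter drivers within the radius, rank by ETA, pick the best.

    `drivers` maps driver id -> (lat, lon). Returns the id of the
    lowest-ETA driver, or None if nobody is in range.
    """
    candidates = []
    for driver_id, pos in drivers.items():
        dist = haversine_km(pickup, pos)
        if dist <= radius_km:
            eta_min = dist / speed_kmh * 60
            candidates.append((eta_min, driver_id))
    if not candidates:
        return None
    return min(candidates)[1]  # driver with the lowest ETA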


Reference Solution

Scale the API horizontally with a load balancer. Extract matching into a dedicated service -- geospatial queries are CPU-heavy and need independent scaling. Decouple dispatch via message queue so the API responds immediately while matching happens asynchronously. Monitoring tracks dispatch latency -- the key SLA metric for rideshare.