EasySocial Feed · Part 1

Social Feed 1 - MVP Launch

Databases · API Design · Caching · Storage

Problem Statement

Chirper is a new social media startup building a Twitter-like platform focused on short-form text posts (max 300 characters). For the MVP, the app needs:

  • Users can sign up, create a profile, and follow other users.
  • Users can post short text messages ("chirps").
  • A home timeline shows chirps from people you follow, sorted by newest first.
  • Users can like and reply to chirps.
  • Profile pages show a user's chirps and follower/following counts.

The founding team is targeting college campuses and expects 100,000 users in the first month, with about 20% being daily active.

What You'll Learn

Build a simple social feed app with posts, follows, and a chronological timeline. You'll design this architecture under realistic production constraints, then validate the tradeoffs in the design lab simulation.


Constraints

  • Registered users: 100,000
  • Daily active users: ~20,000
  • Avg posts per user/day: 3
  • Avg follows per user: 50
  • Post size: ≤ 300 characters
  • Timeline load time: < 500 ms

Interview-Ready Approach

1) Clarify Scope and SLOs

  • Problem statement: Build a simple social feed app with posts, follows, and a chronological timeline.
  • Design for a peak load target around 100 RPS (including burst headroom).
  • Registered users: 100,000
  • Daily active users: ~20,000
  • Avg posts per user/day: 3
  • Avg follows per user: 50
  • Post size: ≤ 300 characters

2) Capacity Planning Method

  • Convert traffic and growth constraints into request rate, storage growth, and concurrency budgets.
  • Keep at least 2-3x safety margin per tier (ingress, compute, storage, async workers).
  • Reserve explicit latency budgets per hop so p95 can be defended in review.
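The conversion from constraints to budgets can be sketched as back-of-envelope math. The read frequency (~20 timeline loads per active user per day), 10x peak factor, and ~500 bytes per stored post are assumptions for illustration; only the DAU and posts-per-user figures come from the constraints above.

```python
# Back-of-envelope capacity math for the stated constraints.
DAU = 20_000
POSTS_PER_USER_PER_DAY = 3
SECONDS_PER_DAY = 86_400

# Write path: average chirps created per second.
writes_per_day = DAU * POSTS_PER_USER_PER_DAY            # 60,000 posts/day
avg_write_rps = writes_per_day / SECONDS_PER_DAY         # ~0.7 RPS

# Read path: ASSUMPTION - each active user loads the timeline ~20 times/day.
TIMELINE_LOADS_PER_USER_PER_DAY = 20
avg_read_rps = DAU * TIMELINE_LOADS_PER_USER_PER_DAY / SECONDS_PER_DAY  # ~4.6 RPS

# Peak factor: ASSUMPTION - traffic concentrates ~10x over the daily average.
peak_rps = (avg_write_rps + avg_read_rps) * 10           # ~53 RPS

# Storage: 300 chars of text plus ~200 bytes of metadata per post (assumed).
BYTES_PER_POST = 500
storage_per_month_mb = writes_per_day * 30 * BYTES_PER_POST / 1e6  # ~900 MB/month
```

A ~53 RPS peak with a 2x safety margin lands near the 100 RPS target stated above, and storage growth under 1 GB/month confirms a single primary database is comfortable at this scale.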

3) Architecture Decisions

  • Databases: Define a clear system-of-record and design read/write paths separately before adding optimizations.
  • API Design: Standardize API boundaries, idempotency keys, pagination, and error contracts first.
  • Caching: Put cache on hot read paths first and pick cache-aside or write-through explicitly.
  • Storage: Use object storage for large blobs and keep metadata/authorization separate in the API tier.
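The cache-aside choice mentioned above can be sketched minimally. All names here (`CacheAside`, `db_fetch`) are hypothetical; a production version would use Redis rather than an in-process dict, but the read/miss/fill/invalidate flow is the same.

```python
import time

# Minimal cache-aside sketch: the API tier checks the cache first, falls back
# to the system of record on a miss, and writes the result back with a TTL so
# staleness is bounded even if an invalidation hook is missed.
class CacheAside:
    def __init__(self, db_fetch, ttl_seconds=60):
        self.db_fetch = db_fetch           # function: key -> value (system of record)
        self.ttl = ttl_seconds
        self._store = {}                   # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                # cache hit
        value = self.db_fetch(key)         # cache miss: read from the DB
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

    def invalidate(self, key):
        self._store.pop(key, None)         # called from the write path after updates
```

The write path calls `invalidate` after updating the database, which is the "invalidation hooks" half of the TTL + invalidation strategy listed under reliability.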

4) Reliability and Failure Strategy

  • Use strong write constraints (transactions or conditional writes) and explicit backup/restore strategy.
  • Apply strict input validation and backward-compatible versioning.
  • Bound staleness with TTL + invalidation hooks for critical entities.
  • Enforce lifecycle policies, retention tiers, and checksum validation.

5) Validation Plan

  • Run one peak-load test, one dependency-degradation test, and one failover test.
  • Verify idempotency for all retried writes and async consumers.
  • Track user-facing SLOs first: p95 latency, error rate, and successful throughput.

6) Trade-offs to Call Out in Interviews

  • Databases: SQL gives stronger transactional guarantees; NoSQL often gives better write scaling and flexibility.
  • API Design: Rich APIs improve developer speed but can create long-term compatibility burden.
  • Caching: Higher hit rate cuts latency/cost, but stale data and invalidation bugs become primary risks.
  • Storage: Object storage is cheap and durable, but random low-latency reads are weaker than databases/caches.

Practical Notes

  • A fan-out-on-read approach (pull model) works fine at low scale: query the follows list at read time and fetch those users' posts.
  • Consider indexing the posts table by (author_id, created_at) for efficient profile page queries.
  • A simple cache in front of hot user profiles can reduce database load significantly.
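The pull-model timeline and the suggested `(author_id, created_at)` index can be sketched together. The schema is illustrative (SQLite in memory for brevity), but the join shape is what a fan-out-on-read home timeline actually runs.

```python
import sqlite3

# Pull-model ("fan-out-on-read") timeline: at read time, join follows against
# posts and sort by recency. Schema and sample data are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE follows (follower_id INTEGER, followee_id INTEGER);
CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER,
                    body TEXT, created_at INTEGER);
-- The index recommended above: efficient per-author, time-ordered lookups.
CREATE INDEX idx_posts_author_time ON posts (author_id, created_at);
""")
conn.executemany("INSERT INTO follows VALUES (?, ?)", [(1, 2), (1, 3)])
conn.executemany(
    "INSERT INTO posts (author_id, body, created_at) VALUES (?, ?, ?)",
    [(2, "hello", 100), (3, "world", 200), (4, "not followed", 300)],
)

# Home timeline for user 1: newest first, bounded page size for pagination.
timeline = conn.execute("""
    SELECT p.author_id, p.body, p.created_at
    FROM posts p
    JOIN follows f ON f.followee_id = p.author_id
    WHERE f.follower_id = ?
    ORDER BY p.created_at DESC
    LIMIT 50
""", (1,)).fetchall()
# timeline -> [(3, 'world', 200), (2, 'hello', 100)]
```

With ~50 follows per user and this scale, the join stays cheap; fan-out-on-write only becomes worth its complexity when read volume or follow counts grow much larger.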


Reference Solution

Why This Solution Works

Request path: The solution keeps ingress, service logic, and stateful dependencies separated so each layer can scale independently.

Reference flow: Web Clients -> API Gateway -> API Service -> Primary NoSQL DB -> Redis Cache -> Object Storage

Design strengths

  • Cache sits on the read path to absorb repeated queries and keep DB pressure stable.

Interview defense

  • This design makes bottlenecks explicit (ingress, core compute, persistence, async workers).
  • It supports progressive scaling without re-architecting the core request path.
  • It keeps correctness-sensitive state changes in durable systems while offloading background work asynchronously.