Guided Lab Brief

Async Processing with Message Queues

Decouple your services with a message queue and process tasks asynchronously with workers.

Overview

Not everything needs an immediate response.

You will build 5 architecture steps that model production dependencies.

You will run 2 failure experiments to observe bottlenecks and recovery behavior.

Success target: API response time under 50 ms, the queue does not grow unbounded, and workers keep up with incoming tasks.

Learning Objectives

  • Understand the producer-consumer pattern with message queues
  • Know the difference between at-most-once, at-least-once, and exactly-once delivery
  • Learn how dead letter queues handle failed messages
  • Experience backpressure when consumers can't keep up
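The producer-consumer pattern from the first objective can be sketched in a few lines. This is a minimal in-process model, assuming the standard library's `queue.Queue` as a stand-in for a real broker; the task names are hypothetical.

```python
import queue
import threading

# In-memory queue standing in for a real message broker.
task_queue = queue.Queue()

def producer(n_tasks):
    """API-handler role: enqueue work and return immediately."""
    for i in range(n_tasks):
        task_queue.put(f"photo-{i}")

def worker(results):
    """Consumer role: pull tasks off the queue and process them."""
    while True:
        task = task_queue.get()
        if task is None:              # sentinel: shut this worker down
            task_queue.task_done()
            break
        results.append(task)          # stand-in for real processing
        task_queue.task_done()

results = []
threads = [threading.Thread(target=worker, args=(results,)) for _ in range(3)]
for t in threads:
    t.start()

producer(10)
task_queue.join()                     # block until every task is processed
for _ in threads:
    task_queue.put(None)              # one shutdown sentinel per worker
for t in threads:
    t.join()

print(len(results))                   # all 10 tasks processed
```

The key decoupling property is visible here: `producer` returns as soon as the tasks are enqueued, regardless of how long the workers take.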

Experiments

  1. Change delivery guarantee to at_most_once to see what happens with lossy messaging
  2. Reduce worker instances to 1 and increase processing time to 5 seconds to create a backlog
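Before running the first experiment, it helps to see why acknowledgement timing is the whole difference between at-most-once and at-least-once. Below is a toy broker loop, not any real broker's API; `deliver`, `process`, and `crash_on` are names invented for this sketch.

```python
def process(msg, crash):
    """Stand-in for real work; optionally simulates a mid-processing crash."""
    if crash:
        raise RuntimeError("worker crashed mid-processing")
    return f"processed {msg}"

def deliver(messages, mode, crash_on=None):
    """Toy broker loop: 'mode' decides when a message is acknowledged
    (removed from the queue) relative to processing it."""
    pending = list(messages)
    done = []
    while pending:
        msg = pending[0]
        crash = (msg == crash_on)
        if mode == "at_most_once":
            pending.pop(0)            # ack BEFORE processing
            try:
                done.append(process(msg, crash))
            except RuntimeError:
                crash_on = None       # worker restarts; message already gone
        else:                         # at_least_once
            try:
                done.append(process(msg, crash))
                pending.pop(0)        # ack AFTER processing
            except RuntimeError:
                crash_on = None       # message stays queued; redelivered
    return done

msgs = ["a", "b", "c"]
print(len(deliver(msgs, "at_most_once", crash_on="b")))   # 2: "b" was lost
print(len(deliver(msgs, "at_least_once", crash_on="b")))  # 3: "b" redelivered
```

Note the flip side: at-least-once can also deliver duplicates (a crash after processing but before the ack means redelivery of already-done work), which is why consumers in that mode should be idempotent.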

Failure Modes to Trigger

  • Trigger: Change delivery guarantee to at_most_once to see what happens with lossy messaging

    Observe: At-most-once delivery means messages can be lost if a worker crashes mid-processing. The user's photo might never be processed, and they'll never know. Some uploads just... disappear.

  • Trigger: Reduce worker instances to 1 and increase processing time to 5 seconds to create a backlog

    Observe: 1 worker × 10 concurrency = 10 tasks in flight, each taking 5 seconds, for a throughput of ~2 tasks/sec. With 200 tasks/sec arriving but only ~2/sec completing, the queue grows endlessly. Processing backlog balloons - users wait hours for their photos.
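The backlog arithmetic above is worth checking back-of-envelope. A quick sketch, using the experiment's stated values (1 worker, concurrency 10, 5 s per task, 200 tasks/sec arriving):

```python
# Queue growth when arrival rate exceeds service capacity.
workers = 1
concurrency = 10          # tasks in flight per worker
seconds_per_task = 5
arrival_rate = 200        # tasks/sec entering the queue

# Throughput: tasks in flight divided by how long each one occupies a slot.
throughput = workers * concurrency / seconds_per_task  # tasks/sec completed
growth = arrival_rate - throughput                     # net backlog growth/sec

print(throughput)         # 2.0 tasks/sec
print(growth)             # 198.0 tasks added to the backlog every second
print(growth * 3600)      # ~712,800 queued tasks after one hour
```

The stability condition is simply throughput ≥ arrival rate; anything less and the queue grows without bound, which is exactly the unbounded-growth failure the success target rules out.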