AI for Frontend · Intermediate · 50 minutes

Streaming AI Chat UI

Design a production streaming chat interface with token buffering, cancellation, markdown rendering, and error recovery.

LLM-friendly summary

An intermediate AI-for-frontend problem about building a production streaming chat UI with SSE, AbortController, token buffering, and robust error recovery.

Scenario

A B2B SaaS product is adding an AI assistant. Users will have multi-turn conversations, expect real-time streaming responses, and need to cancel or regenerate mid-stream without data loss.

What you need to design

  1. Design the message state machine: sending, streaming, complete, error, cancelled.
  2. Implement token buffering with requestAnimationFrame for smooth rendering.
  3. Handle cancellation via AbortController at every layer.
  4. Support markdown rendering mid-stream without layout thrashing.
  5. Plan error recovery, retry, and conversation persistence.
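Step 1 can be sketched as a discriminated union plus an explicit transition table. The state names follow the list above; the event names (`firstToken`, `done`, and so on) are assumptions chosen for illustration:

```typescript
type MessageStatus = "sending" | "streaming" | "complete" | "error" | "cancelled";
type MessageEvent = "firstToken" | "done" | "fail" | "cancel" | "retry";

// Legal transitions; anything not listed is rejected, so the UI can never
// reach an impossible state (e.g. cancelling an already-complete message).
const transitions: Record<MessageStatus, Partial<Record<MessageEvent, MessageStatus>>> = {
  sending:   { firstToken: "streaming", fail: "error", cancel: "cancelled" },
  streaming: { done: "complete", fail: "error", cancel: "cancelled" },
  complete:  {},
  error:     { retry: "sending" },
  cancelled: { retry: "sending" },
};

function transition(status: MessageStatus, event: MessageEvent): MessageStatus {
  const next = transitions[status][event];
  if (!next) throw new Error(`Illegal transition: ${status} -> ${event}`);
  return next;
}
```

Keeping the table as data (rather than scattered `if` checks) makes the machine easy to audit for completeness, which is one of the evaluation criteria below.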
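For step 2, one minimal sketch of token buffering: tokens often arrive faster than the browser can usefully repaint, so queue them and flush once per animation frame. The `raf` fallback to `setTimeout` is an assumption to keep the sketch runnable outside a browser (e.g. in unit tests):

```typescript
// Schedule a callback on the next animation frame, or ~16ms later when
// requestAnimationFrame is unavailable (e.g. Node during testing).
const raf: (cb: () => void) => void =
  typeof requestAnimationFrame !== "undefined"
    ? (cb) => requestAnimationFrame(() => cb())
    : (cb) => setTimeout(cb, 16);

class TokenBuffer {
  private pending: string[] = [];
  private scheduled = false;
  constructor(private render: (text: string) => void) {}

  push(token: string): void {
    this.pending.push(token);
    if (!this.scheduled) {
      this.scheduled = true;
      raf(() => this.flush());
    }
  }

  private flush(): void {
    this.scheduled = false;
    const text = this.pending.join("");
    this.pending = [];
    if (text) this.render(text); // one DOM append per frame, not per token
  }
}
```

The point of the design is that rendering cost scales with frame rate, not token rate, which keeps typing-style output smooth under fast streams.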
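For step 3, a sketch of the lowest layer: reading a streamed response body while honoring an `AbortSignal`, and preserving the partial text on cancel so nothing is lost. In a real app the `ReadableStream` would come from `fetch(url, { signal }).body`; the return shape here is an assumption:

```typescript
async function readStream(
  stream: ReadableStream<Uint8Array>,
  signal: AbortSignal,
  onToken: (text: string) => void,
): Promise<{ text: string; cancelled: boolean }> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  // Cancelling the reader unblocks a pending read() promptly on abort.
  const onAbort = () => void reader.cancel().catch(() => {});
  signal.addEventListener("abort", onAbort, { once: true });
  try {
    while (true) {
      if (signal.aborted) return { text, cancelled: true };
      const { done, value } = await reader.read();
      if (done) return { text, cancelled: signal.aborted };
      const chunk = decoder.decode(value, { stream: true });
      text += chunk;
      onToken(chunk);
    }
  } finally {
    signal.removeEventListener("abort", onAbort);
  }
}
```

Returning the accumulated text (instead of throwing) on cancellation is what lets the UI keep the partial message and offer "regenerate" without data loss.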
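For step 4, one way to avoid re-parsing the entire transcript on every token: split the streamed text into "settled" blocks (ended by a blank line, safe to parse once and cache) and a single trailing "open" block that is re-rendered as it grows. The markdown renderer itself is out of scope and assumed to exist; only the splitting heuristic is sketched:

```typescript
// Split streamed markdown into blocks that will no longer change ("settled")
// and the still-growing trailing block ("open"). Only the open block needs
// re-parsing on each flush, which avoids layout thrashing on long replies.
function splitSettled(text: string): { settled: string[]; open: string } {
  const blocks = text.split(/\n\n+/);
  const open = blocks.pop() ?? "";
  return { settled: blocks, open };
}
```

A blank-line heuristic is deliberately simple; fenced code blocks need extra care, since a blank line inside an unclosed fence does not actually end the block.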

Concepts

SSE · AbortController · Token Buffering · State Machines · Markdown Streaming

Skills

Streaming Architecture · Real-Time UX · Error Recovery · Accessibility

What good solutions are evaluated on

  - Streaming transport and buffering design
  - State machine completeness
  - Cancellation and error recovery quality
  - Scroll, rendering, and accessibility handling

Ready to practice this yourself?

Open the interactive AlgoReason workspace to sketch the architecture, write notes, and submit for AI evaluation.
