The bug that made me understand the event loop
A team I worked with shipped a streaming chat feature. The model was fast, the network was fine, but the UI felt broken — typing lagged, the cursor stuttered, and on mobile the whole page froze for 200-400ms at a time.
The fix had nothing to do with the model or the network. It was a scheduling problem. Every token chunk triggered a synchronous DOM update, a microtask-based syntax highlighter, and a scroll calculation — all before the browser ever got a chance to paint.
That is the event loop in production: not a trivia question, but the reason your interface feels fast or broken.
The actual execution model
Most explanations give you "microtasks before macrotasks" and stop. Here is the full picture of what the browser does in a single iteration:
┌─────────────────────────────────────────────┐
│ 1. Pick one task from the task queue │
│ (setTimeout, click handler, fetch cb) │
│ │
│ 2. Execute it (the call stack runs) │
│ │
│ 3. Drain the entire microtask queue │
│ (Promise .then, queueMicrotask, │
│ MutationObserver) │
│ ⚠️ If a microtask enqueues another │
│ microtask, that runs too — before │
│ the browser can paint │
│ │
│ 4. If ~16ms have passed (or the browser │
│ decides it is time): │
│ a. Run requestAnimationFrame callbacks │
│ b. Calculate styles, layout, paint │
│ c. Composite and display the frame │
│ │
│ 5. If the browser is idle: │
│ Run requestIdleCallback work │
│ │
│ Loop back to 1. │
└─────────────────────────────────────────────┘
The critical insight: steps 1-3 are blocking. If your JavaScript in steps 1-3 takes 80ms, the user sees nothing for 80ms — no paint, no input response, nothing.
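You can verify the task/microtask ordering from steps 1-3 directly. A minimal sketch, runnable in a browser console or Node:

```javascript
// Minimal, runnable illustration of steps 1-3: the current task
// finishes, then the microtask queue drains completely, and only
// then does the event loop pick up the next task (the timer).
const order = []

setTimeout(() => order.push('step 1: next task (setTimeout)'), 0)

Promise.resolve()
  .then(() => order.push('step 3: microtask (.then #1)'))
  .then(() => order.push('step 3: microtask (.then #2)'))

queueMicrotask(() => order.push('step 3: microtask (queueMicrotask)'))

order.push('step 2: current task (synchronous)')

// Final order:
// step 2 (sync) → .then #1 → queueMicrotask → .then #2 → setTimeout
```

Note that the second `.then` lands after the `queueMicrotask` callback: it is only enqueued once the first `.then` resolves, by which point `queueMicrotask` is already ahead of it in the queue.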
A concrete jank scenario
Here is code that looks reasonable but creates visible jank:
// ❌ This blocks paint on every chunk
eventSource.onmessage = (event) => {
  const token = JSON.parse(event.data).token

  // Synchronous DOM update
  chatContainer.textContent += token

  // This creates a microtask chain — all of it runs
  // before the browser can paint
  Promise.resolve()
    .then(() => highlightSyntax(chatContainer))
    .then(() => updateTokenCount())
    .then(() => {
      chatContainer.scrollTop = chatContainer.scrollHeight
    })
}
Every SSE message triggers: task (onmessage) → synchronous DOM write → three microtasks that all flush before paint. If messages arrive every 10-30ms, the browser may not paint for hundreds of milliseconds.
Here is the fix:
// ✅ Buffer chunks, yield to the browser
let buffer = ''
let rafScheduled = false

eventSource.onmessage = (event) => {
  buffer += JSON.parse(event.data).token
  if (!rafScheduled) {
    rafScheduled = true
    requestAnimationFrame(() => {
      // Flush buffered text in one DOM write
      chatContainer.textContent += buffer
      buffer = ''
      rafScheduled = false

      // Scroll only if user is near bottom
      const distanceFromBottom =
        chatContainer.scrollHeight -
        chatContainer.scrollTop -
        chatContainer.clientHeight
      if (distanceFromBottom < 80) {
        chatContainer.scrollTop = chatContainer.scrollHeight
      }
    })
  }
}
// Expensive highlighting runs only when the browser is idle.
// Call this after streaming completes, not on every chunk.
const idleHighlight = () => {
  requestIdleCallback(() => {
    highlightSyntax(chatContainer)
  })
}
Why this works:
- Multiple SSE messages arrive between frames. The buffer accumulates them.
- requestAnimationFrame fires once per frame (~60fps), so we do one DOM write per ~16ms instead of one per chunk.
- Scroll is only forced when the user is actually near the bottom.
- Syntax highlighting is deferred to idle time — it does not need to be synchronous.
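The buffer-and-flush pattern generalizes beyond chat. Here is a sketch of a reusable helper — createCoalescedFlush is an illustrative name, not a standard API; the schedule function is injectable so the same code runs outside the browser:

```javascript
// Hypothetical helper: coalesce many rapid events into a single
// flush per frame. `schedule` defaults to requestAnimationFrame in
// the browser; any "call me back once, later" function works.
function createCoalescedFlush(flush, schedule) {
  schedule =
    schedule ??
    globalThis.requestAnimationFrame?.bind(globalThis) ??
    ((cb) => setTimeout(cb, 16))
  let buffer = []
  let scheduled = false
  return (item) => {
    buffer.push(item)
    if (scheduled) return
    scheduled = true
    schedule(() => {
      const batch = buffer
      buffer = []
      scheduled = false
      flush(batch) // one write for everything that arrived this frame
    })
  }
}

// Usage sketch (names assumed from the example above):
// const push = createCoalescedFlush((tokens) => {
//   chatContainer.textContent += tokens.join('')
// })
// eventSource.onmessage = (e) => push(JSON.parse(e.data).token)
```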
Measuring the difference
You can see this in the Chrome DevTools Performance panel. Record a streaming interaction.
What to look for in the flame chart:
Long tasks (>50ms)
├── If yellow (scripting): your JS is blocking
├── If purple (layout/style): your DOM writes are triggering expensive recalc
└── If green (paint): your repaints are too large or too frequent
Key metrics:
- Interaction to Next Paint (INP): should be < 200ms
- Total Blocking Time (TBT): sum of blocking time above 50ms per task
- Frame rate: should stay near 60fps during streaming
A common finding: the individual task is fast (5ms), but the microtask tail adds 40ms, and this happens 30 times per second. Total blocking time explodes even though no single operation looks expensive.
The microtask trap in detail
This is the subtlety most articles skip. Consider:
// This creates an infinite microtask loop — the browser NEVER paints
function floodMicrotasks() {
  Promise.resolve().then(() => {
    // do some work
    floodMicrotasks() // enqueues another microtask
  })
}
// The browser drains the microtask queue completely before moving
// to step 4 (rendering). If microtasks keep adding microtasks,
// rendering never happens.
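A bounded variant demonstrates the drain rule without hanging anything: a timer scheduled before the chain still fires only after every microtask has run.

```javascript
// Bounded version of the flood: a timer scheduled *first* fires only
// after 1,000 chained microtasks have drained, because each
// microtask enqueues the next before the queue can empty.
let microtasksRun = 0
let timerSawFullChain = false

setTimeout(() => {
  // This task runs only once the entire microtask chain has drained
  timerSawFullChain = microtasksRun === 1000
}, 0)

function chain(n) {
  if (n === 0) return
  Promise.resolve().then(() => {
    microtasksRun++
    chain(n - 1)
  })
}
chain(1000)
// timerSawFullChain becomes true: all 1,000 ran before the timer
```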
You would never write this intentionally, but it happens accidentally in recursive Promise chains, especially when processing streamed data:
// ❌ Accidentally recursive microtask chain
async function processChunks(reader) {
  const { done, value } = await reader.read() // microtask on resolve
  if (done) return
  updateDOM(value) // synchronous
  await parseMarkdown() // microtask
  await updateMetrics() // microtask
  return processChunks(reader) // another microtask chain
}
// Each iteration adds to the microtask queue before paint
// ✅ Yield to the browser between chunks
async function processChunks(reader) {
  const { done, value } = await reader.read()
  if (done) return
  updateDOM(value)

  // Yield to the browser — setTimeout(0) creates a macrotask,
  // so the browser can paint between chunks
  await new Promise((resolve) => setTimeout(resolve, 0))
  return processChunks(reader)
}
The setTimeout(0) trick works because it moves the next iteration to the task queue, which means the browser gets a chance to run steps 4-5 (render and idle callbacks) before picking up the next chunk.
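Newer Chromium browsers also expose scheduler.yield(), which does the same job but resumes the continuation at higher priority than a timer. A hedged wrapper that falls back to the setTimeout(0) trick everywhere else:

```javascript
// Small yield helper: prefer scheduler.yield() where it exists
// (recent Chromium browsers), otherwise fall back to a setTimeout(0)
// macrotask so the browser gets a render opportunity either way.
function yieldToMain() {
  if (typeof globalThis.scheduler?.yield === 'function') {
    return globalThis.scheduler.yield()
  }
  return new Promise((resolve) => setTimeout(resolve, 0))
}

// Usage in a chunk loop:
// while (true) {
//   const { done, value } = await reader.read()
//   if (done) break
//   updateDOM(value)
//   await yieldToMain() // render opportunity between chunks
// }
```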
When to use which scheduling API
┌───────────────────────┬────────────────────────────────────────┐
│ API                   │ When to use                            │
├───────────────────────┼────────────────────────────────────────┤
│ Synchronous           │ Tiny state updates (<1ms)              │
│ queueMicrotask()      │ Must run before next render            │
│ requestAnimationFrame │ Visual updates, DOM writes, scroll     │
│ setTimeout(fn, 0)     │ Yielding to browser between batches    │
│ requestIdleCallback   │ Non-urgent: analytics, prefetch,       │
│                       │ syntax highlighting                    │
│ Web Worker            │ Heavy compute: parsing, embeddings,    │
│                       │ search indexing                        │
│ scheduler.postTask()  │ Priority-based scheduling (newer API)  │
└───────────────────────┴────────────────────────────────────────┘
The key principle: the more urgent the visual feedback, the closer to the render step it should be scheduled. The heavier the computation, the further away from the main thread it should run.
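Applied to the common "big array on the main thread" case, the principle looks like this — processInBatches is an illustrative sketch, not a library function:

```javascript
// Illustrative sketch: do heavy array work in batches, yielding a
// macrotask between batches so the browser can paint and handle
// input. Batch size trades throughput against responsiveness.
async function processInBatches(items, work, batchSize = 100) {
  for (let i = 0; i < items.length; i += batchSize) {
    const end = Math.min(i + batchSize, items.length)
    for (let j = i; j < end; j++) {
      work(items[j])
    }
    // Macrotask boundary: render and idle steps can run here
    await new Promise((resolve) => setTimeout(resolve, 0))
  }
}

// await processInBatches(tokens, renderToken, 200)
```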
Real example: debounced search with proper scheduling
A search autocomplete has three competing concerns: respond to keystrokes immediately, debounce the API call, and render results without blocking typing.
function createSearch(renderResults) {
  let controller = null
  let debounceTimer = null

  return (query) => {
    // 1. Cancel previous in-flight request
    controller?.abort()
    clearTimeout(debounceTimer)

    if (!query.trim()) {
      renderResults([])
      return
    }

    // 2. Debounce: wait 200ms of silence before fetching
    debounceTimer = setTimeout(async () => {
      controller = new AbortController()
      try {
        const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`, {
          signal: controller.signal,
        })
        const data = await res.json()
        // 3. Render in the next animation frame to avoid
        // layout thrashing between search results
        requestAnimationFrame(() => renderResults(data.results))
      } catch (err) {
        if (err.name !== 'AbortError') {
          requestAnimationFrame(() => renderResults([]))
        }
      }
    }, 200)
  }
}
Notice how each concern maps to a different scheduling mechanism:
- Cancellation happens synchronously (immediate, no delay)
- The API call is debounced with a macrotask (setTimeout)
- Rendering is deferred to the next animation frame
The Worker escape hatch
Some work simply does not belong on the main thread. If you are doing any of these in a streaming UI, move them to a Worker:
// main.ts
const worker = new Worker(new URL('./search-worker.ts', import.meta.url), {
  type: 'module',
})

worker.postMessage({ type: 'index', documents: allDocs })

worker.onmessage = (event) => {
  if (event.data.type === 'results') {
    requestAnimationFrame(() => {
      renderSearchResults(event.data.results)
    })
  }
}
// search-worker.ts
import { buildIndex, search } from './search-engine'

let index = null

self.onmessage = (event) => {
  if (event.data.type === 'index') {
    // Heavy work: runs off main thread
    index = buildIndex(event.data.documents)
  }
  if (event.data.type === 'query') {
    const results = search(index, event.data.query)
    self.postMessage({ type: 'results', results })
  }
}
Use Workers for: search indexing, markdown parsing of large documents, image processing, local embedding inference, JSON processing of large payloads.
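One refinement worth knowing: postMessage is fire-and-forget. A small wrapper can correlate each reply with its request by id so callers get a Promise back (callWorker is a made-up helper; libraries such as Comlink do this more completely):

```javascript
// Hypothetical request/response wrapper around postMessage. Each
// request carries an id; the matching reply resolves the promise.
// Works with any object exposing postMessage and addEventListener.
let nextRequestId = 0

function callWorker(worker, payload) {
  const id = nextRequestId++
  return new Promise((resolve) => {
    const onMessage = (event) => {
      if (event.data.id !== id) return // not our reply
      worker.removeEventListener('message', onMessage)
      resolve(event.data.result)
    }
    worker.addEventListener('message', onMessage)
    worker.postMessage({ id, ...payload })
  })
}

// Worker side would echo the id back:
// self.onmessage = (e) => {
//   if (e.data.type === 'query') {
//     const result = search(index, e.data.query)
//     self.postMessage({ id: e.data.id, result })
//   }
// }
```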
How to audit your own app
Open Chrome DevTools → Performance → record a user interaction (typing, scrolling during streaming, clicking between tabs). Look for:
- Long Tasks bar (red markers above the flame chart) — anything over 50ms
- Input delay — time between the physical click/keystroke and your handler running
- Microtask tails — the yellow blocks after your main function that represent .then() chains
- Layout thrashing — purple blocks (Recalculate Style / Layout) interleaved with DOM writes
A healthy streaming interaction looks like: short scripting blocks (5-15ms) separated by render opportunities. An unhealthy one looks like: one continuous yellow block of 100-300ms with no gaps.
The concepts that connect from here
The event loop is the foundation for understanding:
- Why debouncing and race conditions matter in search UX
- How streaming transports interact with rendering
- Why Promise chains can accidentally block the UI
LLM-friendly summary
An explanation of the JavaScript event loop that connects microtasks, rendering, and async queues to streaming AI interfaces and UI jank.