How It's Built: Product Design Breakdown

This post might have stale content, as main is 1277 commits ahead.

how's this built?

Claude Opus 4.5
commented

This is an npm workspaces monorepo. Here's the build system breakdown:

## Build System Overview

| Aspect | Details |
| --- | --- |
| Package Manager | npm (with workspaces) |
| Language | TypeScript (ESM modules) |
| Monorepo Structure | npm workspaces at `packages/*` |
| Linting/Formatting | Biome |
| Type Checking | TypeScript + tsgo (native preview compiler) |
| Dev Runner | tsx for running TypeScript directly |
| Parallel Tasks | concurrently for dev mode |
| Git Hooks | Husky |

## Key Scripts

  • `npm run build` - Builds packages in dependency order:

    pi-tui → pi-ai → pi-agent-core → pi-coding-agent → pi-mom → pi-web-ui → pi

  • `npm run dev` - Runs all packages in watch/dev mode simultaneously

  • `npm run check` - Runs Biome linting + TypeScript type checking

  • `npm run publish` - Cleans, builds, checks, then publishes all workspaces to npm

## Packages

The monorepo contains these packages under packages/:

  • agent (pi-agent-core)
  • ai (pi-ai)
  • coding-agent (pi-coding-agent)
  • mom (pi-mom)
  • pods
  • tui (pi-tui)
  • web-ui (pi-web-ui)

Would you like me to dive into how any specific package is built, or explore the TypeScript configuration?

Julian Benegas
commented

the tui: how is it built? how do they avoid performance issues while streaming?

Claude Opus 4.5
commented

Now I have a complete picture. Here's how the TUI is built and how it avoids performance issues during streaming:


## TUI Build System

| Aspect | Details |
| --- | --- |
| Compiler | tsgo (native TypeScript preview) |
| Build command | `tsgo -p tsconfig.build.json` |
| Output | `dist/` directory (ESM) |
| Dependencies | chalk, marked, get-east-asian-width, mime-types |

## Performance: How Streaming Stays Flicker-Free

The TUI uses four key techniques to avoid performance issues during streaming:

### 1. Differential Rendering (`doRender()` in tui.ts:321)

Instead of redrawing the whole screen, it uses three strategies:

```
┌─────────────────────────────────────────────────────────────┐
│ First Render      → Output all lines, no clearing           │
│ Width Changed     → Clear screen, full re-render            │
│ Normal Update     → Find first changed line, only redraw    │
│                     from there down                         │
└─────────────────────────────────────────────────────────────┘
```

The key is in `doRender()`: it compares `newLines` with `previousLines` and only updates what changed:

```typescript
// Find first changed line
const maxLines = Math.max(this.previousLines.length, newLines.length);
let firstChanged = -1;
for (let i = 0; i < maxLines; i++) {
    const oldLine = this.previousLines[i] ?? "";
    const newLine = newLines[i] ?? "";
    if (oldLine !== newLine && firstChanged === -1) {
        firstChanged = i;
    }
}
// If nothing changed, return early
if (firstChanged === -1) return;
```

### 2. Synchronized Output (CSI 2026)

All updates are wrapped in atomic terminal updates:

```typescript
let buffer = "\x1b[?2026h"; // Begin synchronized output
// ... build all changes ...
buffer += "\x1b[?2026l"; // End synchronized output
this.terminal.write(buffer);
```

This tells the terminal to buffer all the escape sequences and render them in one atomic operation—no flicker.

### 3. Render Coalescing (`requestRender()`)

Multiple rapid updates are batched via process.nextTick():

```typescript
requestRender(force = false): void {
    if (this.renderRequested) return;  // Already scheduled, skip
    this.renderRequested = true;
    process.nextTick(() => {
        this.renderRequested = false;
        this.doRender();
    });
}
```

So if streaming AI sends 100 tokens in a tight loop, only one render happens at the end of the current event loop tick.

### 4. Component-Level Caching (e.g., markdown.ts:81)

Components like Markdown cache their rendered output:

```typescript
render(width: number): string[] {
    // Check cache - if text and width unchanged, return cached lines
    if (this.cachedLines && this.cachedText === this.text && this.cachedWidth === width) {
        return this.cachedLines;
    }
    // ... expensive markdown parsing only when needed ...
}
```

## The Streaming Flow

When streaming LLM responses:

```
Token arrives → markdown.setText(text) → component.invalidate()
     ↓
tui.requestRender() → (coalesced)
     ↓
process.nextTick → doRender()
     ↓
Compare previousLines vs newLines → Only redraw changed lines
     ↓
Wrap in synchronized output → Single atomic write() call
```

This means even with hundreds of streaming tokens per second, you get smooth, flicker-free updates.

Julian Benegas
commented

> Find first changed line, only redraw from there down

how do they achieve that? why something like Ink doesn't have that? cc https://forums.basehub.com/google-gemini/gemini-cli/1

Claude Opus 4.5
commented

Now I have everything. Let me explain the key difference:


## How pi-tui Achieves "First Changed Line, Only Redraw From There"

### The Core Technique: String Comparison + Cursor Movement

The magic is in `doRender()` at lines 370-430. Here's the key algorithm:

```typescript
// 1. Render ALL components to get new lines (as strings)
let newLines = this.render(width);

// 2. Compare old vs new, find FIRST changed line
const maxLines = Math.max(this.previousLines.length, newLines.length);
let firstChanged = -1;
for (let i = 0; i < maxLines; i++) {
    const oldLine = this.previousLines[i] ?? "";
    const newLine = newLines[i] ?? "";

    if (oldLine !== newLine && firstChanged === -1) {
        firstChanged = i;  // Found it!
    }
}

// 3. If nothing changed, skip render entirely
if (firstChanged === -1) return;

// 4. Move cursor to that line using ANSI escape codes
let buffer = "";  // (in the real code this starts with the sync-output sequence)
const lineDiff = firstChanged - this.cursorRow;
if (lineDiff > 0) {
    buffer += `\x1b[${lineDiff}B`;  // Move down N lines
} else if (lineDiff < 0) {
    buffer += `\x1b[${-lineDiff}A`; // Move up N lines
}
buffer += "\r";  // Move to column 0

// 5. Redraw only from firstChanged to end
for (let i = firstChanged; i < newLines.length; i++) {
    buffer += "\x1b[2K";  // Clear current line
    buffer += newLines[i];
    if (i < newLines.length - 1) buffer += "\r\n";
}
```

### The Key Insight: Lines Are Just Strings

pi-tui treats the terminal as an array of strings. Each component's `render(width)` returns `string[]`. This makes diffing trivial:

```typescript
// Component output is flat strings with ANSI codes baked in
["Welcome to my app!", "\x1b[32mStatus: OK\x1b[0m", "Line 3..."]

// Simple string equality check
if (previousLines[i] !== newLines[i])  // O(1) reference check, then O(n) string compare
```

## Why Ink Doesn't Have This

### Ink's Architecture Problem

Ink uses React's reconciliation which operates on a component tree, not strings:

```
Ink Flow:
  State changes → React reconciles VDOM → Ink's renderer → ANSI output → Terminal
                  ↑
                  This is the problem!
```

React doesn't know which terminal lines changed - it only knows which components changed. When a component re-renders, Ink has to:

  1. Re-run the component's render function
  2. Measure the new layout (flexbox-like)
  3. Convert to ANSI strings
  4. Write to terminal

There's no concept of "line 5 changed, skip lines 1-4" because React doesn't think in terminal lines.

### Ink's `<Static>` Workaround

Ink's solution (used by Gemini CLI as shown in the forum post) is `<Static>`:

```tsx
<Static items={completedHistory}>
  {(item) => <HistoryItem {...item} />}
</Static>
{currentStreamingMessage}  {/* Only this re-renders */}
```

This is a workaround, not a fix. You have to:

  • Manually split your UI into "static" and "dynamic" zones
  • Manage which items go where
  • Handle the split logic yourself (like findLastSafeSplitPoint())

### pi-tui's Advantage: Direct Terminal Control

pi-tui skips the React abstraction entirely:

```
pi-tui Flow:
  State changes → Components render strings → Diff strings → ANSI escape codes → Terminal
                                              ↑
                                              Simple string comparison!
```

Because it works directly with terminal lines as strings, it can:

  1. Compare previous vs new output (simple string equality)
  2. Find the first difference
  3. Use cursor movement (`\x1b[NA`, `\x1b[NB`) to jump there
  4. Only write from that point forward

### The Trade-off

| Aspect | Ink | pi-tui |
| --- | --- | --- |
| Abstraction | React components, flexbox layout | Raw strings, manual layout |
| Ease of use | Higher-level, familiar React patterns | Lower-level, must manage width yourself |
| Performance | Depends on `<Static>` usage | Inherently efficient (line-level diffing) |
| Layout | Automatic flexbox-like | Manual (each component must fit width) |
| Streaming | Needs manual Static/Pending split | Just update the string, TUI diffs it |

### Concrete Example

Streaming a 1000-line response:

Ink without `<Static>`:

```
Token arrives → Component re-renders → Full layout recalc → Full terminal write
Token arrives → Component re-renders → Full layout recalc → Full terminal write
... (1000 times, getting slower as content grows)
```

Ink with `<Static>` (Gemini CLI approach):

```
Token arrives → Check split point → Maybe commit chunk to Static
             → Only pending component re-renders → Partial terminal write
... (better, but requires careful splitting logic)
```

pi-tui:

```
Token arrives → Markdown component updates text → render() returns new string[]
             → Compare with previous: "Line 47 changed!"
             → Move cursor to line 47, write lines 47-50
... (always O(changed lines), automatic)
```

## Summary

| Question | Answer |
| --- | --- |
| How does pi-tui find changed lines? | Simple string comparison: `previousLines[i] !== newLines[i]` |
| How does it redraw from there? | ANSI cursor movement: `\x1b[NA` (up), `\x1b[NB` (down), then write |
| Why can't Ink do this? | Ink uses React's component tree, not terminal lines. No line-level diffing. |
| What's Ink's workaround? | `<Static>` component: freezes completed content, only updates pending |
| Which is simpler for streaming? | pi-tui: just update the string, diffing is automatic |


How It's Built: Product Design Breakdown — badlogic/pi-mono