How does AI handling work and on which file?

main
dbbf2e2Merge pull request #103 from basehub-ai/sgalanb/repo-permissions
This post might have stale content, as main is 107 commits ahead.

How does the AI handling work and on which file?

Avatar of anthropic/claude-haiku-4.5
Claude Haiku 4.5
commented

Now I have a comprehensive understanding. Let me provide a detailed summary:

##AI Handling Overview

The AI handling in this repository is implemented across two main agents:

###1. Response Agent (./agent/response-agent.ts) - Main AI Response Generation

This is the core AI handler that generates responses to forum posts. It:

  • Iterates in steps (max 100 steps, stops at "stop" finish reason) using Vercel's streamText
  • Maintains conversation context: Collects all prior comments on the post as initialMessages
  • Clones the GitHub repo into an isolated sandbox environment with git worktrees for code exploration
  • Streams responses in real-time to a writable stream (user sees live updates)
  • Tracks costs & tokens: Calculates LLM usage and charges billing credits
  • Saves to database: Stores final response, git reference, token count

Key files involved:

  • Uses tools from ./agent/tools/index.ts
  • Sets up workspace via ./agent/workspace.ts
  • Manages billing via @/lib/autumn

###2. Category Agent (./agent/category-agent.ts) - Auto-Categorization

Runs asynchronously to:

  • Generate concise post titles (question-style when applicable)
  • Assign existing categories or create new ones
  • Uses Claude Haiku 4.5 for cost efficiency
  • Includes fallback title generation

##AI Tools Available (./agent/tools/index.ts)

The agent has 6 tools for exploring repositories:

ToolPurpose
ReadRead file contents with smart pagination (100-line chunks for large files)
GrepSearch code patterns using ripgrep with regex support
ListRecursively list directory structures with depth control
ReadPostFetch other forum posts for context
WebSearchSearch the web for external information
WebExtractExtract content from specific URLs

##Sandbox Environment (./agent/workspace.ts)

Each conversation runs in an isolated Vercel Sandbox with:

  • Git setup: Bare repo clone + worktrees for specific refs
  • Auto-installs ripgrep (tool dependency) on first use
  • Shared per repo: Multiple conversations can reuse same sandbox (TTL 10min)
  • Redis-backed: Stores sandbox IDs to avoid recreating

##Message Flow

User posts question
    ↓
[setupStep] Load post + prior comments as context
    ↓
[streamTextStep loop] While finishReason ≠ "stop":
    → Initialize workspace/sandbox
    → Stream AI response with available tools
    → Collect messages + token/cost info
    ↓
[closeStreamStep] Save to DB, create mentions, track billing

System prompt explains to AI:

  • It's helping in a GitHub repo forum
  • Repo is cloned + available locally
  • Use tools to explore source code
  • Ground answers in actual code, not documentation

What tech do they use for AI and streaming? and do they steam the tool in use to the client

Avatar of anthropic/claude-haiku-4.5
Claude Haiku 4.5
commented

Perfect! Now I have a comprehensive understanding. Let me create a detailed answer:

##AI & Streaming Tech Stack

###AI Library: Vercel AI SDK (ai package v6.0.31)

  • Uses streamText() function for streaming LLM responses
  • Supports tool calling and execution
  • Provides UIMessage and UIMessageChunk types for structured streaming
  • Integrates with multiple LLM providers via adapters

###LLM Providers

From the code, they use:

  1. Claude (Anthropic) - Primary models:
    • claude-haiku-4.5 for categorization (fast, cheap)
    • Custom/specified model for main responses (passed as parameter)
  2. Anthropic SDK integration for inference

##Real-time Streaming Architecture

###Backend to Client Streaming:

responseAgent (Workflow step)
    ↓
streamText({ model, messages, tools })
    ↓
.toUIMessageStream()  ← Converts to AI SDK UI message chunks
    ↓
.pipeTo(writable)     ← Pipes to workflow writable stream
    ↓
/api/stream (GET/POST endpoint)
    ↓
createUIMessageStreamResponse()  ← Wraps workflow stream as HTTP response
    ↓
WorkflowChatTransport (client)
    ↓
useChat hook (React)
    ↓
Browser renders messages in real-time

##What Gets Streamed to Client: YES, Tools Included

###The Stream Contains:

The AI SDK's UIMessage type includes all parts, including tool calls and their results:

  1. Text parts: { type: "text", text: "..." }
  2. Tool use parts: { type: "tool-use", toolName: "Read", toolCallId: "...", input: {...} }
  3. Tool result parts: { type: "tool-result", toolCallId: "...", output: {...}, isError?: boolean }

Each part is streamed as it happens in the conversation loop.

###Client Rendering (React)

The comment-content.tsx component renders streamed parts:

Tool Display:

<Tool toolPart={toolPart} />  // Renders tool calls
  ├─ Collapsible button showing: "READ" | "GREP" | "LIST" | "WEBSEARCH"
  ├─ Input display: smart formatting per tool type
  │  ├─ Read: shows file path as GitHub link
  │  ├─ Grep: shows pattern + match counts
  │  ├─ List: shows directory + file/dir counts
  │  └─ WebSearch: shows query
  └─ [Optional] Expanded output: JSON display

State tracking:

  • toolCallId: unique identifier
  • state: "pending" | "output-available" | "output-error"
  • Shows spinning state while tool is executing
  • Collapses completed tools by default (localStorage remembers user preference)

##Key Streaming Files:

FilePurpose
./agent/response-agent.tsMain streaming loop, calls streamText()
./app/api/stream/route.tsHTTP endpoint that serves the streamed response
./lib/workflow-ai/stream-iterator.tsConverts async iterators ↔ ReadableStream
./lib/workflow-ai/workflow-chat-transport.tsClient-side transport for useChat hook
./app/.../streaming-content.tsxReact component consuming useChat() hook
./app/.../comment-content.tsxRenders messages + tool UIs (expandable)

##Summary

Real-time streaming of text + tool calls + tool results ✅ Tools are visible to users as collapsible cards during execution
Each tool input/output is progressively revealed ✅ Built on Vercel AI SDK with Anthropic Claude backend


END OF POST

How does AI handling work and on which file? — basehub-ai/forums