What is the SDK structure and technology stack?

release: v1.1.17 (commit 29bf731, dev branch)
This post might have stale content, as dev is 2014 commits ahead.

how is the SDK structured? what stack does it use?

Claude Opus 4.5 commented:

Now I have a comprehensive picture of the SDK. Let me summarize:


## SDK Structure & Stack

### Technology Stack

| Layer | Technology |
| --- | --- |
| Runtime | Bun (v1.3.5) |
| Language | TypeScript |
| API Spec | OpenAPI 3.1.1 |
| SDK Generator | @hey-api/openapi-ts |
| HTTP Client | Generated fetch-based client |
| Package Manager | Bun workspaces (monorepo) |
| Build | TypeScript compiler (tsc) / Native TS (tsgo) |

### SDK Structure

```
packages/sdk/
├── openapi.json          # OpenAPI specification (auto-generated from main app)
└── js/
    ├── package.json      # @opencode-ai/sdk package
    ├── script/
    │   ├── build.ts      # Generates SDK from OpenAPI spec
    │   └── publish.ts
    ├── example/
    │   └── example.ts    # Usage example
    └── src/
        ├── index.ts      # Main entry: createOpencode()
        ├── client.ts     # createOpencodeClient() - HTTP client wrapper
        ├── server.ts     # createOpencodeServer() - Spawns opencode server
        ├── gen/          # Auto-generated code from OpenAPI
        │   ├── sdk.gen.ts      # Main client class (OpencodeClient)
        │   ├── types.gen.ts    # TypeScript types
        │   ├── client/         # HTTP client implementation
        │   └── core/           # Utilities (auth, serializers, SSE, etc.)
        └── v2/           # v2 API (similar structure)
```

### Key Components

1. `createOpencodeServer(options)` - Spawns the opencode CLI in server mode (`opencode serve`) and returns the URL when ready.

2. `createOpencodeClient(config)` - Creates an HTTP client that wraps the generated SDK with custom fetch handling.

3. `createOpencode(options)` - Convenience function that creates both server and client together.

4. `createOpencodeTui(options)` - Spawns the interactive TUI mode of opencode.


### Generated Client API Surface

The `OpencodeClient` class provides namespaced access to all API endpoints:

| Namespace | Purpose |
| --- | --- |
| `global` | Health checks, global events |
| `project` | Project listing and management |
| `session` | Chat sessions, prompts, messages, forking, sharing |
| `config` | Configuration management |
| `provider` | LLM providers, OAuth |
| `tool` | Tool management |
| `mcp` | Model Context Protocol servers |
| `lsp` | Language Server Protocol |
| `file` | File operations |
| `find` | Search (text, files, symbols) |
| `pty` | Pseudo-terminal management |
| `tui` | TUI control (toasts, commands, prompts) |
| `event` | Server-sent events subscription |

### Build Process

The SDK is auto-generated from the OpenAPI spec. The `script/build.ts` workflow:

1. Run `bun dev generate` to export `openapi.json` from the main app
2. Use `@hey-api/openapi-ts` to generate the TypeScript client
3. Format with Prettier
4. Compile with `tsc`

This ensures the SDK always stays in sync with the actual API implementation.

Julian Benegas commented:

what functionality does it expose? i assume it wraps around the ai SDK? why does it use http? so maybe more directly: how do i implement an opencode server?

Claude Opus 4.5 commented:

Now I have a complete picture. Let me summarize everything for you:


## How the SDK Works — The Full Picture

### Architecture Overview

```
┌──────────────────────────────────────────────────────────────────┐
│                         SDK Client                                │
│                   (@opencode-ai/sdk)                              │
└─────────────────────────┬────────────────────────────────────────┘
                          │ HTTP (localhost:4096)
                          ▼
┌──────────────────────────────────────────────────────────────────┐
│                    OpenCode Server                                │
│                  (Hono HTTP Server)                               │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  Routes: /session, /project, /config, /provider, etc.      │  │
│  └────────────────────────────────────────────────────────────┘  │
└─────────────────────────┬────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────────────────┐
│                   Session / LLM Layer                             │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  SessionPrompt.prompt() → LLM.stream()                     │  │
│  └────────────────────────────────────────────────────────────┘  │
└─────────────────────────┬────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────────────────┐
│                    Vercel AI SDK                                  │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  streamText() with provider-specific SDK                   │  │
│  │  (@ai-sdk/anthropic, @ai-sdk/openai, etc.)                 │  │
│  └────────────────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────────────────┘
```

### Yes, it wraps the Vercel AI SDK

The core LLM interaction happens in `packages/opencode/src/session/llm.ts`:

```typescript
import { streamText, wrapLanguageModel, extractReasoningMiddleware } from "ai"

export namespace LLM {
  export async function stream(input: StreamInput) {
    // ...
    return streamText({
      model: wrapLanguageModel({
        model: language,  // from Provider.getLanguage()
        middleware: [extractReasoningMiddleware({ tagName: "think" })],
      }),
      tools,
      messages,
      system,
      // ...
    })
  }
}
```

### Why HTTP?

The HTTP layer serves several purposes:

  1. Process isolation — The server runs as a separate process (opencode serve), allowing the TUI, web UI, IDE extensions, and SDK clients to all connect to the same running instance

  2. Multi-client support — Multiple clients can connect simultaneously (VS Code extension + CLI + web UI)

  3. Event streaming — Server-Sent Events (SSE) for real-time updates across all connected clients

  4. State management — Sessions, messages, and configuration are managed centrally

  5. Tool execution — Tools like file editing, shell commands, etc. run in the server's process context with proper permissions
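Under the hood, that event streaming uses the standard SSE wire format: messages separated by blank lines, with payloads on `data:` lines. As a rough illustration of what any SSE consumer does with a raw chunk (a self-contained sketch, not the SDK's actual generated code; the event shape is illustrative), a minimal parser looks like this:

```typescript
// Minimal Server-Sent Events parser: splits a raw SSE chunk into JSON events.
// The SdkEvent shape here is illustrative, not the SDK's generated types.
type SseEvent = { type: string; properties?: Record<string, unknown> }

function parseSseChunk(chunk: string): SseEvent[] {
  const events: SseEvent[] = []
  // SSE messages are separated by a blank line; payloads live on `data:` lines.
  for (const message of chunk.split("\n\n")) {
    const data = message
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trim())
      .join("\n")
    if (data) events.push(JSON.parse(data) as SseEvent)
  }
  return events
}
```

The generated client hides all of this behind an async iterator, but the wire format is exactly this simple.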


## How to Implement Your Own OpenCode Server

If you want to build something similar, here's the core architecture:

### Option 1: Use the Official Server via the SDK

The simplest path is to use the SDK, which spawns the official server:

```typescript
import { createOpencode } from "@opencode-ai/sdk"

const { client, server } = await createOpencode()

// Create a session
const session = await client.session.create()

// Send a prompt and get a response
const response = await client.session.prompt({
  path: { id: session.data.id },
  body: {
    parts: [{ type: "text", text: "Write a hello world function" }]
  }
})

// Subscribe to events
for await (const event of client.event.subscribe()) {
  console.log(event)
}

// Cleanup
server.close()
```

### Option 2: Build Your Own Server

If you want to build your own implementation, here's the minimal stack:

```typescript
import { Hono } from "hono"
import { streamText } from "ai"
import { createAnthropic } from "@ai-sdk/anthropic"

const app = new Hono()
const sessions = new Map()

// Session management
app.post("/session", async (c) => {
  const id = crypto.randomUUID()
  sessions.set(id, { messages: [], tools: {} })
  return c.json({ id })
})

// Prompt endpoint
app.post("/session/:id/message", async (c) => {
  const session = sessions.get(c.req.param("id"))
  const { text } = await c.req.json()

  // Add user message
  session.messages.push({ role: "user", content: text })

  // Stream LLM response (streamText returns its result object synchronously)
  const result = streamText({
    model: createAnthropic()("claude-sonnet-4-20250514"),
    messages: session.messages,
    tools: session.tools,
    system: "You are a coding assistant..."
  })

  // Collect and store assistant response
  const response = await result.text
  session.messages.push({ role: "assistant", content: response })

  return c.json({ response })
})

export default app
```

### Key Components You'd Need to Implement

| Component | OpenCode Implementation | Purpose |
| --- | --- | --- |
| Session storage | packages/opencode/src/session/index.ts | Persist conversation history |
| Message handling | packages/opencode/src/session/message-v2.ts | Structure messages with parts (text, files, tool calls) |
| Tool system | packages/opencode/src/tool/ | File editing, shell, search, etc. |
| Provider abstraction | packages/opencode/src/provider/provider.ts | Support multiple LLM providers |
| System prompts | packages/opencode/src/session/system.ts | Provider-specific system prompts |
| Event bus | packages/opencode/src/bus/ | Real-time event distribution |
| Permission system | packages/opencode/src/permission/ | Tool approval workflow |
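To make the message-handling row concrete, here is a hedged sketch of what a part-based message can look like. The type and field names below are illustrative assumptions, not the real `message-v2` schema, but the idea is the same: a message is a role plus an ordered list of typed parts.

```typescript
// Illustrative part-based message shape (names are assumptions, not
// opencode's actual schema): each part is a discriminated union member.
type Part =
  | { type: "text"; text: string }
  | { type: "file"; url: string; mime: string }
  | { type: "tool-invocation"; tool: string; args: Record<string, unknown> }

interface Message {
  role: "user" | "assistant"
  parts: Part[]
}

// A user message that attaches a file and asks a question about it.
const message: Message = {
  role: "user",
  parts: [
    { type: "file", url: "file:///path/to/file.ts", mime: "text/plain" },
    { type: "text", text: "Refactor this file to use async/await" },
  ],
}

// Collapse just the text parts into a single prompt string.
function textOf(msg: Message): string {
  return msg.parts
    .filter((p): p is Extract<Part, { type: "text" }> => p.type === "text")
    .map((p) => p.text)
    .join("\n")
}
```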

### The Key Insight

The SDK is essentially a typed HTTP client for the OpenCode server. The real "magic" is in `packages/opencode`:

  • The Vercel AI SDK (`streamText`, provider SDKs) handles LLM communication
  • Hono handles HTTP routing
  • The session/prompt system manages the agentic loop (tool calls → execution → continue)

If you want to understand the agentic loop, look at `SessionPrompt.loop()` in `packages/opencode/src/session/prompt.ts` — that's where it iterates until the model stops requesting tool calls.
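That loop pattern can be sketched in a few lines. Everything below is a mock for illustration (the `Model`, `Tools`, and `agentLoop` names are made up, and this is not `SessionPrompt.loop()` itself), but it shows the shape: call the model, execute any requested tool, feed the result back, and stop when the model answers with plain text.

```typescript
// Minimal agentic loop: iterate until the model stops requesting tool calls.
type ModelTurn =
  | { kind: "text"; text: string }          // model is done: final answer
  | { kind: "tool"; name: string; args: string } // model wants a tool run

type Model = (history: string[]) => ModelTurn
type Tools = Record<string, (args: string) => string>

function agentLoop(model: Model, tools: Tools, prompt: string): string {
  const history = [prompt]
  for (let step = 0; step < 10; step++) {          // hard cap against runaway loops
    const turn = model(history)
    if (turn.kind === "text") return turn.text     // no more tool calls: finish
    const result = tools[turn.name](turn.args)     // execute the requested tool
    history.push(`tool:${turn.name} -> ${result}`) // feed the result back in
  }
  throw new Error("loop did not terminate")
}

// Mock model: asks for the `read` tool once, then answers.
const mockModel: Model = (history) =>
  history.some((h) => h.startsWith("tool:read"))
    ? { kind: "text", text: "done" }
    : { kind: "tool", name: "read", args: "file.ts" }

const mockTools: Tools = { read: (args) => `contents of ${args}` }
```

The real loop tracks structured messages rather than strings, streams partial output, and gates tools behind the permission system, but the control flow is the same.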

Julian Benegas commented:

in this SDK: the "client" isn't actually meant to be used in the browser, right? it'd be used, say, from my own API layer?

Claude Opus 4.5 commented:

Yes, you're correct. The SDK is server-side only:

### Evidence

  1. server.ts imports spawn from node:child_process — this is Node.js/Bun only, not available in browsers

  2. The client itself (client.gen.ts) uses globalThis.fetch which could work in a browser, but...

  3. The intended usage is to spawn the opencode CLI binary:

    const proc = spawn(`opencode`, args, { ... })
    

    This requires the opencode binary to be installed and accessible — obviously not possible in a browser.

  4. The default baseUrl is http://localhost:4096 — connecting to a local server

### Typical Usage Pattern

```
┌─────────────────────┐
│   Your Frontend     │  (browser)
│   (React, etc.)     │
└──────────┬──────────┘
           │ HTTP/WebSocket
           ▼
┌─────────────────────┐
│   Your API Layer    │  (Node.js/Bun server)
│   (Next.js API,     │
│    Hono, Express)   │
│                     │
│  ┌───────────────┐  │
│  │ @opencode-ai/ │  │  ← SDK lives here
│  │     sdk       │  │
│  └───────┬───────┘  │
│          │          │
│          ▼          │
│  ┌───────────────┐  │
│  │   opencode    │  │  ← spawned server process
│  │    server     │  │
│  └───────────────┘  │
└─────────────────────┘
```

So yes — you'd use the SDK from your own backend/API layer, not directly from the browser. The browser would talk to your API, which then uses the SDK to interact with the opencode server.

Julian Benegas commented:

how do i call/query/generate an LLM response

Claude Opus 4.5 commented:

Now I have the full picture. Here's how to call/query/generate an LLM response:


## How to Generate an LLM Response

### Basic Pattern

```typescript
import { createOpencodeClient, createOpencodeServer } from "@opencode-ai/sdk"

// 1. Start the server (spawns `opencode serve`)
const server = await createOpencodeServer()
const client = createOpencodeClient({ baseUrl: server.url })

// 2. Create a session
const session = await client.session.create()

// 3. Send a prompt (blocks until complete)
const response = await client.session.prompt({
  path: { id: session.data.id },
  body: {
    parts: [
      { type: "text", text: "Write a hello world function in TypeScript" }
    ]
  }
})

// response.data contains the final assistant message
console.log(response.data)
```

### Streaming Pattern (Real-time Updates)

The SDK uses Server-Sent Events (SSE) to stream updates. You subscribe to the event stream and filter for relevant events:

```typescript
import { createOpencodeClient, createOpencodeServer } from "@opencode-ai/sdk"

const server = await createOpencodeServer()
const client = createOpencodeClient({ baseUrl: server.url })

// Subscribe to events BEFORE sending the prompt
const events = await client.event.subscribe()

// Create a session
const session = await client.session.create()
const sessionId = session.data.id

// Fire and forget — deliberately not awaited; updates arrive via SSE
client.session.promptAsync({
  path: { id: sessionId },
  body: {
    parts: [{ type: "text", text: "Explain how async/await works" }]
  }
})

// Stream events
for await (const event of events.stream) {
  // Filter for this session's events
  if (event.type === "message.part.updated") {
    const { part, delta } = event.properties
    if (part.sessionID === sessionId) {
      if (part.type === "text" && delta) {
        // Stream text chunks as they arrive
        process.stdout.write(delta)
      }
    }
  }

  if (event.type === "session.idle") {
    // Session finished processing
    if (event.properties.sessionID === sessionId) {
      break
    }
  }
}
```

### Key Event Types

| Event | Purpose |
| --- | --- |
| `message.part.updated` | Streaming text chunks, tool calls, etc. |
| `message.updated` | Full message metadata (tokens, cost, finish reason) |
| `session.idle` | Session finished processing |
| `session.error` | Error occurred |
| `permission.updated` | Tool needs approval |

### Part Types in Responses

The `part` in `message.part.updated` can be:

| Type | Content |
| --- | --- |
| `text` | LLM text output (with `delta` for streaming) |
| `reasoning` | Chain-of-thought reasoning |
| `tool-invocation` | Tool being called |
| `tool-result` | Tool execution result |
| `file` | File reference |
| `snapshot` | Code state snapshot |
| `patch` | File diff/patch |
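Putting the two tables together: assembling the streamed text for a session just means collecting `message.part.updated` events whose part is a `text` part with a `delta` for that session. A small self-contained sketch (the event shapes follow the examples in this thread; the helper name and the `SdkEvent` type are made up for illustration):

```typescript
// Accumulate streamed text for one session from a list of events:
// keep only `message.part.updated` text deltas belonging to that session.
interface PartUpdated {
  type: "message.part.updated"
  properties: { part: { sessionID: string; type: string }; delta?: string }
}
type SdkEvent = PartUpdated | { type: string; properties?: unknown }

function textForSession(events: SdkEvent[], sessionId: string): string {
  let text = ""
  for (const event of events) {
    if (event.type !== "message.part.updated") continue
    const { part, delta } = (event as PartUpdated).properties
    if (part.sessionID === sessionId && part.type === "text" && delta) {
      text += delta // deltas arrive in order, so concatenation rebuilds the message
    }
  }
  return text
}
```

In a live client you would do the same filtering inside the `for await` loop shown above instead of over a materialized array.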

### Specifying Model/Agent

```typescript
await client.session.prompt({
  path: { id: sessionId },
  body: {
    // Override model for this prompt
    model: {
      providerID: "anthropic",
      modelID: "claude-sonnet-4-20250514"
    },
    // Use specific agent
    agent: "code",
    parts: [
      { type: "text", text: "Your prompt here" }
    ]
  }
})
```

### Including Files

```typescript
await client.session.prompt({
  path: { id: sessionId },
  body: {
    parts: [
      {
        type: "file",
        url: "file:///path/to/file.ts",
        mime: "text/plain"
      },
      {
        type: "text",
        text: "Refactor this file to use async/await"
      }
    ]
  }
})
```

### Summary

  • `client.session.prompt()` — Blocks until the full response is ready
  • `client.session.promptAsync()` — Returns immediately, use SSE to stream
  • `client.event.subscribe()` — SSE stream of all events (filter by `sessionID`)

The "magic" is that opencode handles the full agentic loop internally — tool calls, file edits, retries, etc. You just send a prompt and receive the final result (or stream intermediate updates).



What is the SDK structure and technology stack? — anomalyco/opencode