What is the SDK structure and technology stack?
how is the SDK structured? what stack does it use?
Now I have a comprehensive picture of the SDK. Let me summarize:
## SDK Structure & Stack
### Technology Stack
| Layer | Technology |
|---|---|
| Runtime | Bun (v1.3.5) |
| Language | TypeScript |
| API Spec | OpenAPI 3.1.1 |
| SDK Generator | @hey-api/openapi-ts |
| HTTP Client | Generated fetch-based client |
| Package Manager | Bun workspaces (monorepo) |
| Build | TypeScript compiler (tsc) / Native TS (tsgo) |
### SDK Structure

```
packages/sdk/
├── openapi.json          # OpenAPI specification (auto-generated from main app)
└── js/
    ├── package.json      # @opencode-ai/sdk package
    ├── script/
    │   ├── build.ts      # Generates SDK from OpenAPI spec
    │   └── publish.ts
    ├── example/
    │   └── example.ts    # Usage example
    └── src/
        ├── index.ts      # Main entry: createOpencode()
        ├── client.ts     # createOpencodeClient() - HTTP client wrapper
        ├── server.ts     # createOpencodeServer() - Spawns opencode server
        ├── gen/          # Auto-generated code from OpenAPI
        │   ├── sdk.gen.ts    # Main client class (OpencodeClient)
        │   ├── types.gen.ts  # TypeScript types
        │   ├── client/       # HTTP client implementation
        │   └── core/         # Utilities (auth, serializers, SSE, etc.)
        └── v2/           # v2 API (similar structure)
```
### Key Components

- `createOpencodeServer(options)`: Spawns the `opencode` CLI in server mode (`opencode serve`) and returns the URL when ready.
- `createOpencodeClient(config)`: Creates an HTTP client that wraps the generated SDK with custom fetch handling.
- `createOpencode(options)`: Convenience function that creates both server and client together.
- `createOpencodeTui(options)`: Spawns the interactive TUI mode of opencode.
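Conceptually, `createOpencode()` just composes the first two helpers. A minimal sketch of that composition, with hypothetical stand-in types (the real signatures live in `src/server.ts` and `src/client.ts`):

```typescript
// Conceptual sketch only: Server/Client shapes are simplified stand-ins,
// not the SDK's actual types.
type Server = { url: string; close(): void }
type Client = { baseUrl: string }

async function createOpencodeSketch(
  startServer: () => Promise<Server>, // like createOpencodeServer()
  makeClient: (config: { baseUrl: string }) => Client, // like createOpencodeClient()
) {
  const server = await startServer() // spawns `opencode serve`, resolves when ready
  const client = makeClient({ baseUrl: server.url }) // typed HTTP wrapper over it
  return { server, client }
}
```

Injecting the two factories here is only for illustration; the real function calls its siblings directly.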
### Generated Client API Surface

The `OpencodeClient` class provides namespaced access to all API endpoints:

| Namespace | Purpose |
|---|---|
| `global` | Health checks, global events |
| `project` | Project listing and management |
| `session` | Chat sessions, prompts, messages, forking, sharing |
| `config` | Configuration management |
| `provider` | LLM providers, OAuth |
| `tool` | Tool management |
| `mcp` | Model Context Protocol servers |
| `lsp` | Language Server Protocol |
| `file` | File operations |
| `find` | Search (text, files, symbols) |
| `pty` | Pseudo-terminal management |
| `tui` | TUI control (toasts, commands, prompts) |
| `event` | Server-sent events subscription |
### Build Process
The SDK is auto-generated from the OpenAPI spec:
The `script/build.ts` workflow:

1. Run `bun dev generate` to export `openapi.json` from the main app
2. Use `@hey-api/openapi-ts` to generate the TypeScript client
3. Format with Prettier
4. Compile with `tsc`
This ensures the SDK always stays in sync with the actual API implementation.
what functionality does it expose? i assume it wraps around the ai SDK? why does it use http? so maybe more directly: how do i implement an opencode server?
Now I have a complete picture. Let me summarize everything for you:
## How the SDK Works — The Full Picture
### Architecture Overview

```
┌──────────────────────────────────────────────────────────────────┐
│                          SDK Client                              │
│                      (@opencode-ai/sdk)                          │
└─────────────────────────┬────────────────────────────────────────┘
                          │ HTTP (localhost:4096)
                          ▼
┌──────────────────────────────────────────────────────────────────┐
│                       OpenCode Server                            │
│                     (Hono HTTP Server)                           │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  Routes: /session, /project, /config, /provider, etc.      │  │
│  └────────────────────────────────────────────────────────────┘  │
└─────────────────────────┬────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────────────────┐
│                     Session / LLM Layer                          │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  SessionPrompt.prompt() → LLM.stream()                     │  │
│  └────────────────────────────────────────────────────────────┘  │
└─────────────────────────┬────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────────────────┐
│                       Vercel AI SDK                              │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  streamText() with provider-specific SDK                   │  │
│  │  (@ai-sdk/anthropic, @ai-sdk/openai, etc.)                 │  │
│  └────────────────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────────────────┘
```
### Yes, it wraps the Vercel AI SDK
The core LLM interaction happens in `packages/opencode/src/session/llm.ts`:

```typescript
import { streamText, wrapLanguageModel, extractReasoningMiddleware } from "ai"

export namespace LLM {
  export async function stream(input: StreamInput) {
    // ...
    return streamText({
      model: wrapLanguageModel({
        model: language, // from Provider.getLanguage()
        middleware: [extractReasoningMiddleware({ tagName: "think" })],
      }),
      tools,
      messages,
      system,
      // ...
    })
  }
}
```
### Why HTTP?
The HTTP layer serves several purposes:
- **Process isolation**: The server runs as a separate process (`opencode serve`), allowing the TUI, web UI, IDE extensions, and SDK clients to all connect to the same running instance.
- **Multi-client support**: Multiple clients can connect simultaneously (VS Code extension + CLI + web UI).
- **Event streaming**: Server-Sent Events (SSE) push real-time updates to all connected clients.
- **State management**: Sessions, messages, and configuration are managed centrally.
- **Tool execution**: Tools like file editing and shell commands run in the server's process context with proper permissions.
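On the event-streaming point: SSE is plain text over HTTP, where each event is one or more `data:` lines followed by a blank line. A minimal parser sketch, just to show the wire format (the SDK's generated `core/` utilities handle this for you):

```typescript
// Parse a chunk of an SSE response body into JSON events.
// Standard SSE framing: "data: <payload>" lines, events separated by "\n\n".
function* parseSse(chunk: string): Generator<unknown> {
  for (const block of chunk.split("\n\n")) {
    const data = block
      .split("\n")
      .filter((line) => line.startsWith("data: "))
      .map((line) => line.slice("data: ".length))
      .join("\n")
    if (data) yield JSON.parse(data)
  }
}
```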
## How to Implement Your Own OpenCode Server
If you want to build something similar, here's the core architecture:
### Option 1: Use the SDK (Recommended)
The simplest path is to use the SDK which spawns the official server:
```typescript
import { createOpencode } from "@opencode-ai/sdk"

const { client, server } = await createOpencode()

// Create a session
const session = await client.session.create()

// Send a prompt and get a response
const response = await client.session.prompt({
  path: { id: session.data.id },
  body: {
    parts: [{ type: "text", text: "Write a hello world function" }],
  },
})

// Subscribe to events
const events = await client.event.subscribe()
for await (const event of events.stream) {
  console.log(event)
}

// Cleanup
server.close()
```
### Option 2: Build Your Own Server
If you want to build your own implementation, here's the minimal stack:
```typescript
import { Hono } from "hono"
import { streamText } from "ai"
import { createAnthropic } from "@ai-sdk/anthropic"

const app = new Hono()
const sessions = new Map()

// Session management
app.post("/session", async (c) => {
  const id = crypto.randomUUID()
  sessions.set(id, { messages: [], tools: {} })
  return c.json({ id })
})

// Prompt endpoint
app.post("/session/:id/message", async (c) => {
  const session = sessions.get(c.req.param("id"))
  const { text } = await c.req.json()

  // Add user message
  session.messages.push({ role: "user", content: text })

  // Stream LLM response (streamText returns synchronously; await the text)
  const result = streamText({
    model: createAnthropic()("claude-sonnet-4-20250514"),
    messages: session.messages,
    tools: session.tools,
    system: "You are a coding assistant...",
  })

  // Collect and store the assistant response
  const response = await result.text
  session.messages.push({ role: "assistant", content: response })

  return c.json({ response })
})

export default app
```
### Key Components You'd Need to Implement

| Component | OpenCode Implementation | Purpose |
|---|---|---|
| Session storage | `packages/opencode/src/session/index.ts` | Persist conversation history |
| Message handling | `packages/opencode/src/session/message-v2.ts` | Structure messages with parts (text, files, tool calls) |
| Tool system | `packages/opencode/src/tool/` | File editing, shell, search, etc. |
| Provider abstraction | `packages/opencode/src/provider/provider.ts` | Support multiple LLM providers |
| System prompts | `packages/opencode/src/session/system.ts` | Provider-specific system prompts |
| Event bus | `packages/opencode/src/bus/` | Real-time event distribution |
| Permission system | `packages/opencode/src/permission/` | Tool approval workflow |
### The Key Insight

The SDK is essentially a typed HTTP client for the OpenCode server. The real "magic" is in `packages/opencode`:
- The Vercel AI SDK (`streamText`, provider SDKs) handles LLM communication
- Hono handles HTTP routing
- The session/prompt system manages the agentic loop (tool calls → execution → continue)
If you want to understand the agentic loop, look at `SessionPrompt.loop()` in `packages/opencode/src/session/prompt.ts` — that's where it iterates until the model stops requesting tool calls.
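That loop can be sketched in miniature. This is not the real `SessionPrompt.loop()`, just the shape of it: a caller-supplied model function (standing in for `LLM.stream()`) is called repeatedly, and any tool calls it requests are executed and fed back in until it stops asking.

```typescript
// Hypothetical, simplified message/turn shapes for illustration only.
type ToolCall = { name: string; args: Record<string, unknown> }
type ModelTurn = { text: string; toolCalls: ToolCall[] }
type Message = { role: "user" | "assistant" | "tool"; content: string }

async function agenticLoop(
  callModel: (messages: Message[]) => Promise<ModelTurn>,
  tools: Record<string, (args: Record<string, unknown>) => Promise<string>>,
  messages: Message[],
): Promise<Message[]> {
  while (true) {
    const turn = await callModel(messages)
    messages.push({ role: "assistant", content: turn.text })
    if (turn.toolCalls.length === 0) break // model is done; no more tools requested
    // Execute each requested tool and append its result for the next turn
    for (const call of turn.toolCalls) {
      const result = await tools[call.name](call.args)
      messages.push({ role: "tool", content: result })
    }
  }
  return messages
}
```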
in this SDK: the "client" isn't actually meant to be used in the browser, right? it'd be used, say, from my own API layer?
Yes, you're correct. The SDK is server-side only:
### Evidence

- `server.ts` imports `spawn` from `node:child_process` — this is Node.js/Bun only, not available in browsers.
- The client itself (`client.gen.ts`) uses `globalThis.fetch`, which could work in a browser, but...
- The intended usage is to spawn the `opencode` CLI binary: `const proc = spawn("opencode", args, { ... })`. This requires the `opencode` binary to be installed and accessible — obviously not possible in a browser.
- The default `baseUrl` is `http://localhost:4096` — connecting to a local server.
### Typical Usage Pattern

```
┌─────────────────────┐
│   Your Frontend     │  (browser)
│   (React, etc.)     │
└──────────┬──────────┘
           │ HTTP/WebSocket
           ▼
┌─────────────────────┐
│   Your API Layer    │  (Node.js/Bun server)
│   (Next.js API,     │
│    Hono, Express)   │
│                     │
│  ┌───────────────┐  │
│  │ @opencode-ai/ │  │  ← SDK lives here
│  │     sdk       │  │
│  └───────┬───────┘  │
│          │          │
│          ▼          │
│  ┌───────────────┐  │
│  │   opencode    │  │  ← spawned server process
│  │    server     │  │
│  └───────────────┘  │
└─────────────────────┘
```
So yes — you'd use the SDK from your own backend/API layer, not directly from the browser. The browser would talk to your API, which then uses the SDK to interact with the opencode server.
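A sketch of what that API layer might look like, written framework-agnostically against the standard `Request`/`Response` types. The `prompt` parameter is a hypothetical wrapper around `client.session.prompt()`; injecting it keeps the handler testable without a running opencode server.

```typescript
// Hypothetical backend handler. In a real app, `prompt` would wrap
// client.session.prompt() from @opencode-ai/sdk.
type PromptFn = (text: string) => Promise<string>

function makePromptHandler(prompt: PromptFn) {
  return async (req: Request): Promise<Response> => {
    const { text } = (await req.json()) as { text: string }
    const answer = await prompt(text) // forwards to the opencode server
    return Response.json({ answer })
  }
}
```

The browser only ever talks to this handler; the SDK and the spawned `opencode` process stay on the server side.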
how do i call/query/generate an LLM response
Now I have the full picture. Here's how to call/query/generate an LLM response:
## How to Generate an LLM Response
### Basic Pattern
```typescript
import { createOpencodeClient, createOpencodeServer } from "@opencode-ai/sdk"

// 1. Start the server (spawns `opencode serve`)
const server = await createOpencodeServer()
const client = createOpencodeClient({ baseUrl: server.url })

// 2. Create a session
const session = await client.session.create()

// 3. Send a prompt (blocks until complete)
const response = await client.session.prompt({
  path: { id: session.data.id },
  body: {
    parts: [
      { type: "text", text: "Write a hello world function in TypeScript" },
    ],
  },
})

// response.data contains the final assistant message
console.log(response.data)
```
### Streaming Pattern (Real-time Updates)
The SDK uses Server-Sent Events (SSE) to stream updates. You subscribe to the event stream and filter for relevant events:
```typescript
import { createOpencodeClient, createOpencodeServer } from "@opencode-ai/sdk"

const server = await createOpencodeServer()
const client = createOpencodeClient({ baseUrl: server.url })

// Subscribe to events BEFORE sending the prompt
const events = await client.event.subscribe()

// Create session and send prompt (async - don't await)
const session = await client.session.create()
const sessionId = session.data.id

// Fire and forget - we'll get updates via SSE
client.session.promptAsync({
  path: { id: sessionId },
  body: {
    parts: [{ type: "text", text: "Explain how async/await works" }],
  },
})

// Stream events
for await (const event of events.stream) {
  // Filter for this session's events
  if (event.type === "message.part.updated") {
    const { part, delta } = event.properties
    if (part.sessionID === sessionId) {
      if (part.type === "text" && delta) {
        // Stream text chunks as they arrive
        process.stdout.write(delta)
      }
    }
  }
  if (event.type === "session.idle") {
    // Session finished processing
    if (event.properties.sessionID === sessionId) {
      break
    }
  }
}
```
### Key Event Types

| Event | Purpose |
|---|---|
| `message.part.updated` | Streaming text chunks, tool calls, etc. |
| `message.updated` | Full message metadata (tokens, cost, finish reason) |
| `session.idle` | Session finished processing |
| `session.error` | Error occurred |
| `permission.updated` | Tool needs approval |
### Part Types in Responses

The `part` in `message.part.updated` can be:

| Type | Content |
|---|---|
| `text` | LLM text output (with `delta` for streaming) |
| `reasoning` | Chain-of-thought reasoning |
| `tool-invocation` | Tool being called |
| `tool-result` | Tool execution result |
| `file` | File reference |
| `snapshot` | Code state snapshot |
| `patch` | File diff/patch |
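When consuming these parts it helps to narrow on `type`. A sketch with simplified, assumed field names — the authoritative shapes are the generated ones in `src/gen/types.gen.ts`:

```typescript
// Simplified Part union; field names here are illustrative assumptions,
// not the generated types.
type Part =
  | { type: "text"; text: string; delta?: string }
  | { type: "reasoning"; text: string }
  | { type: "tool-invocation"; tool: string }
  | { type: "tool-result"; output: string }

function renderPart(part: Part): string {
  switch (part.type) {
    case "text":
      return part.delta ?? part.text // prefer the streaming delta when present
    case "reasoning":
      return `[thinking] ${part.text}`
    case "tool-invocation":
      return `[tool] ${part.tool}`
    case "tool-result":
      return `[result] ${part.output}`
  }
}
```

Because the union is discriminated on `type`, TypeScript narrows each case for you and flags any part type the switch forgets to handle.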
### Specifying Model/Agent

```typescript
await client.session.prompt({
  path: { id: sessionId },
  body: {
    // Override model for this prompt
    model: {
      providerID: "anthropic",
      modelID: "claude-sonnet-4-20250514",
    },
    // Use a specific agent
    agent: "code",
    parts: [{ type: "text", text: "Your prompt here" }],
  },
})
```
### Including Files

```typescript
await client.session.prompt({
  path: { id: sessionId },
  body: {
    parts: [
      {
        type: "file",
        url: "file:///path/to/file.ts",
        mime: "text/plain",
      },
      {
        type: "text",
        text: "Refactor this file to use async/await",
      },
    ],
  },
})
```
### Summary

- `client.session.prompt()`: Blocks until the full response is ready
- `client.session.promptAsync()`: Returns immediately; use SSE to stream
- `client.event.subscribe()`: SSE stream of all events (filter by sessionID)
The "magic" is that opencode handles the full agentic loop internally — tool calls, file edits, retries, etc. You just send a prompt and receive the final result (or stream intermediate updates).