How does OpenCode handle system prompts?

dev
d51089b docs(web): add KDCO plugins to ecosystem (#7694)
This post might have stale content, as dev is 1389 commits ahead.

How does OpenCode handle system prompts? What instructions are appended to the user's prompt?

Claude Haiku 4.5
commented

Perfect! Now I have a comprehensive understanding. Let me create a detailed summary for the user.

## Summary: How OpenCode Handles System Prompts

OpenCode assembles system prompts through a layered approach, combining multiple sources of instructions. Here's the breakdown:

### System Prompt Assembly (in packages/opencode/src/session/llm.ts:68-81)

The system prompt is built in the following order:

  1. Header - Provider-specific prefixes:

    • For Anthropic providers: A short spoof message ("You are Claude Code, Anthropic's official CLI for Claude")
    • For other providers: Empty by default
  2. Agent or Provider Instructions - Core behavioral guidelines:

    • If the agent has a custom prompt field, that takes precedence
    • Otherwise, provider-specific prompts are used:
      • OpenAI (GPT-5, GPT-*, o1, o3): beast.txt - Emphasizes autonomous problem-solving, extensive research, step-by-step thinking, iterative verification
      • Claude: anthropic.txt - Focuses on task management, web fetching, detailed planning with todos
      • Gemini: gemini.txt - Emphasizes core mandates, conventions, security, and verification workflows
      • Other models (Qwen, etc.): qwen.txt - Concise, direct instructions with security warnings
  3. Custom Prompts from Call - Ad-hoc instructions passed into the LLM stream call

    • Added via the system parameter to LLM.stream()
  4. User Message Prompts - Per-message custom instructions:

    • Added via input.user.system field
  5. Environment Context (in packages/opencode/src/session/prompt.ts:599):

    • Working directory
    • Git repository status
    • Platform info
    • Current date
  6. Custom Instructions (in packages/opencode/src/session/system.ts:92-137):

    • Loaded from local files (AGENTS.md, CLAUDE.md, CONTEXT.md)
    • Loaded from global config (~/.config/opencode/AGENTS.md or OPENCODE_CONFIG_DIR)
    • Fetched from URLs specified in config
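The six-layer assembly above can be sketched as follows. This is an illustrative reconstruction, not the actual code in llm.ts: names like buildSystemPrompt, PromptInput, and the placeholder prompt-file contents are assumptions, and only the layering order is taken from the post.

```typescript
// Hypothetical sketch of the layered system-prompt assembly described above.
// The ordering (header → agent/provider prompt → call prompts → per-message
// prompts → environment context → custom instructions) follows the post;
// all type and function names are illustrative.

type ProviderID = "anthropic" | "openai" | "google" | "other";

interface PromptInput {
  provider: ProviderID;
  agentPrompt?: string;         // agent's custom `prompt` field, if any
  callSystem?: string[];        // ad-hoc instructions passed to LLM.stream()
  userSystem?: string[];        // per-message `input.user.system` entries
  customInstructions: string[]; // AGENTS.md, CLAUDE.md, global config, URLs
}

function providerPrompt(provider: ProviderID): string {
  // Each provider family gets its own base prompt file.
  switch (provider) {
    case "openai":    return "<contents of beast.txt>";
    case "anthropic": return "<contents of anthropic.txt>";
    case "google":    return "<contents of gemini.txt>";
    default:          return "<contents of qwen.txt>";
  }
}

function environmentContext(): string {
  // Layer 5: working directory, platform, date (git status omitted here).
  return [
    `Working directory: ${process.cwd()}`,
    `Platform: ${process.platform}`,
    `Date: ${new Date().toDateString()}`,
  ].join("\n");
}

function buildSystemPrompt(input: PromptInput): string[] {
  const parts: string[] = [];
  // 1. Header: Anthropic providers get the Claude Code spoof line.
  if (input.provider === "anthropic") {
    parts.push("You are Claude Code, Anthropic's official CLI for Claude.");
  }
  // 2. Agent prompt takes precedence over the provider default.
  parts.push(input.agentPrompt ?? providerPrompt(input.provider));
  // 3-4. Ad-hoc call instructions, then per-message instructions.
  parts.push(...(input.callSystem ?? []));
  parts.push(...(input.userSystem ?? []));
  // 5. Environment context.
  parts.push(environmentContext());
  // 6. Custom instructions from local/global files and configured URLs.
  parts.push(...input.customInstructions);
  return parts;
}
```

Keeping the result as an array of parts, rather than one concatenated string, mirrors how later layers (and plugin hooks) can append or rewrite individual segments without reparsing the whole prompt.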

### Special Cases

  • Codex (GitHub Copilot) Sessions: Use options.instructions = SystemPrompt.instructions() (from codex_header.txt) instead of the system message, because Codex uses a different interface
  • Plugin Hooks: System prompts can be modified via experimental.chat.system.transform plugin hook before being sent to the model
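A transform hook of this kind might look like the sketch below. The hook path experimental.chat.system.transform comes from the post; the input shape and helper names are assumptions for illustration.

```typescript
// Sketch of a system-prompt transform hook: each plugin receives the
// assembled prompt parts and may return a rewritten set before the model
// call. The TransformInput shape and applyTransforms helper are assumed.

interface TransformInput {
  system: string[]; // assembled system prompt parts
}

type SystemTransform = (input: TransformInput) => TransformInput;

// Example plugin: appends an extra rule to the end of the system prompt.
const addHouseRules: SystemTransform = (input) => ({
  system: [...input.system, "Always answer in English."],
});

// Run every registered hook in order, threading the parts through each one.
function applyTransforms(parts: string[], hooks: SystemTransform[]): string[] {
  return hooks.reduce((acc, hook) => hook({ system: acc }).system, parts);
}
```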

### Key Prompt Files

| File | Used For | Key Characteristics |
| --- | --- | --- |
| codex_header.txt | General OpenCode instructions | Editing constraints, tool usage, git hygiene, frontend design guidelines |
| beast.txt | OpenAI models | Autonomous problem-solving, extensive iteration, webfetch requirements, detailed planning |
| anthropic.txt | Claude models | Task management with todos, web research, professional objectivity |
| gemini.txt | Google Gemini | Core mandates (conventions, libraries), security rules, verification workflows |
| qwen.txt | Qwen & other models | Concise, security-focused, malware prevention, minimal output |
| anthropic_spoof.txt | Anthropic spoofing | Minimal header identifying as "Claude Code" |

The prompts are composed in layers to keep behavior consistent while allowing model-specific optimizations and user customization at multiple levels.
