How does OpenCode handle system prompts? What instructions are appended to the user's prompt?
## Summary: How OpenCode Handles System Prompts
OpenCode assembles system prompts through a layered approach, combining multiple sources of instructions. Here's the breakdown:
### System Prompt Assembly (in `packages/opencode/src/session/llm.ts:68-81`)
The system prompt is built in the following order:

1. **Header** - Provider-specific prefixes:
   - For Anthropic providers: a short spoof message ("You are Claude Code, Anthropic's official CLI for Claude")
   - For other providers: empty by default
2. **Agent or Provider Instructions** - Core behavioral guidelines:
   - If the agent has a custom `prompt` field, that takes precedence
   - Otherwise, provider-specific prompts are used:
     - OpenAI (GPT-5, GPT-*, o1, o3): `beast.txt` - emphasizes autonomous problem-solving, extensive research, step-by-step thinking, iterative verification
     - Claude: `anthropic.txt` - focuses on task management, web fetching, detailed planning with todos
     - Gemini: `gemini.txt` - emphasizes core mandates, conventions, security, and verification workflows
     - Other models (Qwen, etc.): `qwen.txt` - concise, direct instructions with security warnings
3. **Custom Prompts from Call** - Ad-hoc instructions passed into the LLM stream call via the `system` parameter to `LLM.stream()`
4. **User Message Prompts** - Per-message custom instructions, added via the `input.user.system` field
5. **Environment Context** (in `packages/opencode/src/session/prompt.ts:599`):
   - Working directory
   - Git repository status
   - Platform info
   - Current date
6. **Custom Instructions** (in `packages/opencode/src/session/system.ts:92-137`):
   - Loaded from local files (`AGENTS.md`, `CLAUDE.md`, `CONTEXT.md`)
   - Loaded from global config (`~/.config/opencode/AGENTS.md` or `OPENCODE_CONFIG_DIR`)
   - Fetched from URLs specified in config
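The layering above can be sketched in a few lines of TypeScript. This is an illustrative reconstruction, not OpenCode's actual code: the `PromptInput` shape, `providerPrompt`, and `assembleSystemPrompt` names are assumptions, and the placeholder strings stand in for the real prompt file contents.

```typescript
// Hypothetical sketch of the layered assembly described above; the real
// implementation lives in packages/opencode/src/session/llm.ts.
interface PromptInput {
  providerId: string;           // e.g. "anthropic", "openai", "google"
  agentPrompt?: string;         // agent's custom `prompt` field, if any
  callSystem?: string[];        // ad-hoc `system` parameter to LLM.stream()
  userSystem?: string[];        // per-message `input.user.system` field
  environment: string;          // working dir, git status, platform, date
  customInstructions: string[]; // AGENTS.md, CLAUDE.md, CONTEXT.md, URLs
}

function providerPrompt(providerId: string): string {
  // Provider-specific default instructions (placeholders for the files).
  switch (providerId) {
    case "openai": return "<contents of beast.txt>";
    case "anthropic": return "<contents of anthropic.txt>";
    case "google": return "<contents of gemini.txt>";
    default: return "<contents of qwen.txt>";
  }
}

function assembleSystemPrompt(input: PromptInput): string[] {
  const parts: string[] = [];
  // 1. Header: Anthropic providers get the spoof line; others get nothing.
  if (input.providerId === "anthropic") {
    parts.push("You are Claude Code, Anthropic's official CLI for Claude");
  }
  // 2. Agent prompt takes precedence over the provider default.
  parts.push(input.agentPrompt ?? providerPrompt(input.providerId));
  // 3. Ad-hoc prompts passed into the stream call.
  parts.push(...(input.callSystem ?? []));
  // 4. Per-message user prompts.
  parts.push(...(input.userSystem ?? []));
  // 5. Environment context.
  parts.push(input.environment);
  // 6. Custom instructions from local files, global config, and URLs.
  parts.push(...input.customInstructions);
  return parts;
}
```

Note the precedence rule in step 2: an agent's `prompt` field replaces the provider default entirely rather than being appended to it.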
### Special Cases
- **Codex (GitHub Copilot) Sessions**: Use `options.instructions = SystemPrompt.instructions()` (from `codex_header.txt`) instead of the system message, because Codex uses a different interface
- **Plugin Hooks**: System prompts can be modified via the `experimental.chat.system.transform` plugin hook before being sent to the model
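A plugin using that hook might look roughly like this. The hook name is from the source above, but the registration mechanism (`registerHook`, `applySystemTransform`) is an illustrative assumption about how such a hook could be wired up, not OpenCode's actual plugin API.

```typescript
// Illustrative sketch: a registry of transform hooks that can rewrite the
// assembled system prompt parts before they are sent to the model.
type SystemTransform = (parts: string[]) => string[];

const hooks: Record<string, SystemTransform> = {};

function registerHook(name: string, fn: SystemTransform): void {
  hooks[name] = fn;
}

function applySystemTransform(parts: string[]): string[] {
  const fn = hooks["experimental.chat.system.transform"];
  // If no plugin registered a transform, pass the parts through unchanged.
  return fn ? fn(parts) : parts;
}

// Example plugin: append a house style rule to every request.
registerHook("experimental.chat.system.transform", (parts) => [
  ...parts,
  "Always answer in British English.",
]);
```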
### Key Prompt Files
| File | Used For | Key Characteristics |
|---|---|---|
| `codex_header.txt` | General OpenCode instructions | Editing constraints, tool usage, git hygiene, frontend design guidelines |
| `beast.txt` | OpenAI models | Autonomous problem-solving, extensive iteration, webfetch requirements, detailed planning |
| `anthropic.txt` | Claude models | Task management with todos, web research, professional objectivity |
| `gemini.txt` | Google Gemini | Core mandates (conventions, libraries), security rules, verification workflows |
| `qwen.txt` | Qwen & other models | Concise, security-focused, malware prevention, minimal output |
| `anthropic_spoof.txt` | Anthropic spoofing | Minimal header identifying as "Claude Code" |
The prompts are composed in layers, which keeps behavior consistent across providers while still allowing model-specific optimizations and user customization at multiple levels.
END OF POST