Can I send system role messages mid-conversation?
Can I send "system" role messages in the middle of a conversation using the AI SDK? Does that make sense?
Now I have a complete picture. Let me summarize the findings for you:
## Summary: Can you send "system" role messages in the middle of a conversation?
Short answer: The AI SDK allows it at the type level, but it's not recommended and may cause issues with some providers.
### Here's the detailed breakdown
#### 1. Type-level: system messages ARE allowed in the messages array
Looking at `packages/ai/src/prompt/message.ts` and `packages/provider-utils/src/types/model-message.ts`, the `ModelMessage` type is defined as:

```ts
export type ModelMessage =
  | SystemModelMessage
  | UserModelMessage
  | AssistantModelMessage
  | ToolModelMessage;
```
So you can include a message with `role: 'system'` anywhere in your `messages` array.
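As a quick illustration, here is a simplified sketch of that union (the real SDK type allows richer structured content parts, not just strings) showing that a mid-conversation system message type-checks:

```typescript
// Simplified sketch of the SDK's ModelMessage union; the real type
// also accepts structured content parts and tool messages.
type ModelMessage =
  | { role: 'system'; content: string }
  | { role: 'user'; content: string }
  | { role: 'assistant'; content: string };

// A system message in the middle of the array is valid at the type level.
const messages: ModelMessage[] = [
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi there!' },
  { role: 'system', content: 'From now on, answer in French.' },
  { role: 'user', content: 'How are you?' },
];

// Index of the mid-conversation system message.
export const systemIndex = messages.findIndex(m => m.role === 'system');
```

The compiler accepts this, which is exactly why the question arises: the type system permits a pattern that some providers will reject at runtime.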
#### 2. However, there's a strong warning in the source code
From `packages/provider-utils/src/types/system-model-message.ts`:

```ts
/**
 * A system message. It can contain system information.
 *
 * Note: using the "system" part of the prompt is strongly preferred
 * to increase the resilience against prompt injection attacks,
 * and because not all providers support several system messages.
 */
```
#### 3. Different providers handle this differently
- **OpenAI**: Simply passes system messages through wherever they appear (`packages/openai/src/chat/convert-to-openai-chat-messages.ts`); the OpenAI API allows system messages at various positions.
- **Anthropic**: Throws an error if you have multiple system messages separated by user/assistant messages (`packages/anthropic/src/convert-to-anthropic-messages-prompt.ts`):

  ```ts
  if (system != null) {
    throw new UnsupportedFunctionalityError({
      functionality:
        'Multiple system messages that are separated by user/assistant messages',
    });
  }
  ```

- **Amazon Bedrock**: Similar restriction; combines leading system messages but throws for separated ones (`packages/amazon-bedrock/src/convert-to-bedrock-chat-messages.ts`).
- **Google (Gemma)**: Combines system messages into the first user message for compatibility.
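If you already have conversation history containing scattered system messages, one way to stay compatible across all of these providers is to hoist them into a single leading system string yourself before calling the SDK. The helper below is a hypothetical pre-processing sketch, not an AI SDK function:

```typescript
type Role = 'system' | 'user' | 'assistant';
interface Message {
  role: Role;
  content: string;
}

// Hypothetical helper (not part of the AI SDK): collect every system
// message into one leading system string, so providers like Anthropic,
// which reject system messages separated by user/assistant turns,
// still accept the conversation.
function hoistSystemMessages(messages: Message[]): {
  system: string | undefined;
  messages: Message[];
} {
  const systemParts = messages
    .filter(m => m.role === 'system')
    .map(m => m.content);
  return {
    system: systemParts.length > 0 ? systemParts.join('\n\n') : undefined,
    messages: messages.filter(m => m.role !== 'system'),
  };
}

// The scattered system messages are merged; only user/assistant turns remain.
const { system, messages: rest } = hoistSystemMessages([
  { role: 'system', content: 'Be concise.' },
  { role: 'user', content: 'Hello!' },
  { role: 'system', content: 'Answer in French.' },
  { role: 'user', content: 'How are you?' },
]);
```

The resulting `system` string can then be passed to the SDK's dedicated `system` parameter, described next. Note that merging does lose the *position* of each instruction in the conversation, which may matter for some use cases.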
#### 4. The recommended approach: use the `system` parameter
The AI SDK provides a separate `system` parameter in `generateText()`, `streamText()`, etc.:

```ts
const result = await generateText({
  model: openai('gpt-4'),
  system: 'You are a helpful assistant.', // ← Use this instead
  messages: [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: 'Hi there!' },
    { role: 'user', content: 'How are you?' },
  ],
});
```
This ensures the system message is always at the start and is handled correctly by all providers.
### Does it make sense semantically?
Some use cases where mid-conversation system messages might make sense:
- Dynamically injecting context or instructions as the conversation evolves
- Simulating "internal thoughts" or changing agent behavior mid-conversation
However, not all LLM APIs support this pattern, which is why the AI SDK recommends using the dedicated system parameter for maximum compatibility.
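If what you actually need is evolving instructions, a compatibility-friendly pattern (a sketch of an application-level convention, not an SDK feature) is to rebuild the `system` string from your current session state before each call, rather than appending system messages into the history:

```typescript
// Hypothetical session state; the fields are illustrative assumptions.
interface SessionState {
  userName: string;
  locale: string;
}

// Recompute the system prompt from the latest state on every call.
// The conversation history stays free of system messages, so any
// provider accepts it unchanged.
function buildSystemPrompt(state: SessionState): string {
  return [
    'You are a helpful assistant.',
    `The user's name is ${state.userName}.`,
    `Respond using the ${state.locale} locale.`,
  ].join(' ');
}

const prompt = buildSystemPrompt({ userName: 'Ada', locale: 'fr-FR' });
```

Each turn then passes the freshly built string as the `system` parameter, which gives you dynamic instructions while keeping the single-leading-system-message shape every provider supports.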