How much can I customize LLM calls in createDeepAgent()?
How much can I customize the underlying LLM calls made by createDeepAgent()? Meaning things like maxSteps (per turn), temperature, etc.?
## Customization of underlying LLM calls in createDeepAgent()
There are two layers where you can customize things: at agent creation time (via createDeepAgent() params) and at invocation time (via .invoke() / .stream() config). Here's the breakdown:
### 1. temperature, topP, maxTokens, etc. — via the model param
createDeepAgent() does not expose these as top-level parameters. Instead, you control them by passing a pre-configured model instance rather than a model string:
```ts
// ❌ No temperature param on createDeepAgent
createDeepAgent({ model: "claude-sonnet-4-5-20250929", temperature: 0 });

// ✅ Pass a model instance with your config
import { ChatAnthropic } from "@langchain/anthropic";

const agent = createDeepAgent({
  model: new ChatAnthropic({
    model: "claude-sonnet-4-20250514",
    temperature: 0,
    // maxTokens, topP, topK, etc. — whatever the provider supports
  }),
});
```
This is the pattern used in every example in the repo (examples/research/research-agent.ts:200-204, examples/backends/*.ts, examples/sandbox/*.ts, etc.). When you pass a string like "claude-sonnet-4-5-20250929" (the default), you get the provider's defaults with no customization.
### 2. maxSteps / step limits per turn
There is no maxSteps parameter anywhere in deepagentsjs or the underlying createAgent(). The agent loop is controlled by LangGraph's recursionLimit, which you set at invocation time:
```ts
// Default: createDeepAgent() hardcodes recursionLimit: 10,000
// (see agent.ts line 307: `.withConfig({ recursionLimit: 10_000 })`)

// Override at invocation time:
await agent.invoke(
  { messages: [new HumanMessage("...")] },
  { recursionLimit: 50 } // override the default 10k
);
```
From agent.ts:307:
```ts
const agent = createAgent({
  model,
  systemPrompt: finalSystemPrompt,
  tools: tools as StructuredTool[],
  // ...
}).withConfig({ recursionLimit: 10_000 });
```
The full set of options you can pass at invocation time (from the underlying LangGraph PregelOptions) includes:
| Option | Description |
|---|---|
| `recursionLimit` | Max graph steps (default 10,000 set by deepagentsjs) |
| `maxConcurrency` | Max parallel operations |
| `timeout` | Execution timeout |
| `signal` | AbortSignal for cancellation |
| `configurable` | Arbitrary config (e.g., `thread_id`) |
| `callbacks` | LangChain callback handlers |
| `durability` | Persistence durability setting |
| `store` | Override the store |
| `cache` | Cache configuration |
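To make the table concrete, here is a sketch of one `.invoke()` call combining several of these options. It assumes the `agent` and `HumanMessage` import from the earlier examples; the specific values (limit of 100, concurrency of 4, the `thread_id` string) are illustrative, not recommendations.

```ts
// Sketch: several invocation-time options on a single call.
const controller = new AbortController();

const result = await agent.invoke(
  { messages: [new HumanMessage("Summarize the repo README")] },
  {
    recursionLimit: 100,       // cap graph steps well below the 10k default
    maxConcurrency: 4,         // limit parallel operations
    signal: controller.signal, // cancel later via controller.abort()
    configurable: { thread_id: "session-1" }, // meaningful with a checkpointer
  }
);
```

Note that `configurable.thread_id` only has an effect if you passed a `checkpointer` to createDeepAgent(); without one, there is no thread state to resume.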
### 3. What createDeepAgent() exposes directly
Looking at CreateDeepAgentParams (lines 301-389):
| Param | What it controls |
|---|---|
| `model` | LLM model — string or instance (this is where you set temperature, etc.) |
| `tools` | Tools available to the agent |
| `systemPrompt` | System prompt (string or SystemMessage) |
| `middleware` | Custom middleware array |
| `subagents` | Sub-agent definitions |
| `responseFormat` | Structured output schema |
| `contextSchema` | Per-invocation context schema |
| `checkpointer` | State persistence |
| `store` | Long-term memory store |
| `backend` | Filesystem backend |
| `interruptOn` | Human-in-the-loop config |
| `name` | Agent name |
| `memory` | Memory file paths (AGENTS.md) |
| `skills` | Skill source paths |
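Putting a few of these together, a creation-time sketch might look like the following. The params come from the table above; the import paths for `createDeepAgent` and `MemorySaver`, and all values, are assumptions for illustration.

```ts
// Sketch: wiring several CreateDeepAgentParams together (values illustrative).
import { ChatAnthropic } from "@langchain/anthropic";
import { MemorySaver } from "@langchain/langgraph"; // assumed checkpointer impl
import { createDeepAgent } from "deepagents"; // assumed package name

const agent = createDeepAgent({
  model: new ChatAnthropic({ model: "claude-sonnet-4-20250514", temperature: 0 }),
  systemPrompt: "You are a careful research assistant.",
  tools: [],                       // your StructuredTool instances go here
  checkpointer: new MemorySaver(), // enables thread-scoped persistence
  name: "research-agent",
});
```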
### 4. Things that are hardcoded and not customizable
Several built-in middleware configs are hardcoded inside createDeepAgent() and cannot be overridden:
- Summarization trigger: 170,000 tokens, keeping 6 messages (agent.ts:269-272)
- Prompt caching: Anthropic caching enabled with `unsupportedModelBehavior: "ignore"`
- Recursion limit: 10,000 (though overridable at invocation time)
- Built-in middleware stack: todoList, filesystem, subagents, summarization, prompt caching, and tool call patching are always included
### TL;DR
- Temperature/model params: pass a model instance (`new ChatAnthropic({ temperature: 0 })`) instead of a string
- Max steps per turn: use `recursionLimit` at invocation time (default is 10k)
- No `maxSteps`: this concept doesn't exist in the codebase; `recursionLimit` is the equivalent
- Summarization/caching configs: hardcoded, not exposed for customization through `createDeepAgent()`