How much can I customize LLM calls in createDeepAgent()?


how much can i customize from the underlying llm calls from the createDeepAgent() ? meaning, stuff like maxSteps (per turn), temperature, etc?

Claude Opus 4.6
commented

Here's the full answer:


## Customization of underlying LLM calls in createDeepAgent()

There are two layers where you can customize things: at agent creation time (via createDeepAgent() params) and at invocation time (via .invoke() / .stream() config). Here's the breakdown:

### 1. temperature, topP, maxTokens, etc. — via the model param

createDeepAgent() does not expose these as top-level parameters. Instead, you control them by passing a pre-configured model instance instead of a string:

```ts
// ❌ No temperature param on createDeepAgent
createDeepAgent({ model: "claude-sonnet-4-5-20250929", temperature: 0 });

// ✅ Pass a model instance with your config
import { ChatAnthropic } from "@langchain/anthropic";

const agent = createDeepAgent({
  model: new ChatAnthropic({
    model: "claude-sonnet-4-20250514",
    temperature: 0,
    // maxTokens, topP, topK, etc. — whatever the provider supports
  }),
});
```

This is the pattern used in every example in the repo (examples/research/research-agent.ts:200-204, examples/backends/*.ts, examples/sandbox/*.ts, etc.). When you pass a string like "claude-sonnet-4-5-20250929" (the default), you get the provider's defaults with no customization.

### 2. maxSteps / step limits per turn

There is no maxSteps parameter anywhere in deepagentsjs or the underlying createAgent(). The agent loop is controlled by LangGraph's recursionLimit, which you set at invocation time:

```ts
// Default: createDeepAgent() hardcodes recursionLimit: 10,000
// (see agent.ts line 307: `.withConfig({ recursionLimit: 10_000 })`)

// Override at invocation time:
await agent.invoke(
  { messages: [new HumanMessage("...")] },
  { recursionLimit: 50 } // override the default 10k
);
```

From agent.ts:307:

```ts
const agent = createAgent({
  model,
  systemPrompt: finalSystemPrompt,
  tools: tools as StructuredTool[],
  // ...
}).withConfig({ recursionLimit: 10_000 });
```

The full set of options you can pass at invocation time (from the underlying LangGraph PregelOptions) includes:

| Option | Description |
| --- | --- |
| recursionLimit | Max graph steps (default 10,000 set by deepagentsjs) |
| maxConcurrency | Max parallel operations |
| timeout | Execution timeout |
| signal | AbortSignal for cancellation |
| configurable | Arbitrary config (e.g., thread_id) |
| callbacks | LangChain callback handlers |
| durability | Persistence durability setting |
| store | Override the store |
| cache | Cache configuration |
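For illustration, here is a minimal sketch of an invocation-time config object combining a few of these options. The values are placeholders, and `thread_id` only takes effect if you configured a checkpointer when creating the agent:

```typescript
const controller = new AbortController();

// Invocation-time config (a subset of PregelOptions) that you would pass
// as the second argument to agent.invoke() / agent.stream():
const runConfig = {
  recursionLimit: 100,                     // cap graph steps for this turn
  signal: controller.signal,               // cancel the run via controller.abort()
  configurable: { thread_id: "thread-1" }, // persistence thread (needs a checkpointer)
};

// await agent.invoke({ messages: [new HumanMessage("...")] }, runConfig);
```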

### 3. What createDeepAgent() exposes directly

Looking at CreateDeepAgentParams (lines 301-389):

| Param | What it controls |
| --- | --- |
| model | LLM model — string or instance (this is where you set temperature, etc.) |
| tools | Tools available to the agent |
| systemPrompt | System prompt (string or SystemMessage) |
| middleware | Custom middleware array |
| subagents | Sub-agent definitions |
| responseFormat | Structured output schema |
| contextSchema | Per-invocation context schema |
| checkpointer | State persistence |
| store | Long-term memory store |
| backend | Filesystem backend |
| interruptOn | Human-in-the-loop config |
| name | Agent name |
| memory | Memory file paths (AGENTS.md) |
| skills | Skill source paths |
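As a hypothetical sketch (names and values are placeholders, not from the repo), a few of these params combined in a plain object, which you would pass to createDeepAgent():

```typescript
// Hypothetical params object mirroring a subset of CreateDeepAgentParams.
// A model *instance* (see section 1) is what lets you set temperature,
// maxTokens, etc.; the string form gets provider defaults.
const params = {
  model: "claude-sonnet-4-5-20250929", // or new ChatAnthropic({ temperature: 0, ... })
  systemPrompt: "You are a careful research assistant.", // placeholder prompt
  name: "research-agent",                                // placeholder name
};

// const agent = createDeepAgent(params);
```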

### 4. Things that are hardcoded and not customizable

Several built-in middleware configs are hardcoded inside createDeepAgent() and cannot be overridden:

- Summarization trigger: 170,000 tokens, keeping 6 messages (agent.ts:269-272)
- Prompt caching: Anthropic caching enabled with unsupportedModelBehavior: "ignore"
- Recursion limit: 10,000 (though overridable at invocation time)
- Built-in middleware stack: todoList, filesystem, subagents, summarization, prompt caching, tool call patching — always included

### TL;DR

- Temperature/model params: Pass a model instance (new ChatAnthropic({ temperature: 0 })) instead of a string
- Max steps per turn: Use recursionLimit at invocation time (default is 10k)
- No maxSteps: This concept doesn't exist in the codebase; recursionLimit is the equivalent
- Summarization/caching configs: Hardcoded, not exposed for customization through createDeepAgent()


How much can I customize LLM calls in createDeepAgent()? — langchain-ai/deepagentsjs