Can deepagentsjs be deployed to Next.js + Vercel?
Can this be deployed to Next.js + Vercel?
Based on my thorough exploration of the codebase, here's a detailed answer:
## Short answer: Yes, but with important caveats.
deepagents is a TypeScript library built on LangGraph and LangChain, so it can be used in a Next.js project deployed to Vercel — but you need to be thoughtful about which backend you choose, because the library's architecture has components with varying runtime requirements.
### What works well on Vercel
The core createDeepAgent() function (libs/deepagents/src/agent.ts) is a pure TypeScript function that returns a LangGraph agent. The key dependencies (@langchain/core, @langchain/langgraph, langchain, zod) are all compatible with serverless environments.
Two backends are serverless-friendly:

- `StateBackend` (`libs/deepagents/src/backends/state.ts`) — Stores files in LangGraph agent state (ephemeral, in-memory). No filesystem access needed. This is the default backend when you don't specify one. From agent.ts, line ~143: `const filesystemBackend = backend ? backend : (config) => new StateBackend(config);`
- `StoreBackend` (`libs/deepagents/src/backends/store.ts`) — Uses LangGraph's `BaseStore` for persistent, cross-thread storage. Works with remote stores (e.g., LangGraph Cloud) without local filesystem access.
### What does NOT work on Vercel (without workarounds)
- `FilesystemBackend` (`libs/deepagents/src/backends/filesystem.ts`) — Directly uses `node:fs/promises`, `node:fs`, `node:path`, and `node:child_process` (for ripgrep-powered grep). Vercel serverless functions have a read-only filesystem and no persistent disk, so this backend won't work as intended.
- Sandbox providers (Deno, Modal, Daytona, node-vfs) — These are designed for code execution environments. The Deno and Modal sandbox providers communicate with external services via HTTP, so they could work from Vercel, but the node-vfs and local sandbox providers won't.
- Execution timeout — Vercel serverless functions have execution time limits (10s on Hobby, 60s on Pro, 900s on Enterprise). Deep agents with planning, sub-agents, and multi-step reasoning can easily exceed these limits. You'd need to either:
  - Use Vercel's streaming (`maxDuration` config + streaming responses)
  - Offload the agent execution to LangGraph Cloud or a long-running backend
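For the first option, Next.js exposes route segment config that Vercel reads at deploy time. A minimal sketch (the ceiling you can set depends on your plan):

```typescript
// app/api/agent/route.ts — route segment config read by Vercel at deploy time
export const maxDuration = 60;   // seconds the function may run; must fit your plan's limit
export const runtime = "nodejs"; // safer default: some agent dependencies assume Node.js APIs
```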
### Recommended approach for Next.js + Vercel
```typescript
// app/api/agent/route.ts (Next.js API route)
import { createDeepAgent } from "deepagents";

const agent = createDeepAgent({
  model: "claude-sonnet-4-5-20250929",
  // StateBackend is the default — no filesystem needed
  // Or use StoreBackend with a remote LangGraph store
  tools: [/* your tools */],
  systemPrompt: "Your prompt here",
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Stream the response to avoid timeout issues
  const stream = agent.streamEvents(
    { messages },
    { version: "v2", configurable: { thread_id: "some-id" } }
  );

  // Return streaming response
  // ...
}
```
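One way to fill in that last step — a hedged sketch, assuming the events arrive as an async iterable (the exact event shape from streamEvents varies, and the NDJSON framing below is an arbitrary choice, not deepagents' own protocol) — is to pipe them into a ReadableStream-backed Response:

```typescript
// Sketch: turn an async iterable of agent events into a streaming HTTP Response.
function eventsToResponse(events: AsyncIterable<unknown>): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream<Uint8Array>({
    async start(controller) {
      // Emit one newline-delimited JSON record per event as it arrives
      for await (const event of events) {
        controller.enqueue(encoder.encode(JSON.stringify(event) + "\n"));
      }
      controller.close();
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "application/x-ndjson" },
  });
}
```

Because bytes start flowing as soon as the first event arrives, Vercel keeps the connection open while the agent works instead of waiting for a complete payload.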
### Summary
| Component | Vercel Compatible? | Notes |
|---|---|---|
| createDeepAgent() core | ✅ Yes | Pure TypeScript/LangGraph |
| StateBackend (default) | ✅ Yes | In-memory, ephemeral |
| StoreBackend | ✅ Yes | With remote LangGraph store |
| FilesystemBackend | ❌ No | Needs node:fs, child_process |
| Sandbox providers (Deno/Modal) | ⚠️ Maybe | They call external APIs via HTTP |
| node-vfs sandbox | ❌ No | Needs Node.js VFS polyfill |
| Long-running agents | ⚠️ Careful | Use streaming + maxDuration |
The default configuration (no backend specified → StateBackend) should work out of the box in a Next.js API route on Vercel, as long as you handle streaming properly to stay within timeout limits.
How do sandbox providers work? Are they not required? Is the sandbox binding separate from the file system? Tell me more about that architecture
Now I have a thorough understanding of the entire architecture. Let me put this together.
Great questions — the architecture here is really cleanly layered. Let me break it down.
## The Two-Layer Architecture: Backends vs. Sandboxes
The system separates concerns into two distinct interfaces defined in libs/deepagents/src/backends/protocol.ts:
### Layer 1: BackendProtocol — The File System
This is the base interface that every backend must implement. It provides a virtual file system for the agent:
```typescript
interface BackendProtocol {
  lsInfo(path: string): MaybePromise<FileInfo[]>;
  read(filePath: string, offset?, limit?): MaybePromise<string>;
  readRaw(filePath: string): MaybePromise<FileData>;
  write(filePath: string, content: string): MaybePromise<WriteResult>;
  edit(filePath: string, oldStr, newStr, replaceAll?): MaybePromise<EditResult>;
  grepRaw(pattern, path?, glob?): MaybePromise<GrepMatch[] | string>;
  globInfo(pattern, path?): MaybePromise<FileInfo[]>;
  uploadFiles?(files): MaybePromise<FileUploadResponse[]>;     // optional
  downloadFiles?(paths): MaybePromise<FileDownloadResponse[]>; // optional
}
```
This powers the agent's ls, read_file, write_file, edit_file, grep, and glob tools. Every deep agent always has these tools — they come from the createFilesystemMiddleware in libs/deepagents/src/middleware/fs.ts.
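To make the contract concrete, here's a toy in-memory implementation of a few of these methods (illustrative only — the library's StateBackend stores files in LangGraph state, not a plain Map):

```typescript
// Toy backend: a Map standing in for the agent's virtual file system.
class TinyBackend {
  private files = new Map<string, string>();

  write(filePath: string, content: string): { created: boolean } {
    const created = !this.files.has(filePath);
    this.files.set(filePath, content);
    return { created };
  }

  // offset/limit mirror how a read_file tool pages through large files line by line
  read(filePath: string, offset = 0, limit = Infinity): string {
    const lines = (this.files.get(filePath) ?? "").split("\n");
    return lines.slice(offset, offset + limit).join("\n");
  }

  lsInfo(): string[] {
    return [...this.files.keys()].sort();
  }
}

const be = new TinyBackend();
be.write("/plan.md", "step 1\nstep 2\nstep 3");
console.log(be.read("/plan.md", 1, 1)); // "step 2"
```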
There are four implementations of BackendProtocol:
| Backend | Storage | Persistence | Where data lives |
|---|---|---|---|
| StateBackend | LangGraph state | Ephemeral (per-thread) | In-memory, checkpointed with agent state |
| StoreBackend | LangGraph BaseStore | Persistent (cross-thread) | Remote store / database |
| FilesystemBackend | Real disk | Persistent | Local node:fs |
| CompositeBackend | Routes by path prefix | Mixed | Delegates to other backends |
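CompositeBackend's "routes by path prefix" idea can be sketched like this (assumed routing logic — the library's actual implementation may differ, and the `/memories/` prefix is just an example):

```typescript
// Prefix router: dispatch reads to whichever backend owns the path prefix.
interface Readable { read(path: string): string }

class PrefixRouter implements Readable {
  constructor(
    private routes: Array<[prefix: string, backend: Readable]>,
    private fallback: Readable,
  ) {}

  read(path: string): string {
    for (const [prefix, backend] of this.routes) {
      if (path.startsWith(prefix)) return backend.read(path);
    }
    return this.fallback.read(path);
  }
}

// e.g. /memories/* → persistent store, everything else → ephemeral state
const persistent: Readable = { read: (p) => `store:${p}` };
const ephemeral: Readable = { read: (p) => `state:${p}` };
const router = new PrefixRouter([["/memories/", persistent]], ephemeral);

console.log(router.read("/memories/notes.md")); // "store:/memories/notes.md"
console.log(router.read("/tmp/scratch.txt"));   // "state:/tmp/scratch.txt"
```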
### Layer 2: SandboxBackendProtocol — Command Execution (Optional!)
This extends BackendProtocol with exactly one addition:
```typescript
interface SandboxBackendProtocol extends BackendProtocol {
  execute(command: string): MaybePromise<ExecuteResponse>;
  readonly id: string;
}
```
A sandbox is a backend that also lets the agent run shell commands. The key insight: sandboxes are not required. They are an opt-in capability.
## How the execute Tool Gets Conditionally Added
This is the clever part. Look at createFilesystemMiddleware in libs/deepagents/src/middleware/fs.ts (lines ~800-840). The execute tool is always registered in the tool list, but at runtime, the middleware's wrapModelCall hook checks the backend:
```typescript
wrapModelCall: async (request, handler) => {
  const resolvedBackend = getBackend(backend, stateAndStore);
  const supportsExecution = isSandboxBackend(resolvedBackend); // ← type guard

  // Filter tools based on backend capabilities
  let tools = request.tools;
  if (!supportsExecution) {
    tools = tools.filter((t) => t.name !== "execute"); // ← removed!
  }
  // ...
}
```
And the isSandboxBackend type guard (protocol.ts line ~267) is just a duck-type check:
```typescript
function isSandboxBackend(backend: BackendProtocol): backend is SandboxBackendProtocol {
  return typeof backend.execute === "function" && typeof backend.id === "string";
}
```
So if you use a StateBackend (the default), the agent gets 6 file tools. If you use a sandbox backend, it gets those 6 plus execute.
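The effect of that guard is easy to demonstrate in isolation. The types below are toy stand-ins; only the filtering pattern mirrors the library:

```typescript
interface Backend { read(path: string): string }
interface SandboxBackend extends Backend { execute(cmd: string): string; readonly id: string }

// Same duck-type check as the library's isSandboxBackend
function isSandboxBackend(b: Backend): b is SandboxBackend {
  const maybe = b as SandboxBackend;
  return typeof maybe.execute === "function" && typeof maybe.id === "string";
}

const allTools = ["ls", "read_file", "write_file", "edit_file", "grep", "glob", "execute"];

function toolsFor(backend: Backend): string[] {
  return isSandboxBackend(backend) ? allTools : allTools.filter((t) => t !== "execute");
}

const stateLike: Backend = { read: () => "" };
const sandboxLike: SandboxBackend = { read: () => "", execute: () => "", id: "sb-1" };

console.log(toolsFor(stateLike).length);   // 6 — execute filtered out
console.log(toolsFor(sandboxLike).length); // 7 — execute available
```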
## The BaseSandbox Abstract Class
All sandbox providers extend BaseSandbox (libs/deepagents/src/backends/sandbox.ts), an abstract class that implements the entire BackendProtocol by shelling out to Node.js via execute(). It builds inline Node.js scripts (as btoa-encoded strings) for every file operation — read, write, edit, ls, grep, glob — and runs them through whatever execute() the concrete class provides.
So a concrete sandbox provider only needs to implement three abstract methods:

- `execute(command: string)` — run a shell command
- `uploadFiles(files)` — upload binary files
- `downloadFiles(paths)` — download binary files
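The inline-script trick can be sketched as follows (simplified — the library's actual encoding and escaping differ; this just shows the shape of packaging a file read as a single shell command):

```typescript
// Build a shell command that reads a file by eval-ing a base64-encoded
// Node.js one-liner inside the sandbox, wherever execute() happens to run.
function buildReadCommand(filePath: string): string {
  const script =
    `const fs = require("node:fs");` +
    `process.stdout.write(fs.readFileSync(${JSON.stringify(filePath)}, "utf8"));`;
  const encoded = Buffer.from(script, "utf8").toString("base64");
  return `node -e "eval(Buffer.from('${encoded}','base64').toString('utf8'))"`;
}
```

Encoding the script sidesteps shell-quoting of arbitrary file paths and contents, which is why every file operation can ride on the single execute(command) primitive.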
The three concrete providers are:
| Provider | Package | Where commands run |
|---|---|---|
| DenoSandbox | @langchain/deno | Deno Deploy microVM (cloud, via @deno/sandbox SDK) |
| ModalSandbox | @langchain/modal | Modal container (cloud, via modal SDK) |
| VfsSandbox | @langchain/node-vfs | Local in-memory VFS (via node-vfs-polyfill) |
## Layer 3: SandboxProvider — Lifecycle Management (Also Optional)
There's a third, separate interface for managing sandbox instances:
```typescript
interface SandboxProvider {
  list(options?): Promise<SandboxListResponse>;
  getOrCreate(options?): Promise<SandboxBackendProtocol>;
  delete(options): Promise<void>;
}
```
This separates "running commands in a sandbox" from "creating/listing/deleting sandboxes." You'd use this if you need to manage sandbox lifecycles programmatically (e.g., a multi-tenant app).
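A hedged usage sketch of that create-use-delete lifecycle (the option shapes and the in-memory mock provider below are assumptions for illustration, not the library's API):

```typescript
// Minimal shapes mirroring the SandboxProvider interface above
interface Sandbox { id: string; execute(cmd: string): Promise<{ output: string }> }
interface Provider {
  getOrCreate(options?: { name?: string }): Promise<Sandbox>;
  delete(options: { id: string }): Promise<void>;
}

// The lifecycle a multi-tenant app would script around agent runs
async function runInFreshSandbox(provider: Provider, cmd: string): Promise<string> {
  const sandbox = await provider.getOrCreate();
  try {
    const { output } = await sandbox.execute(cmd);
    return output;
  } finally {
    await provider.delete({ id: sandbox.id }); // always clean up, even on error
  }
}

// In-memory mock provider, just for demonstration
const mock: Provider = {
  async getOrCreate() {
    return { id: "sb-1", async execute(cmd: string) { return { output: `ran: ${cmd}` }; } };
  },
  async delete() {},
};

runInFreshSandbox(mock, "echo hi").then(console.log); // "ran: echo hi"
```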
## The Big Picture
```
createDeepAgent({ backend: ??? })
           │
           ▼
┌─────────────────────────────┐
│ createFilesystemMiddleware  │ ← always added
│                             │
│ Tools: ls, read_file,       │
│ write_file, edit_file,      │
│ grep, glob                  │ ← always available
│                             │
│ Tool: execute               │ ← only if isSandboxBackend()
└──────────┬──────────────────┘
           │
           ▼
┌─────────────────┐    BackendProtocol only:
│ BackendProtocol │ ←── StateBackend (default), StoreBackend,
└────────┬────────┘     FilesystemBackend, CompositeBackend
         │
         │ extends
         ▼
┌────────────────────────┐    BackendProtocol + execute():
│ SandboxBackendProtocol │ ←── DenoSandbox, ModalSandbox, VfsSandbox
└────────────────────────┘     (all extend BaseSandbox)
```
Sandboxes are not required. The default StateBackend gives you a fully functional agent with an in-memory file system — no sandbox, no shell execution, no external services. The file system is a core part of the agent architecture (for planning, memory, skill storage, etc.), while command execution is an optional capability you opt into by providing a sandbox backend.