How to build a Next.js example with subagents and human-in-the-loop?

*Note: this post was generated at commit e1a9a62 ("chore: version packages", #192) and may have stale content, as HEAD is 183 commits ahead.*

Give me a nextjs example of this that includes subagents, a custom tool, and human in the loop for that (including how that decision gets pushed to the client, and back to the agent)

**Claude Opus 4.6** commented:



# Next.js + Deep Agents: Subagents, Custom Tool, and Human-in-the-Loop

Here's a full Next.js example that wires up:

  1. A deep agent with subagents (a research agent and a deployment agent)
  2. A custom tool (deploy to staging)
  3. Human-in-the-loop (HITL) on the deploy tool — including how the interrupt surfaces to the client and how the client's decision gets sent back

## Project Structure

```
app/
├── api/
│   └── agent/
│       └── route.ts        ← API route: runs the agent, streams events
├── lib/
│   └── agent.ts            ← Agent definition (subagents, tools, HITL)
├── page.tsx                 ← Client UI (chat + approval UI)
└── layout.tsx
```

## 1. app/lib/agent.ts — Agent Setup

This is the server-side agent definition. Key things to note:

  • interruptOn maps tool names to HITL configs. When deploy_to_staging is configured (as true or as a config object), the HITL middleware (source) wraps the tool so that calling it pauses the graph with an __interrupt__ on its state.
  • Subagents are defined as SubAgent objects with their own systemPrompt, tools, and optionally their own interruptOn (source).
  • The createDeepAgent function composes all built-in middleware (todos, filesystem, subagents, summarization, prompt caching, HITL) automatically (source).
```typescript
// app/lib/agent.ts
import { tool } from "langchain";
import { z } from "zod";
import { MemorySaver } from "@langchain/langgraph";
import { createDeepAgent, type SubAgent } from "deepagents";

// ─── Custom Tool: Deploy to Staging ─────────────────────────────────────────
// This is the tool that will require human approval before execution.
export const deployToStaging = tool(
  async ({ service, version }: { service: string; version: string }) => {
    // In a real app, this would call your CI/CD API
    return `✅ Deployed ${service}@${version} to staging successfully.`;
  },
  {
    name: "deploy_to_staging",
    description:
      "Deploy a service to the staging environment. Requires human approval.",
    schema: z.object({
      service: z.string().describe("The service name to deploy"),
      version: z.string().describe("The version/tag to deploy"),
    }),
  },
);

// ─── Another tool (no approval needed) ──────────────────────────────────────
const checkServiceHealth = tool(
  async ({ service }: { service: string }) => {
    return `Service "${service}" is healthy. Uptime: 99.97%, latency: 42ms.`;
  },
  {
    name: "check_service_health",
    description: "Check the health/status of a running service.",
    schema: z.object({
      service: z.string().describe("The service name to check"),
    }),
  },
);

// ─── Subagent 1: Research Agent ─────────────────────────────────────────────
// This subagent handles research tasks. It does NOT require HITL.
const researchSubagent: SubAgent = {
  name: "researcher",
  description:
    "Use this agent for researching deployment readiness, changelog analysis, " +
    "and gathering information about services before deployment decisions.",
  systemPrompt: `You are a deployment research assistant. Your job is to:
1. Analyze service health using the check_service_health tool
2. Summarize findings so the orchestrator can make informed deployment decisions.
Only your FINAL message is returned to the orchestrator — be thorough.`,
  tools: [checkServiceHealth],
};

// ─── Subagent 2: Deploy Agent ───────────────────────────────────────────────
// This subagent has the deploy tool WITH its own interruptOn config.
// When the subagent calls deploy_to_staging, the interrupt propagates up
// to the parent graph (see hitl.int.test.ts lines 125-185 for the pattern).
const deploySubagent: SubAgent = {
  name: "deployer",
  description:
    "Use this agent to perform deployments to staging. It will request " +
    "human approval before deploying.",
  systemPrompt: `You are a deployment agent. Use deploy_to_staging to deploy services.
Always confirm the service name and version before deploying.`,
  tools: [deployToStaging],
  // HITL on this subagent's tool — the interrupt bubbles up through the
  // parent graph's task tool invocation (see subagents.ts lines 496-510)
  interruptOn: {
    deploy_to_staging: {
      allowedDecisions: ["approve", "edit", "reject"],
    },
  },
};

// ─── Shared Checkpointer ────────────────────────────────────────────────────
// A checkpointer is REQUIRED for HITL — it persists state between the
// interrupt and the resume. In production, use PostgresSaver or similar.
// See: agent.ts line 271 — interruptOn requires a checkpointer.
export const checkpointer = new MemorySaver();

// ─── Create the Deep Agent ──────────────────────────────────────────────────
export const agent = createDeepAgent({
  model: "claude-sonnet-4-5-20250929",
  tools: [checkServiceHealth, deployToStaging],
  systemPrompt: `You are a deployment orchestrator. You help users deploy services safely.

Workflow:
1. Use the "researcher" subagent to check service health first
2. Use the "deployer" subagent to perform the actual deployment
3. Summarize results to the user

Always research before deploying.`,
  subagents: [researchSubagent, deploySubagent],
  // Top-level interruptOn — applies to the main agent AND the default
  // general-purpose subagent (see agent.ts lines 260-268).
  // The deployer subagent has its own interruptOn above.
  interruptOn: {
    deploy_to_staging: {
      allowedDecisions: ["approve", "edit", "reject"],
    },
  },
  checkpointer,
});
```

### How interruptOn works under the hood

When interruptOn is set, createDeepAgent appends humanInTheLoopMiddleware({ interruptOn }) to the middleware stack (source — agent.ts line 271):

```typescript
// From agent.ts — the HITL middleware is conditionally added:
const runtimeMiddleware: AgentMiddleware[] = [
  ...builtInMiddleware,
  ...skillsMiddlewareArray,
  ...memoryMiddlewareArray,
  ...(interruptOn ? [humanInTheLoopMiddleware({ interruptOn })] : []),
  ...(customMiddleware as unknown as AgentMiddleware[]),
];
```

For subagents, each subagent's interruptOn is processed in getSubagents() (source — subagents.ts lines 458-469):

```typescript
// From subagents.ts — subagent-level HITL:
const interruptOn = agentParams.interruptOn || defaultInterruptOn;
if (interruptOn)
  middleware.push(humanInTheLoopMiddleware({ interruptOn }));
```
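That fallback can be illustrated in isolation (illustrative helper and type names, not the library's API): a subagent's own interruptOn always wins, and the parent's default applies only when the subagent defines none.

```typescript
// Illustrative sketch of the `agentParams.interruptOn || defaultInterruptOn`
// fallback. Type and function names here are hypothetical.
type InterruptOn = Record<string, { allowedDecisions: string[] } | boolean>;

function resolveInterruptOn(
  subagentInterruptOn: InterruptOn | undefined,
  defaultInterruptOn: InterruptOn | undefined,
): InterruptOn | undefined {
  // A subagent's own config takes precedence; otherwise inherit the default.
  return subagentInterruptOn || defaultInterruptOn;
}

const parentDefault: InterruptOn = { deploy_to_staging: true };
const deployerOwn: InterruptOn = {
  deploy_to_staging: { allowedDecisions: ["approve", "edit", "reject"] },
};

// The deployer subagent keeps its own config:
console.log(resolveInterruptOn(deployerOwn, parentDefault) === deployerOwn); // true
// A subagent with no config inherits the parent default:
console.log(resolveInterruptOn(undefined, parentDefault) === parentDefault); // true
```

This is why the deployer subagent below can carry a stricter approval config than the orchestrator without any extra wiring.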

## 2. app/api/agent/route.ts — API Route

This route handles two things:

  1. New messages — invokes the agent and streams events
  2. HITL resume — receives the user's decision and sends Command({ resume: ... }) back

The critical part is how __interrupt__ surfaces. When the agent hits an interrupted tool, agent.invoke() returns a result where result.__interrupt__ contains an array of interrupt objects. Each one has a .value that is an HITLRequest with actionRequests (what tool calls need approval) and reviewConfigs (what decisions are allowed) — see hitl.int.test.ts lines 68-95.

```typescript
// app/api/agent/route.ts
import { NextRequest } from "next/server";
import { Command } from "@langchain/langgraph";
import { agent } from "@/app/lib/agent";

export const maxDuration = 120; // Vercel timeout

// ─── Types for the client ───────────────────────────────────────────────────
// These mirror the HITLRequest shape from langchain's HITL middleware.
// See hitl.int.test.ts lines 70-95 for the full shape.
interface ActionRequest {
  name: string; // tool name, e.g. "deploy_to_staging"
  args: Record<string, unknown>; // tool call arguments
}

interface ReviewConfig {
  actionName: string;
  allowedDecisions: string[]; // e.g. ["approve", "edit", "reject"]
}

interface HITLInterrupt {
  actionRequests: ActionRequest[];
  reviewConfigs: ReviewConfig[];
}

export async function POST(req: NextRequest) {
  const body = await req.json();
  const { threadId, message, resume } = body as {
    threadId: string;
    message?: string;
    resume?: { decisions: Array<{ type: string; args?: Record<string, unknown> }> };
  };

  const config = { configurable: { thread_id: threadId } };

  // ─── Prepare the input ──────────────────────────────────────────────────
  let input: any;

  if (resume) {
    // HITL RESUME: The client is sending back a decision (approve/reject/edit).
    // This is passed as Command({ resume: ... }) which LangGraph routes back
    // to the interrupted node. See hitl.int.test.ts lines 100-108.
    input = new Command({
      resume: {
        decisions: resume.decisions,
      },
    });
  } else {
    // Normal message
    input = {
      messages: [{ role: "user", content: message }],
    };
  }

  // ─── Stream the response ────────────────────────────────────────────────
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      try {
        // Use invoke (not stream) to get the full result including __interrupt__
        const result = await agent.invoke(input, config);

        // ── Check for HITL interrupt ────────────────────────────────────
        // When a tool configured with interruptOn is called, the graph
        // pauses and returns __interrupt__ on the result.
        // See: hitl.int.test.ts line 68 — result.__interrupt__ is defined
        if (result.__interrupt__ && result.__interrupt__.length > 0) {
          // The interrupt value is an HITLRequest (from langchain's HITL middleware).
          // It contains actionRequests (pending tool calls) and reviewConfigs
          // (what decisions are allowed per tool).
          const hitlRequest = result.__interrupt__[0].value as HITLInterrupt;

          // Send the interrupt to the client so it can render an approval UI
          controller.enqueue(
            encoder.encode(
              `data: ${JSON.stringify({
                type: "hitl_interrupt",
                threadId,
                actionRequests: hitlRequest.actionRequests,
                reviewConfigs: hitlRequest.reviewConfigs,
              })}\n\n`,
            ),
          );
        } else {
          // ── Normal completion: stream the final messages ────────────────
          const lastMessage = result.messages[result.messages.length - 1];
          controller.enqueue(
            encoder.encode(
              `data: ${JSON.stringify({
                type: "message",
                content:
                  typeof lastMessage.content === "string"
                    ? lastMessage.content
                    : JSON.stringify(lastMessage.content),
              })}\n\n`,
            ),
          );
        }

        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      } catch (error: any) {
        controller.enqueue(
          encoder.encode(
            `data: ${JSON.stringify({
              type: "error",
              message: error.message,
            })}\n\n`,
          ),
        );
        controller.close();
      }
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```

### The Interrupt → Resume lifecycle

Here's the exact flow, traced through the source:

  1. Agent calls deploy_to_staging → HITL middleware intercepts because interruptOn has deploy_to_staging configured
  2. agent.invoke() returns with result.__interrupt__ populated — see hitl.int.test.ts line 68
  3. Server sends hitl_interrupt SSE event to the client with actionRequests and reviewConfigs
  4. Client renders approval UI showing what tool is being called and what decisions are allowed
  5. User clicks approve/reject → client POSTs { threadId, resume: { decisions: [{ type: "approve" }] } }
  6. Server creates new Command({ resume: { decisions: [...] } }) and passes it to agent.invoke() — see hitl.int.test.ts lines 100-108
  7. LangGraph routes the resume back to the paused node, the tool executes (or is rejected), and the agent continues

For subagents, the flow is identical — the interrupt bubbles up from the subagent graph through the task tool invocation. The parent agent.invoke() still returns __interrupt__ and Command({ resume }) still reaches the right subagent. See hitl.int.test.ts lines 125-185.
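The decision payload in steps 5-6 reduces to a small pure function on the client (an illustrative helper, not part of deepagents): one decision per pending action request, in the same order the interrupt reported them.

```typescript
// Illustrative helper (hypothetical name): build the POST body that resumes
// an interrupted run. The server turns `resume` into Command({ resume }).
interface ActionRequest {
  name: string;
  args: Record<string, unknown>;
}

type DecisionType = "approve" | "edit" | "reject";

function buildResumeBody(
  threadId: string,
  actionRequests: ActionRequest[],
  decision: DecisionType,
) {
  return {
    threadId,
    resume: {
      // One decision per pending tool call, matching the interrupt's order.
      decisions: actionRequests.map(() => ({ type: decision })),
    },
  };
}

const body = buildResumeBody(
  "thread-1",
  [{ name: "deploy_to_staging", args: { service: "auth", version: "v2.3.1" } }],
  "approve",
);

console.log(JSON.stringify(body));
// {"threadId":"thread-1","resume":{"decisions":[{"type":"approve"}]}}
```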


## 3. app/page.tsx — Client UI

```tsx
// app/page.tsx
"use client";

import { useState, useRef, useCallback } from "react";
import { v4 as uuidv4 } from "uuid";

// ─── Types matching the server's SSE events ────────────────────────────────
interface ActionRequest {
  name: string;
  args: Record<string, unknown>;
}

interface ReviewConfig {
  actionName: string;
  allowedDecisions: string[];
}

interface HITLPending {
  threadId: string;
  actionRequests: ActionRequest[];
  reviewConfigs: ReviewConfig[];
}

interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

export default function Home() {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [input, setInput] = useState("");
  const [loading, setLoading] = useState(false);

  // HITL state — when non-null, we show the approval UI
  const [hitlPending, setHitlPending] = useState<HITLPending | null>(null);

  // Stable thread ID for the conversation
  const threadIdRef = useRef(uuidv4());

  // ─── Send a message or resume to the agent ─────────────────────────────
  const sendToAgent = useCallback(
    async (body: Record<string, unknown>) => {
      setLoading(true);
      try {
        const res = await fetch("/api/agent", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ threadId: threadIdRef.current, ...body }),
        });

        const reader = res.body!.getReader();
        const decoder = new TextDecoder();
        let buffer = "";

        while (true) {
          const { done, value } = await reader.read();
          if (done) break;

          buffer += decoder.decode(value, { stream: true });
          const lines = buffer.split("\n\n");
          buffer = lines.pop() || "";

          for (const line of lines) {
            if (!line.startsWith("data: ")) continue;
            const data = line.slice(6);
            if (data === "[DONE]") continue;

            const event = JSON.parse(data);

            if (event.type === "message") {
              setMessages((prev) => [
                ...prev,
                { role: "assistant", content: event.content },
              ]);
            } else if (event.type === "hitl_interrupt") {
              // ── HITL Interrupt received! ──────────────────────────────
              // The agent tried to call a tool that requires approval.
              // We store the interrupt data and render the approval UI.
              setHitlPending({
                threadId: event.threadId,
                actionRequests: event.actionRequests,
                reviewConfigs: event.reviewConfigs,
              });
            } else if (event.type === "error") {
              setMessages((prev) => [
                ...prev,
                { role: "system", content: `Error: ${event.message}` },
              ]);
            }
          }
        }
      } finally {
        setLoading(false);
      }
    },
    [],
  );

  // ─── Handle sending a chat message ─────────────────────────────────────
  const handleSend = async () => {
    if (!input.trim()) return;
    const userMessage = input.trim();
    setInput("");
    setMessages((prev) => [...prev, { role: "user", content: userMessage }]);
    await sendToAgent({ message: userMessage });
  };

  // ─── Handle HITL decision ──────────────────────────────────────────────
  // This sends Command({ resume: { decisions } }) back to the agent.
  // The server reconstructs the LangGraph Command (see route.ts).
  // The pattern matches hitl.int.test.ts lines 100-108:
  //   new Command({ resume: { decisions: [{ type: "approve" }] } })
  const handleHITLDecision = async (
    decisionType: "approve" | "reject" | "edit",
  ) => {
    if (!hitlPending) return;

    // Build one decision per actionRequest (same order)
    const decisions = hitlPending.actionRequests.map(() => ({
      type: decisionType,
    }));

    // Show what the user decided
    setMessages((prev) => [
      ...prev,
      {
        role: "system",
        content: `🔒 Decision: ${decisionType.toUpperCase()} for ${hitlPending.actionRequests.map((a) => `${a.name}(${JSON.stringify(a.args)})`).join(", ")}`,
      },
    ]);

    // Clear the HITL UI
    setHitlPending(null);

    // Resume the agent with the decision
    await sendToAgent({
      resume: { decisions },
    });
  };

  return (
    <main className="max-w-2xl mx-auto p-6 space-y-4">
      <h1 className="text-2xl font-bold">🤖 Deploy Agent</h1>
      <p className="text-gray-500">
        Uses subagents for research & deployment, with human approval for
        deploys.
      </p>

      {/* ── Chat Messages ────────────────────────────────────────────── */}
      <div className="border rounded-lg p-4 h-96 overflow-y-auto space-y-3">
        {messages.map((msg, i) => (
          <div
            key={i}
            className={`p-3 rounded-lg ${
              msg.role === "user"
                ? "bg-blue-100 ml-12"
                : msg.role === "system"
                  ? "bg-yellow-50 border border-yellow-200 text-sm"
                  : "bg-gray-100 mr-12"
            }`}
          >
            <span className="text-xs font-semibold text-gray-500">
              {msg.role}
            </span>
            <p className="whitespace-pre-wrap">{msg.content}</p>
          </div>
        ))}
        {loading && (
          <div className="text-gray-400 animate-pulse">Thinking...</div>
        )}
      </div>

      {/* ── HITL Approval UI ─────────────────────────────────────────── */}
      {/* This renders when the agent hits an interrupt (deploy_to_staging). */}
      {hitlPending && (
        <div className="border-2 border-orange-400 rounded-lg p-4 bg-orange-50">
          <h3 className="font-bold text-orange-700">
            ⚠️ Approval Required
          </h3>
          <p className="text-sm text-gray-600 mt-1">
            The agent wants to execute the following action(s):
          </p>

          {hitlPending.actionRequests.map((action, i) => (
            <div
              key={i}
              className="mt-2 p-3 bg-white rounded border font-mono text-sm"
            >
              <div className="font-bold">{action.name}</div>
              <pre className="text-xs mt-1 text-gray-600">
                {JSON.stringify(action.args, null, 2)}
              </pre>
              <div className="text-xs text-gray-400 mt-1">
                Allowed:{" "}
                {hitlPending.reviewConfigs
                  .find((rc) => rc.actionName === action.name)
                  ?.allowedDecisions.join(", ")}
              </div>
            </div>
          ))}

          <div className="flex gap-2 mt-4">
            {/* Only show buttons for allowed decisions */}
            {hitlPending.reviewConfigs[0]?.allowedDecisions.includes(
              "approve",
            ) && (
              <button
                onClick={() => handleHITLDecision("approve")}
                className="px-4 py-2 bg-green-600 text-white rounded hover:bg-green-700"
              >
                ✅ Approve
              </button>
            )}
            {hitlPending.reviewConfigs[0]?.allowedDecisions.includes(
              "reject",
            ) && (
              <button
                onClick={() => handleHITLDecision("reject")}
                className="px-4 py-2 bg-red-600 text-white rounded hover:bg-red-700"
              >
                ❌ Reject
              </button>
            )}
            {hitlPending.reviewConfigs[0]?.allowedDecisions.includes(
              "edit",
            ) && (
              <button
                onClick={() => handleHITLDecision("edit")}
                className="px-4 py-2 bg-yellow-600 text-white rounded hover:bg-yellow-700"
              >
                ✏️ Edit
              </button>
            )}
          </div>
        </div>
      )}

      {/* ── Input ────────────────────────────────────────────────────── */}
      <div className="flex gap-2">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === "Enter" && handleSend()}
          placeholder="e.g. Deploy auth-service v2.3.1 to staging"
          className="flex-1 border rounded-lg px-4 py-2"
          disabled={loading || !!hitlPending}
        />
        <button
          onClick={handleSend}
          disabled={loading || !!hitlPending}
          className="px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 disabled:opacity-50"
        >
          Send
        </button>
      </div>
    </main>
  );
}
```
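The SSE handling inside sendToAgent can also be factored into a pure function, which makes the buffering logic easy to unit-test (an illustrative refactor matching the `data: ...\n\n` framing route.ts emits; the helper name is hypothetical):

```typescript
// Illustrative helper: split a buffered SSE chunk into complete JSON events
// plus whatever partial frame is left over for the next read.
function parseSSEChunk(buffer: string): { events: unknown[]; rest: string } {
  const frames = buffer.split("\n\n");
  // The last piece may be an incomplete frame; keep it for the next chunk.
  const rest = frames.pop() ?? "";
  const events: unknown[] = [];
  for (const frame of frames) {
    if (!frame.startsWith("data: ")) continue;
    const data = frame.slice(6);
    if (data === "[DONE]") continue; // sentinel, not a JSON event
    events.push(JSON.parse(data));
  }
  return { events, rest };
}

const chunk = 'data: {"type":"message","content":"hi"}\n\ndata: [DO';
const { events, rest } = parseSSEChunk(chunk);
console.log(events.length, rest); // prints: 1 data: [DO
```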

## How it all connects — the full data flow

```
┌─────────────┐         POST /api/agent              ┌──────────────────┐
│             │  { threadId, message }                │                  │
│   Client    │ ──────────────────────────────────►   │   API Route      │
│  (page.tsx) │                                       │   (route.ts)     │
│             │         SSE stream                    │                  │
│             │  ◄────────────────────────────────    │   agent.invoke() │
│             │  { type: "hitl_interrupt",            │        │         │
│             │    actionRequests, reviewConfigs }     │        ▼         │
│             │                                       │  ┌────────────┐  │
│ Shows       │                                       │  │ Deep Agent │  │
│ Approval UI │                                       │  │            │  │
│             │                                       │  │ ┌────────┐ │  │
│ User clicks │         POST /api/agent               │  │ │deployer│ │  │
│ "Approve"   │  { threadId, resume: {                │  │ │subagent│ │  │
│             │    decisions: [{ type: "approve" }]    │  │ └───┬────┘ │  │
│             │  }}                                    │  │     │      │  │
│             │ ──────────────────────────────────►   │  │  interrupt │  │
│             │                                       │  │  bubbles   │  │
│             │         SSE stream                    │  │  up via    │  │
│             │  ◄────────────────────────────────    │  │  task tool │  │
│             │  { type: "message",                   │  └────────────┘  │
│             │    content: "✅ Deployed!" }           │                  │
└─────────────┘                                       └──────────────────┘
```

### Key source references

| Concept | Source File | Key Lines |
| --- | --- | --- |
| createDeepAgent composes middleware | agent.ts | Lines 230-280 |
| HITL middleware added conditionally | agent.ts | Line 271 |
| SubAgent interface (name, tools, interruptOn) | subagents.ts | Lines 260-300 |
| Subagent HITL propagation | subagents.ts | Lines 458-469 |
| __interrupt__ shape (actionRequests, reviewConfigs) | hitl.int.test.ts | Lines 68-95 |
| Resume with Command({ resume: { decisions } }) | hitl.int.test.ts | Lines 100-108 |
| Subagent HITL resume flow | hitl.int.test.ts | Lines 125-185 |
| interrupt() primitive in subagent tools | subagents-hitl.int.test.ts | Lines 42-65 |
| Task tool invokes subagent & propagates state | subagents.ts | Lines 484-510 |

### Alternative: Using interrupt() directly in a tool

Instead of the declarative interruptOn config, you can also use the interrupt() primitive directly inside a tool (as shown in subagents-hitl.int.test.ts lines 42-65). This gives you more control over what data is sent to the client:

```typescript
import { tool, interrupt } from "langchain";
import { z } from "zod";

const deployToStaging = tool(
  async ({ service, version }) => {
    // interrupt() pauses execution and surfaces to the client.
    // The value you pass becomes result.__interrupt__[0].value
    const approval = interrupt({
      type: "deploy_approval",
      service,
      version,
      message: `Approve deploying ${service}@${version} to staging?`,
    }) as { approved?: boolean; reason?: string };

    if (approval?.approved) {
      return `✅ Deployed ${service}@${version} to staging.`;
    }
    return `❌ Deployment rejected. Reason: ${approval?.reason ?? "none"}`;
  },
  { name: "deploy_to_staging", description: "...", schema: z.object({...}) },
);
```

With this approach, resume uses Command({ resume: { approved: true } }) instead of the decisions array — the resume value is whatever shape you expect inside the interrupt() call. See the full pattern in subagents-hitl.int.test.ts lines 155-210.
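A minimal sketch of the corresponding resume payload (hypothetical helper name; the only requirement is that the shape matches what the interrupt() call above casts its return value to):

```typescript
// Illustrative builder for the custom resume value. Whatever object the
// server passes as Command({ resume: ... }) is what the paused interrupt()
// call returns inside the tool.
interface DeployApprovalResume {
  approved: boolean;
  reason?: string;
}

function buildDeployResume(approved: boolean, reason?: string): DeployApprovalResume {
  // Omit `reason` entirely when it isn't given, keeping the payload minimal.
  return reason === undefined ? { approved } : { approved, reason };
}

console.log(JSON.stringify(buildDeployResume(true)));
// {"approved":true}
console.log(JSON.stringify(buildDeployResume(false, "wrong version")));
// {"approved":false,"reason":"wrong version"}
```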

