How does human in the loop work?
Now I have a comprehensive understanding. Let me provide you with a complete explanation of how human-in-the-loop works in the Vercel AI SDK.
# Human-in-the-Loop (HITL) in the Vercel AI SDK
The AI SDK provides two main approaches for implementing human-in-the-loop functionality when you need user approval before executing tool actions:
## Approach 1: Client-Side Tool Interception (Manual)
This approach works by omitting the `execute` function from tools that require approval. The tool call is intercepted on the client, where the user can approve or deny it.

### How It Works
- Define a tool without an `execute` function:

```ts
// tools.ts
import { tool, ToolSet } from 'ai';
import { z } from 'zod';

const getWeatherInformation = tool({
  description: 'show the weather in a given city to the user',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.string(),
  // NO execute function → requires human confirmation
});

const getLocalTime = tool({
  description: 'get the local time for a specified location',
  inputSchema: z.object({ location: z.string() }),
  // HAS execute function → runs automatically
  execute: async ({ location }) => '10am',
});

export const tools = { getWeatherInformation, getLocalTime } satisfies ToolSet;
```
- On the frontend, intercept the tool call and show Yes/No buttons using `addToolOutput`:

```tsx
// page.tsx
const { messages, sendMessage, addToolOutput } = useChat(...);

// In render:
if (isStaticToolUIPart(part) && part.state === 'input-available') {
  const toolName = getToolName(part); // derive the tool name from the part
  return (
    <div>
      Run {toolName} with {JSON.stringify(part.input)}?
      <button
        onClick={async () => {
          await addToolOutput({
            toolCallId: part.toolCallId,
            tool: toolName,
            output: 'Yes, confirmed.',
          });
          sendMessage(); // trigger the next request
        }}
      >
        Yes
      </button>
      <button
        onClick={async () => {
          await addToolOutput({
            toolCallId: part.toolCallId,
            tool: toolName,
            output: 'No, denied.',
          });
          sendMessage();
        }}
      >
        No
      </button>
    </div>
  );
}
```
- On the backend, check the tool output and execute the tool if approved:

```ts
// route.ts
const lastMessage = messages[messages.length - 1];

for (const part of lastMessage.parts) {
  if (isStaticToolUIPart(part) && part.state === 'output-available') {
    if (part.output === 'Yes, confirmed.') {
      const result = await executeWeatherTool(part.input);
      writer.write({
        type: 'tool-output-available',
        toolCallId: part.toolCallId,
        output: result,
      });
    } else if (part.output === 'No, denied.') {
      writer.write({
        type: 'tool-output-available',
        toolCallId: part.toolCallId,
        output: 'Error: User denied access',
      });
    }
  }
}
```
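Because the client and server match on exact strings ('Yes, confirmed.' / 'No, denied.'), it helps to keep those values in one shared module. A minimal sketch (the file name and `isApproved` helper are hypothetical, not part of the SDK):

```typescript
// approval.ts — hypothetical shared module; the string values must match
// whatever the frontend sends via addToolOutput and the backend checks for.
export const APPROVAL = {
  YES: 'Yes, confirmed.',
  NO: 'No, denied.',
} as const;

export type ApprovalValue = (typeof APPROVAL)[keyof typeof APPROVAL];

// Narrow an arbitrary tool output to an approval decision.
export function isApproved(output: unknown): boolean {
  return output === APPROVAL.YES;
}
```

The frontend would pass `APPROVAL.YES`/`APPROVAL.NO` as the `output` in `addToolOutput`, and the route handler would call `isApproved(part.output)`, so a typo can no longer silently skip the approved branch.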
## Approach 2: Built-in `needsApproval` (Recommended)

This is a more streamlined approach using the built-in `needsApproval` property on tools.
### How It Works

- Define a tool with `needsApproval: true`:

```ts
import { tool } from 'ai';
import { z } from 'zod';

const shellCommand = tool({
  description: 'Execute a shell command',
  inputSchema: z.object({
    command: z.string().describe('The shell command to execute'),
  }),
  needsApproval: true, // ← requires user approval
  execute: async ({ command }) => {
    // executes ONLY after approval
    return await runCommand(command);
  },
});
```
You can also pass a function to decide dynamically whether approval is required:

```ts
needsApproval: async ({ amount }) => amount > 1000, // only require approval for large amounts
```
- The flow:
  1. The model generates a tool call
  2. `generateText`/`streamText` returns with `tool-approval-request` parts
  3. Your app collects user approval
  4. Add `tool-approval-response` to the messages
  5. Call `generateText`/`streamText` again
  6. If approved → the tool executes; if denied → the model sees the denial
- On the frontend, use `addToolApprovalResponse`:

```tsx
const { messages, addToolApprovalResponse } = useChat({
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses,
});

// In render:
if (part.state === 'approval-requested') {
  return (
    <div>
      Tool {part.toolCall.toolName} wants to run. Approve?
      <button
        onClick={() =>
          addToolApprovalResponse({
            id: part.approval.id,
            approved: true,
          })
        }
      >
        Approve
      </button>
      <button
        onClick={() =>
          addToolApprovalResponse({
            id: part.approval.id,
            approved: false,
            reason: 'User declined', // optional
          })
        }
      >
        Deny
      </button>
    </div>
  );
}
```
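Under the hood, the approval round-trip is a pairing problem: each `tool-approval-request` must be matched to a `tool-approval-response` by id, and only approved calls may execute. An SDK-free sketch of that decision step, with simplified stand-in types (not the SDK's actual message part types):

```typescript
// Simplified stand-ins for the SDK's approval-related message parts.
type ApprovalRequest = { approvalId: string; toolCallId: string };
type ApprovalResponse = { approvalId: string; approved: boolean; reason?: string };

// Return the toolCallIds that were explicitly approved. Anything denied or
// unanswered stays out, so an unapproved tool never executes.
function approvedToolCalls(
  requests: ApprovalRequest[],
  responses: ApprovalResponse[],
): string[] {
  const byId = new Map<string, ApprovalResponse>(
    responses.map(r => [r.approvalId, r]),
  );
  return requests
    .filter(req => byId.get(req.approvalId)?.approved === true)
    .map(req => req.toolCallId);
}
```

Requiring `approved === true` (rather than just "no denial") fails closed: a missing response blocks execution.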
## Key Differences

| Feature | Client-Side Interception | `needsApproval` |
|---|---|---|
| Execute function | Omitted from tool | Defined on tool |
| Approval mechanism | Custom strings via `addToolOutput` | Built-in via `addToolApprovalResponse` |
| Tool part state | `input-available` → `output-available` | `approval-requested` → approved/denied |
| Dynamic conditions | Manual logic | `needsApproval: (args) => boolean` |
| Best for | Full control, custom flows | Simpler, standardized approval |
## Example Files in the Repository

- Cookbook documentation: `content/cookbook/01-next/75-human-in-the-loop.mdx`
- Working example:
what about the generator / yield pattern?
Now I have a comprehensive understanding. Let me provide the complete explanation of the generator/yield pattern:
# Generator/Yield Pattern for Preliminary Tool Results
The generator/yield pattern in the AI SDK allows you to stream incremental progress or status updates from tool execution to the frontend. This is especially useful for long-running tools where you want to show loading states, progress indicators, or partial results.
## How It Works

### 1. Use `async function*` as the Execute Function

Instead of a regular async function, use an async generator function for your tool's `execute`:

```ts
import { tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({ city: z.string() }),
  async *execute({ city }) {
    // First yield: loading state
    yield { state: 'loading' as const };

    // Simulate API call
    await new Promise(resolve => setTimeout(resolve, 2000));

    // Final yield: the actual result
    yield {
      state: 'ready' as const,
      temperature: 72,
      weather: 'sunny',
    };
  },
});
```
### 2. How the SDK Processes It

The SDK's internal `executeTool` function handles both regular functions and async generators:

```ts
// Simplified from packages/provider-utils/src/types/execute-tool.ts
export async function* executeTool<INPUT, OUTPUT>({ execute, input, options }) {
  const result = execute(input, options);
  if (isAsyncIterable(result)) {
    let lastOutput: OUTPUT | undefined;
    for await (const output of result) {
      lastOutput = output;
      yield { type: 'preliminary', output }; // each yield becomes preliminary
    }
    yield { type: 'final', output: lastOutput! }; // the last one becomes final
  } else {
    yield { type: 'final', output: await result };
  }
}
```
Key behaviors:

- Every `yield` is treated as a preliminary result
- The last yielded value becomes the final tool result
- Preliminary results are streamed to the frontend with `preliminary: true`
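These behaviors are easy to reproduce outside the SDK. The `driveTool` helper below is a hypothetical stand-in that mirrors the simplified `executeTool` logic: it re-emits every yield as preliminary and repeats the last value as final.

```typescript
// Event shape mirroring the preliminary/final distinction described above.
type ToolEvent<T> = { type: 'preliminary' | 'final'; output: T };

// Drive any async generator the way the SDK drives a generator execute:
// every yield → preliminary, last yielded value repeated → final.
async function* driveTool<T>(gen: AsyncGenerator<T>): AsyncGenerator<ToolEvent<T>> {
  let last: T | undefined;
  for await (const output of gen) {
    last = output;
    yield { type: 'preliminary', output };
  }
  yield { type: 'final', output: last! };
}

// A toy generator "tool" with a loading state and a ready state.
async function* fakeWeather() {
  yield { state: 'loading' as const };
  yield { state: 'ready' as const, temperature: 72 };
}

// Collect all emitted events for inspection.
async function collectEvents() {
  const events: ToolEvent<unknown>[] = [];
  for await (const e of driveTool(fakeWeather())) events.push(e);
  return events;
}
```

Two yields therefore produce three events: two preliminary updates and one final result carrying the same value as the last yield.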
### 3. Stream Processing

When processing the stream, preliminary results are distinguished from final results:

```ts
for await (const chunk of result.fullStream) {
  if (chunk.type === 'tool-result') {
    if (chunk.preliminary) {
      console.log('Preliminary:', chunk.output); // loading states, progress
    } else {
      console.log('Final:', chunk.output); // actual result
    }
  }
}
```
## Combining with `needsApproval`

You can combine the generator pattern with the approval workflow:

```ts
// From examples/next-openai/tool/weather-tool-with-approval.ts
export const weatherToolWithApproval = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({ city: z.string() }),
  needsApproval: true, // ← requires approval first
  async *execute() { // ← then uses a generator for progress
    yield { state: 'loading' as const };
    await new Promise(resolve =>
      setTimeout(resolve, 1000 + Math.floor(Math.random() * 4000)),
    );
    yield {
      state: 'ready' as const,
      temperature: 72,
      weather: randomWeather(),
    };
  },
});
```
Flow:

1. The user asks a question → the model generates a tool call
2. The model returns a `tool-approval-request` → the user sees the approval UI
3. The user approves → the tool's `execute` runs
4. Each `yield` sends a `preliminary` result to the frontend (loading states)
5. The final `yield` becomes the actual tool result
## Frontend Usage

On the frontend, you can render different UI based on the tool's output state:

```tsx
// Check the output to render the appropriate UI
if (part.type === 'tool-getWeather' && part.state === 'output-available') {
  const output = part.output;
  if (output.state === 'loading') {
    return <Spinner />;
  }
  if (output.state === 'ready') {
    return <WeatherCard temperature={output.temperature} weather={output.weather} />;
  }
}
```
The `preliminary` flag on the part indicates whether the result is still updating:

```ts
if (part.preliminary) {
  // still loading, more updates coming
} else {
  // this is the final result
}
```
## Full Example with Output Schema

For type safety, you can define an output schema that covers all possible states:

```ts
// From examples/ai-functions/src/stream-text/openai-tool-output-stream.ts
const weatherTool = tool({
  description: 'Get the current weather.',
  inputSchema: z.object({ location: z.string() }),
  outputSchema: z.union([
    z.object({
      status: z.literal('loading'),
      text: z.string(),
      weather: z.undefined(),
    }),
    z.object({
      status: z.literal('success'),
      text: z.string(),
      weather: z.object({
        location: z.string(),
        temperature: z.number(),
      }),
    }),
  ]),
  async *execute({ location }) {
    yield {
      status: 'loading' as const,
      text: `Getting weather for ${location}`,
      weather: undefined,
    };

    await new Promise(resolve => setTimeout(resolve, 3000));

    const temperature = 72 + Math.floor(Math.random() * 21) - 10;
    yield {
      status: 'success' as const,
      text: `The weather in ${location} is ${temperature}°F`,
      weather: { location, temperature },
    };
  },
});
```
## Use Cases
- Loading States: Show a spinner while the tool fetches data
- Progress Indicators: Stream completion percentage for long tasks
- Subagent Streaming: Stream intermediate results from a subagent's work
- Multi-step Operations: Show each step (e.g., "Cloning repo... Building... Deploying...")
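For instance, a progress-indicator tool can yield a percent-complete value after each step. A hypothetical, SDK-free sketch of such a generator (step names and shapes are made up for illustration):

```typescript
// Hypothetical long-running task that reports percent-complete after each step.
async function* deployWithProgress(steps: string[]) {
  for (let i = 0; i < steps.length; i++) {
    // ...do the real work for steps[i] here...
    yield {
      state: 'running' as const,
      step: steps[i],
      percent: Math.round(((i + 1) / steps.length) * 100),
    };
  }
  yield { state: 'done' as const, percent: 100 };
}
```

Wired into a tool's `execute`, every intermediate `yield` would stream to the frontend as a preliminary result, and the closing `done` value would become the final tool result.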
## Key Files in the Repository

- Documentation: `content/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx` (search for "Preliminary Tool Results")
- Core Implementation: `packages/provider-utils/src/types/execute-tool.ts`
- Tool Call Execution: `packages/ai/src/generate-text/execute-tool-call.ts`
- Example Tool: `examples/next-openai/tool/weather-tool-with-approval.ts`
- Example Stream Consumer: `examples/ai-functions/src/stream-text/openai-tool-output-stream.ts`