Are files written into the sandbox with `createBashTool`?
If I do:

```ts
import { Sandbox } from "@vercel/sandbox";

const sandbox = await Sandbox.create();

// Files are written to ./workspace by default
const { tools } = await createBashTool({
  sandbox,
  files: { "index.ts": "console.log('hello');" },
});
```
Are those files (`index.ts`) written into the sandbox? What if they're already there?
Now I have the full picture. Here's the answer:
Yes, `index.ts` is written into the sandbox. Here's exactly what happens:
### The write path
Since you're passing in an external sandbox (a `@vercel/sandbox` instance), the code hits this branch in `src/tool.ts` (lines 89–116). It:
- Iterates over your `files` record via `streamFiles()`, which yields each entry as `{ path: "index.ts", content: Buffer }`.
- Prepends the destination directory, making the final path `path.posix.join(destination, "index.ts")` → `/vercel/sandbox/workspace/index.ts` (since you're using a Vercel sandbox, the default destination is `/vercel/sandbox/workspace`).
- Writes them in batches of 20 via `sandbox.writeFiles(batch)`.
The actual write is delegated to the Vercel sandbox's own `writeFiles` method (`src/sandbox/vercel.ts` lines 78–86), which just calls `vercelSandbox.writeFiles(...)` from `@vercel/sandbox`.
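The prefix-and-batch step can be sketched as a small helper. This is a hypothetical stand-in, not the library's code: `FileEntry` is a simplified version of what `streamFiles()` yields, and the real implementation streams entries rather than collecting an array.

```ts
// Hypothetical sketch of the write path described above: prefix each
// file path with the destination directory, then group into batches
// of 20 for sandbox.writeFiles(batch).
import path from "node:path";

type FileEntry = { path: string; content: Buffer };

function toBatches(
  entries: FileEntry[],
  destination: string,
  batchSize = 20,
): FileEntry[][] {
  const prefixed = entries.map((e) => ({
    ...e,
    path: path.posix.join(destination, e.path),
  }));
  const batches: FileEntry[][] = [];
  for (let i = 0; i < prefixed.length; i += batchSize) {
    batches.push(prefixed.slice(i, i + batchSize));
  }
  return batches;
}
```

Each resulting batch would then be handed to `sandbox.writeFiles(batch)` in turn.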
### What if the file already exists?
There's no check for pre-existing files in this codebase. The code unconditionally calls `writeFiles` with whatever you pass in `files`. So:
- It will overwrite. The `@vercel/sandbox` `writeFiles` API is a straightforward "write these files at these paths" operation: if a file already exists at `/vercel/sandbox/workspace/index.ts`, it gets replaced with the new content.
- There's no diff, no merge, and no error. It's a blind overwrite.
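Conceptually, the overwrite semantics behave like a plain map from path to content. This is a toy in-memory model to illustrate last-write-wins, not the actual `@vercel/sandbox` implementation:

```ts
// Toy model of blind-overwrite semantics: a second write to the same
// path silently replaces the first, with no diff, merge, or error.
const fsModel = new Map<string, string>();

function writeFilesModel(files: { path: string; content: string }[]) {
  for (const f of files) fsModel.set(f.path, f.content);
}

// First write, then an unconditional overwrite of the same path.
writeFilesModel([
  { path: "/vercel/sandbox/workspace/index.ts", content: "console.log('old');" },
]);
writeFilesModel([
  { path: "/vercel/sandbox/workspace/index.ts", content: "console.log('hello');" },
]);
```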
### Bonus: inline files override `uploadDirectory`
If you happen to also use `uploadDirectory` alongside `files`, the `streamFiles` generator yields inline files first and tracks their paths. Directory files with the same relative path are skipped. So your inline files always win.
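A minimal sketch of that precedence rule, under the assumption of a hypothetical `mergeFiles` helper (the real generator streams entries rather than returning an array):

```ts
type Entry = { path: string; content: string };

// Inline files are emitted first and their paths recorded; any
// directory file whose relative path was already claimed by an
// inline file is skipped, so inline files always win.
function mergeFiles(inline: Entry[], fromDirectory: Entry[]): Entry[] {
  const seen = new Set<string>();
  const out: Entry[] = [];
  for (const e of inline) {
    seen.add(e.path);
    out.push(e);
  }
  for (const e of fromDirectory) {
    if (!seen.has(e.path)) out.push(e);
  }
  return out;
}
```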
What about Skills? How do they relate to the Vercel sandbox and the bash tool?
Now I have a clear picture. Here's the full explanation:
## Skills: Modular "Plugins" for the Bash Tool
Skills are a way to package reusable capabilities (scripts, instructions, resources) that an AI agent can discover at runtime and execute via the bash tool. The entry point is exported as `experimental_createSkillTool`, marking the feature as experimental.
### The Structure
A skill is just a directory with a `SKILL.md` file containing YAML frontmatter (`name`, `description`) and a markdown body with instructions. It can also contain scripts or any other files. For example:
```
skills/
  csv/
    SKILL.md        # frontmatter + instructions
    scripts/
      analyze.sh
      filter.sh
      select.sh
      sort.sh
  text/
    SKILL.md
    scripts/
      ...
```
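A hypothetical `SKILL.md` for the `csv` skill might look like the following (the field values and script flags are illustrative, not taken from the repo):

```markdown
---
name: csv
description: Analyze and transform CSV data
---

Use the scripts in ./scripts to work with CSV files:

- `analyze.sh <file>` prints summary statistics
- `filter.sh <file> <expr>` keeps matching rows
```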
### How Skills Connect to the Sandbox / Bash Tool
Skills don't directly interact with the sandbox. Instead, they act as a bridge that feeds files and instructions into `createBashTool`. Here's the flow (`src/skill-tool.ts`):
**Step 1: Discover & collect**
```ts
const { skill, skills, files, instructions } = await createSkillTool({
  skillsDirectory: "./skills",
});
```
This:
- Scans `skillsDirectory` for subdirectories containing `SKILL.md` files (`discoverSkills`).
- Reads all files from each skill directory into a `files: Record<string, string>`, keyed by relative sandbox paths like `./skills/csv/scripts/analyze.sh` → file content (lines 67–82 of `skill-tool.ts`).
- Generates `instructions` text listing the skill paths for the LLM (`generateSkillInstructions`).
- Creates a `skill` AI SDK tool that the LLM can call to load a specific skill's instructions on demand.
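The frontmatter split in step 1 could be sketched roughly like this. This is a simplified stand-in for whatever parser the repo actually uses, handling only flat `key: value` pairs:

```ts
// Minimal SKILL.md split: pull key: value pairs (e.g. name,
// description) out of the leading `---` frontmatter block and
// return the remaining markdown body as instructions.
function parseSkillMd(source: string): {
  frontmatter: Record<string, string>;
  body: string;
} {
  const match = source.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { frontmatter: {}, body: source };
  const frontmatter: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) {
      frontmatter[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
  }
  return { frontmatter, body: match[2].trim() };
}
```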
**Step 2: Pass into `createBashTool`**
```ts
const { tools } = await createBashTool({
  sandbox,                          // your Vercel sandbox (or omit for just-bash)
  files,                            // ← skill files get written into the sandbox
  extraInstructions: instructions,  // ← tells the LLM where skills live
});
```
This is exactly the same `files` mechanism you asked about earlier: the skill scripts (`analyze.sh`, `filter.sh`, etc.) get written into the sandbox at paths like `/vercel/sandbox/workspace/./skills/csv/scripts/analyze.sh`.
**Step 3: At runtime, the LLM uses both tools**
```ts
const result = await generateText({
  model,
  tools: { skill, ...tools }, // skill tool + bash/readFile/writeFile
  prompt: "Analyze this CSV...",
});
```
The flow at runtime is:
- The LLM sees the `skill` tool description, which lists available skills (e.g., `skill("csv")`: Analyze and transform CSV data).
- The LLM calls `skill({ skillName: "csv" })`. This reads the `SKILL.md` body from the local filesystem (not the sandbox!) and returns the instructions + file list (`src/tools/skill.ts` lines 57–82).
- Now the LLM knows how to use the skill (from the `SKILL.md` instructions) and runs the scripts via the `bash` tool, e.g., `bash ./skills/csv/scripts/analyze.sh sales.csv`.
- The scripts are already in the sandbox (written during step 2), so they execute.
### Key Takeaway
Skills are essentially a packaging convention + lazy-loading mechanism:
| Concern | How it works |
|---|---|
| Getting files into the sandbox | `createSkillTool` collects them → passes as `files` → `createBashTool` writes them (same mechanism as your `{ "index.ts": "..." }` example) |
| Telling the LLM what's available | The `instructions` string plus the `skill` tool's description lists all skills |
| Loading skill instructions | The `skill` tool reads `SKILL.md` from local disk (not the sandbox) and returns the body to the LLM on demand; this is lazy, so the LLM only gets instructions for skills it actually needs |
| Running skill scripts | The LLM uses the regular `bash` tool to execute scripts that are already in the sandbox |
So skills don't have any special sandbox integration; they just provide a structured way to bundle files + instructions and funnel them through the same `createBashTool({ files })` pipeline you were already looking at.
When you say:

> Scans `skillsDirectory` for subdirectories containing `SKILL.md` files (`discoverSkills`).

are these subdirectories in the sandbox, or on the application server?
Great clarification question. The answer is clear from the code:
On the application server (local filesystem), not in the sandbox.
Looking at `discoverSkills` in `src/skills/parser.ts`:
```ts
export async function discoverSkills(
  options: DiscoverSkillsOptions,
): Promise<DiscoveredSkill[]> {
  const { skillsDirectory, sandboxDestination } = options;
  const absoluteDir = path.resolve(skillsDirectory); // ← resolves against local fs

  let entries: string[];
  try {
    entries = await fs.readdir(absoluteDir); // ← node:fs, not sandbox.readFile
  } catch (error) { ... }
```
It uses `node:fs/promises` (the Node.js filesystem APIs) to `readdir`, `stat`, and `readFile` the skill directories. There's no sandbox involved at this stage at all.
Same story for the skill tool at runtime (`src/tools/skill.ts` lines 64–70):
```ts
execute: async ({ skillName }) => {
  const skillMdPath = path.join(skill.localPath, "SKILL.md");
  const content = await fs.readFile(skillMdPath, "utf-8"); // ← local fs again
```
When the LLM calls `skill({ skillName: "csv" })`, it reads `SKILL.md` from `skill.localPath`, which is the absolute path on your server where the skill was originally discovered.
So the full picture is:
| Operation | Where it happens |
|---|---|
| Discovering skills (`discoverSkills`) | Local filesystem (your server) |
| Reading `SKILL.md` instructions (the `skill` tool) | Local filesystem (your server) |
| Writing skill scripts so they can be executed | Into the sandbox (via `createBashTool({ files })`) |
| Running skill scripts (`bash ./skills/csv/scripts/analyze.sh`) | In the sandbox |