How did they implement background agent access?


how did they implement this with opencode, if you had to guess? https://builders.ramp.com/post/why-we-built-our-background-agent

skills? plugins? MCP? i mean like the fact that they have sentry, etc access. how do they auth?

Claude Opus 4.5
commented

Now I have a clear picture. Let me summarize how Ramp implemented this with OpenCode:


## How Ramp Built "Inspect" with OpenCode

Based on the blog post and the OpenCode codebase, here's how they likely integrated Sentry, Datadog, LaunchDarkly, etc.:

### 1. MCP Servers (Primary Method for External Services)

For services like Sentry, they use MCP (Model Context Protocol) with OAuth authentication. OpenCode has built-in support for this:

```jsonc
// opencode.json
{
  "mcp": {
    "sentry": {
      "type": "remote",
      "url": "https://mcp.sentry.dev/mcp",
      "oauth": {}
    }
  }
}
```

Authentication flow:

  • Run `opencode mcp auth sentry`
  • Opens browser for OAuth flow
  • Tokens are stored in `~/.local/share/opencode/mcp-auth.json`
  • Tokens get auto-refreshed via the `McpOAuthProvider` class (in `packages/opencode/src/mcp/oauth-provider.ts`)

The MCP OAuth implementation handles:

  • Dynamic client registration
  • Token refresh
  • Code verifier (PKCE)
  • State parameter for security
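The PKCE and state mechanics listed above can be sketched generically. This is an illustrative TypeScript sketch of what such an OAuth provider generates, not OpenCode's actual `McpOAuthProvider` internals:

```typescript
import { createHash, randomBytes } from "node:crypto"

// RFC 7636 base64url encoding: standard base64 with URL-safe characters
// and no padding.
function base64url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "")
}

// The client keeps `verifier` secret, sends only `challenge` in the
// authorization request, and reveals `verifier` at token exchange to
// prove it initiated the flow.
export function createPkcePair() {
  const verifier = base64url(randomBytes(32)) // 43-char URL-safe string
  const challenge = base64url(createHash("sha256").update(verifier).digest())
  return { verifier, challenge, method: "S256" as const }
}

// A random `state` value is echoed back on the OAuth callback to guard
// against CSRF.
export function createState(): string {
  return base64url(randomBytes(16))
}
```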

### 2. Custom Tools via Plugins

For internal services or custom integrations (Datadog, LaunchDarkly, Buildkite, etc.), they'd write plugins:

```ts
// .opencode/plugin/datadog.ts
import { tool } from "@opencode-ai/plugin"

export const DatadogPlugin = async ({ $, client }) => {
  return {
    tool: {
      datadog_query: tool({
        description: "Query Datadog metrics and logs",
        args: {
          query: tool.schema.string(),
          timeRange: tool.schema.string().optional(),
        },
        async execute(args, ctx) {
          // Use env vars or stored credentials
          const result = await $`dd-cli query ${args.query}`.text()
          return result
        }
      })
    }
  }
}
```

### 3. Authentication Approaches

Looking at the plugin system, there are multiple auth methods:

a) Plugin auth hook (for LLM providers, but pattern works for services):

```ts
// From packages/plugin/src/index.ts
auth: {
  provider: "my-service",
  methods: [
    {
      type: "oauth",
      label: "Sign in with MyService",
      authorize(inputs) {
        // Return OAuth URL and callback
        return { url: "...", instructions: "...", method: "auto", callback: () => {...} }
      }
    },
    {
      type: "api",
      label: "Use API key",
      prompts: [{ type: "text", key: "apiKey", message: "Enter API key" }],
      authorize(inputs) {
        return { type: "success", key: inputs.apiKey }
      }
    }
  ]
}
```

b) Environment variables - For background agents like Ramp's:

```jsonc
// In Modal sandbox, they'd inject secrets as env vars
{
  "headers": {
    "Authorization": "Bearer {env:DATADOG_API_KEY}"
  }
}
```
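As a rough sketch of what that `{env:VAR}` interpolation involves at config-load time (illustrative only, not OpenCode's actual loader):

```typescript
// Expand "{env:VAR}" placeholders in a config string, failing loudly if a
// referenced variable is missing (hypothetical helper for illustration).
export function expandEnvPlaceholders(
  value: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return value.replace(/\{env:([A-Z0-9_]+)\}/g, (_match, name) => {
    const resolved = env[name]
    if (resolved === undefined) {
      throw new Error(`missing environment variable: ${name}`)
    }
    return resolved
  })
}

// expandEnvPlaceholders("Bearer {env:DATADOG_API_KEY}", { DATADOG_API_KEY: "abc" })
// → "Bearer abc"
```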

c) Per-user tokens via GitHub OAuth - From the blog:

"Consider using GitHub authentication if your code lives in GitHub, as this will give you a user token that you can then use to open a pull request on behalf of the user."

### 4. Skills for Internal Workflows

```markdown
<!-- .opencode/skills/deploy/SKILL.md -->
---
name: deploy
description: Deploy code to production following Ramp's deployment process
---

## Steps
1. Run tests: `bazel test //...`
2. Check feature flags via LaunchDarkly
3. Create deploy PR
4. Monitor in Datadog
```

### 5. The Server-First Architecture

This is key to how Ramp scales it. OpenCode runs as a server:

```ts
// From packages/sdk/js/src/server.ts
spawn(`opencode`, [`serve`, `--hostname=127.0.0.1`, `--port=4096`])
```

Then multiple clients (Slack, web, Chrome extension) all talk to the same OpenCode server via the SDK:

```ts
import { createOpencodeClient } from "@opencode-ai/sdk"

const client = createOpencodeClient({
  baseUrl: "http://localhost:4096"
})

// Start a session, send messages, etc.
await client.session.create(...)
await client.chat.send(...)
```

### TL;DR for Ramp's Approach

| Integration | Method | Auth |
| --- | --- | --- |
| Sentry | MCP server (`https://mcp.sentry.dev/mcp`) | OAuth via `opencode mcp auth` |
| Datadog | Custom plugin tool or MCP | API key in env vars |
| LaunchDarkly | Custom plugin tool | API key |
| GitHub | Built-in + user OAuth token | OAuth for PRs on user's behalf |
| Buildkite | Custom plugin tool | API key or webhook |
| Slack | Client to OpenCode server | Bot token + user context |

The key insight is that OpenCode's plugin system (tool hook) lets you add arbitrary tools that can call any API, while MCP servers provide a standardized way to connect to services that support the protocol (with built-in OAuth). For a background agent like Ramp's, they'd pre-configure credentials in the Modal sandbox environment.

Julian Benegas
commented

interesting. so there's a single opencode server serving all users. that means that for MCP, there's a single authentication (header/token) for all users, right? then the user-specific things should be with the auth thing in plugins?

Claude Opus 4.5
commented

Now I have the complete picture. The key insight is in the blog post:

"Each session runs in a sandboxed VM on Modal"

This is crucial. Let me explain:


## You're Right - It's NOT a Single Shared Server

Based on the blog post architecture, Ramp runs one OpenCode server per session/sandbox, not a shared server for all users. Here's how it works:

### Architecture

```text
┌─────────────────────────────────────────────────────────────────┐
│                     Ramp's Central API                          │
│            (Cloudflare Durable Objects per session)             │
│  - Stores session state, SQLite per session                     │
│  - Routes requests to appropriate Modal sandbox                 │
│  - Handles user authentication (GitHub OAuth)                   │
└──────────────────────────┬──────────────────────────────────────┘
                           │
        ┌──────────────────┼──────────────────┐
        │                  │                  │
        ▼                  ▼                  ▼
┌───────────────┐  ┌───────────────┐  ┌───────────────┐
│ Modal Sandbox │  │ Modal Sandbox │  │ Modal Sandbox │
│  (Session A)  │  │  (Session B)  │  │  (Session C)  │
│               │  │               │  │               │
│ ┌───────────┐ │  │ ┌───────────┐ │  │ ┌───────────┐ │
│ │ OpenCode  │ │  │ │ OpenCode  │ │  │ │ OpenCode  │ │
│ │  Server   │ │  │ │  Server   │ │  │ │  Server   │ │
│ └───────────┘ │  │ └───────────┘ │  │ └───────────┘ │
│               │  │               │  │               │
│ Pre-injected: │  │ Pre-injected: │  │ Pre-injected: │
│ - SENTRY_TOKEN│  │ - SENTRY_TOKEN│  │ - SENTRY_TOKEN│
│ - DD_API_KEY  │  │ - DD_API_KEY  │  │ - DD_API_KEY  │
│ - User context│  │ - User context│  │ - User context│
└───────────────┘  └───────────────┘  └───────────────┘
```
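The routing responsibility of the central API in this diagram reduces to a session-to-sandbox lookup. All names here are hypothetical; Ramp's actual Durable Object implementation is not public:

```typescript
// Hypothetical session router: the central API records which sandbox
// serves each session, then proxies later requests to that base URL.
const sandboxBySession = new Map<string, string>()

export function registerSandbox(sessionId: string, baseUrl: string): void {
  sandboxBySession.set(sessionId, baseUrl)
}

export function resolveSandbox(sessionId: string): string {
  const url = sandboxBySession.get(sessionId)
  if (!url) throw new Error(`no sandbox registered for session ${sessionId}`)
  return url
}
```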

### How Auth Works for Sentry/Datadog/etc.

Two approaches:

#### 1. Shared Service Tokens (Org-Wide) - Most Likely for Sentry/Datadog

For observability tools, they probably use a single org-wide service token:

```ts
// In Modal sandbox image build or at session start
process.env.SENTRY_AUTH_TOKEN = "org-level-token"  // Same for all sandboxes
process.env.DATADOG_API_KEY = "shared-api-key"
process.env.LAUNCHDARKLY_SDK_KEY = "shared-sdk-key"
```

Then in OpenCode config:

```json
{
  "mcp": {
    "sentry": {
      "type": "remote",
      "url": "https://mcp.sentry.dev/mcp",
      "headers": {
        "Authorization": "Bearer {env:SENTRY_AUTH_TOKEN}"
      }
    }
  }
}
```

This works because:

  • Sentry/Datadog/LaunchDarkly are read-mostly for debugging
  • Everyone at Ramp should see the same error data anyway
  • It's an internal tool - no need for per-user access control to error logs

#### 2. Per-User Tokens - For GitHub PRs

For user-specific actions like creating PRs, the blog explicitly says:

"Consider using GitHub authentication... this will give you a user token that you can then use to open a pull request on behalf of the user."

"Our setup is to have the sandbox push the changes... then send an event to the API with the branch name and session ID. The API will then use the user's GitHub token to call GitHub's pull request API."

So their flow is:

  1. User logs in via GitHub OAuth to Ramp's API
  2. Ramp's API stores the user's GitHub token
  3. When OpenCode in the sandbox wants to create a PR, it calls back to Ramp's API
  4. Ramp's API uses that specific user's token to create the PR

This is likely done via a custom plugin tool, not MCP:

```ts
// .opencode/plugin/github-pr.ts
import { tool } from "@opencode-ai/plugin"

export const GitHubPRPlugin = async ({ $ }) => {
  return {
    tool: {
      create_pull_request: tool({
        description: "Create a PR for the current changes",
        args: {
          title: tool.schema.string(),
          body: tool.schema.string(),
        },
        async execute(args, ctx) {
          // The sandbox has already pushed the branch; resolve its name locally
          const branch = (await $`git rev-parse --abbrev-ref HEAD`.text()).trim()
          // Call back to Ramp's central API, which holds the user's GitHub token
          const response = await fetch("https://inspect.ramp.com/api/pr", {
            method: "POST",
            headers: {
              "Authorization": `Bearer ${process.env.SESSION_TOKEN}`,
              "Content-Type": "application/json"
            },
            body: JSON.stringify({
              sessionId: ctx.sessionID,
              title: args.title,
              body: args.body,
              branch
            })
          })
          return await response.text()
        }
      })
    }
  }
}
```
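For completeness, the API side of that flow (step 4, where the central service holds the user's GitHub token) might look roughly like this. The endpoint shape follows GitHub's documented REST API for creating pull requests; the surrounding names and token plumbing are assumptions:

```typescript
interface CreatePrInput {
  owner: string
  repo: string
  title: string
  body: string
  branch: string // head branch the sandbox pushed
  base: string   // e.g. "main"
}

// Build the GitHub request separately from sending it, so the payload can
// be inspected and tested without network access.
export function buildPrRequest(input: CreatePrInput, userToken: string) {
  return {
    url: `https://api.github.com/repos/${input.owner}/${input.repo}/pulls`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${userToken}`,
        Accept: "application/vnd.github+json",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        title: input.title,
        body: input.body,
        head: input.branch,
        base: input.base,
      }),
    },
  }
}

// The central API resolves the user's stored token by session, then calls
// GitHub on their behalf (token lookup omitted here).
export async function createPullRequest(input: CreatePrInput, userToken: string) {
  const { url, init } = buildPrRequest(input, userToken)
  const res = await fetch(url, init)
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`)
  return res.json()
}
```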

### The Key Insight

OpenCode's `Instance.state()` pattern means MCP auth is per-process (stored in `~/.local/share/opencode/mcp-auth.json`). Since each Modal sandbox:

  • Is a separate VM/container
  • Has its own filesystem
  • Runs its own OpenCode server

...the MCP auth tokens are per-sandbox, not shared across users.

For shared service credentials (Sentry, Datadog), they're baked into the sandbox image or injected at startup as env vars.

For user-specific auth (GitHub), they bypass OpenCode's MCP auth entirely and use their own API layer that knows which user started the session.

