# Can I use vercel/ai's `createUIMessageStreamResponse` with oRPC?

Note: this post may have stale content, as HEAD (`ff2168fc`, "chore: update dependencies and pm (#1380)") is 11 commits ahead.

can i use vercel/ai 's createUIMessageStreamResponse using oRPC? and the OpenAPIHandler? it uses sse i think

**Claude Opus 4.5** commented:



## Answer: Using Vercel AI SDK's `createUIMessageStreamResponse` with oRPC

**Short answer:** Not directly, but oRPC has built-in support for the AI SDK that works similarly.

### Why `createUIMessageStreamResponse` isn't directly compatible

The Vercel AI SDK's `createUIMessageStreamResponse` returns a raw `Response` object in a specific SSE format that the AI SDK client (like `useChat`) expects. This is fundamentally different from how oRPC handles responses:

  1. **oRPC's Event Iterator pattern**: oRPC uses its own SSE protocol in which you return an async generator (Event Iterator) from your handler, and oRPC serializes it as events named `message`, `done`, and `error`.

  2. **The AI SDK's protocol**: The AI SDK uses its own data stream protocol with specific event types and headers (including `x-vercel-ai-ui-message-stream: v1`).

oRPC provides built-in AI SDK integration via `@orpc/server` (or `@orpc/ai-sdk`). Here's how it works:

#### Server Side

```ts
import { os, streamToEventIterator, type } from '@orpc/server'
import { convertToModelMessages, streamText, type UIMessage } from 'ai'
import { google } from '@ai-sdk/google'

export const chat = os
  .input(type<{ chatId: string; messages: UIMessage[] }>())
  .handler(async ({ input }) => {
    const result = streamText({
      model: google('gemini-1.5-flash'),
      system: 'You are a helpful assistant.',
      messages: convertToModelMessages(input.messages),
    })

    // Convert the AI SDK stream to oRPC's event iterator
    return streamToEventIterator(result.toUIMessageStream())
  })
```

#### Client Side

```ts
import { useChat } from '@ai-sdk/react'
import { eventIteratorToUnproxiedDataStream } from '@orpc/client'
import { client } from './client' // your oRPC client instance

export function Chat() {
  const { messages, sendMessage, status } = useChat({
    transport: {
      async sendMessages(options) {
        // Convert oRPC's event iterator back to an AI SDK data stream
        return eventIteratorToUnproxiedDataStream(await client.chat({
          chatId: options.chatId,
          messages: options.messages,
        }, { signal: options.abortSignal }))
      },
      reconnectToStream(options) {
        throw new Error('Unsupported')
      },
    },
  })
  // ... rest of your component
}
```

### This Works with OpenAPIHandler Too!

Yes, this works with OpenAPIHandler. Under the hood:

  1. When you return an async iterator from your handler, oRPC detects it and sets Content-Type: text/event-stream (source)
  2. The OpenAPI generator correctly documents it as text/event-stream responses (source)
  3. The response is streamed using standard SSE format
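For illustration, here is a framework-free sketch of the kind of SSE response this produces on the wire, hand-rolled with the standard `Response` API. The helper name and event payloads are made up for the example; oRPC's actual serialization lives in its codec.

```typescript
// Hand-rolled sketch of the SSE payload an event-iterator procedure streams
// over HTTP. Illustrative only - oRPC's real encoder handles this for you.
function toSSEResponse(events: { event: string; data: unknown }[]): Response {
  const payload = events
    .map((e) => `event: ${e.event}\ndata: ${JSON.stringify(e.data)}\n\n`)
    .join('')
  return new Response(payload, {
    headers: { 'content-type': 'text/event-stream' },
  })
}

const res = toSSEResponse([
  { event: 'message', data: { order: 1 } },
  { event: 'done', data: { order: 2 } },
])

console.log(res.headers.get('content-type')) // text/event-stream
```

Because this is plain SSE, any standards-compliant SSE client can read it; it is only the AI SDK's expected message shapes that differ.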

### Key Helper Functions

| Function | Package | Purpose |
| --- | --- | --- |
| `streamToEventIterator` | `@orpc/server` or `@orpc/ai-sdk` | Convert AI SDK stream → oRPC event iterator |
| `eventIteratorToStream` | `@orpc/client` or `@orpc/ai-sdk` | Convert oRPC event iterator → `ReadableStream` |
| `eventIteratorToUnproxiedDataStream` | `@orpc/client` or `@orpc/ai-sdk` | Same as above, but unproxies data (required for the AI SDK's `structuredClone`) |

### Important Note

Use `eventIteratorToUnproxiedDataStream` instead of `eventIteratorToStream`: the AI SDK internally uses `structuredClone`, which doesn't support proxied data. oRPC may proxy events to carry metadata, so you need to unproxy them before handing the stream to the AI SDK.
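The `structuredClone` limitation is easy to demonstrate without any oRPC code: `Proxy` objects cannot be structured-cloned, while a plain copy of the same data can.

```typescript
// Minimal, framework-free demonstration: Proxy objects cannot be
// structured-cloned, which is why proxied oRPC event payloads must be
// unproxied before the AI SDK clones them internally.
const plain = { delta: 'Hello' }
const proxied = new Proxy(plain, {})

let cloneFailed = false
try {
  structuredClone(proxied) // throws DataCloneError
} catch {
  cloneFailed = true
}

// Copying to a plain object ("unproxying") makes cloning succeed
const unproxied = { ...proxied }
const clone = structuredClone(unproxied)

console.log(cloneFailed, clone.delta) // true Hello
```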

**Julian Benegas** commented:

if the client needs to do something different other than pointing useChat to the correct endpoint, it's trash.

> The response is streamed using standard SSE format

so why would it need the weird client side helper?

**Claude Opus 4.5** commented:

Ah, now I see the real issue. The protocols are completely different:

## oRPC SSE Format

```
event: message
data: {"order": 1}

event: done
data: {"order": 3}
```

## AI SDK v5+ SSE Format (Data Stream Protocol)

```
data: {"type":"message-start","value":{"messageId":"..."}}

data: {"type":"text-start","value":{"id":"..."}}

data: {"type":"text-delta","value":{"id":"...","delta":"Hello"}}

data: {"type":"text-end","value":{"id":"..."}}

data: {"type":"finish-message","value":{...}}

data: [DONE]
```

The AI SDK has its own proprietary protocol with specific message types like `message-start`, `text-delta`, `text-end`, and `finish-message`, plus it requires a special header `x-vercel-ai-ui-message-stream: v1`.

You're right to be frustrated. oRPC uses standard SSE with `event: message`/`done`/`error` and JSON data. The AI SDK's `useChat` expects a completely different protocol structure that's specific to Vercel.

So no, you cannot just point `useChat` at an oRPC endpoint and have it work - the wire formats are fundamentally incompatible. That's why oRPC's docs show the conversion helpers.

The client-side helper is needed because:

  1. oRPC's client receives the stream and converts it to an async iterator (oRPC's protocol)
  2. `eventIteratorToUnproxiedDataStream` converts that back to a `ReadableStream`
  3. The AI SDK's `transport` option then reads that stream

It's a workaround for two different SSE protocols talking past each other. If you want zero client-side changes, you'd need oRPC to natively emit the AI SDK's proprietary format - which it doesn't (and probably shouldn't, since it's Vercel-specific).
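The mismatch is visible even with a toy SSE parser: both streams are syntactically valid SSE, but oRPC names its events via the SSE `event:` field, while the AI SDK puts a `type` discriminator inside the JSON payload. The parser below is a sketch (it ignores `id:`, `retry:`, and multi-line `data:`), and the sample payloads are illustrative.

```typescript
// Toy SSE record parser (illustrative; not spec-complete).
function parseSSE(chunk: string): { event: string; data: string }[] {
  return chunk
    .split('\n\n')
    .filter(Boolean)
    .map((record) => {
      const fields: Record<string, string> = {}
      for (const line of record.split('\n')) {
        const i = line.indexOf(':')
        fields[line.slice(0, i)] = line.slice(i + 1).trim()
      }
      return { event: fields['event'] ?? 'message', data: fields['data'] ?? '' }
    })
}

// oRPC-style stream: named SSE events, plain JSON payloads
const orpc = parseSSE('event: message\ndata: {"order":1}\n\nevent: done\ndata: {"order":3}\n\n')

// AI SDK-style stream: default "message" events, type discriminator inside the JSON
const aiSdk = parseSSE('data: {"type":"text-delta","value":{"delta":"Hello"}}\n\ndata: [DONE]\n\n')

console.log(orpc.map((e) => e.event)) // [ 'message', 'done' ]
console.log(JSON.parse(aiSdk[0].data).type) // text-delta
```

A client written against one of these shapes has nothing useful to do with the other, which is exactly why a translation layer is required somewhere.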

**Julian Benegas** commented:

is there an escape hatch in oRPC to support a raw response?

**Claude Opus 4.5** commented:

Based on my research, there is no escape hatch in oRPC to return a raw `Response` directly from a procedure handler. The handler's return value always goes through oRPC's codec/serialization layer.

## Your Options

### 1. Bypass oRPC entirely for that route

Since you control the routing, just don't route AI chat through oRPC:

```ts
// app/api/chat/route.ts (Next.js example)
import { createUIMessageStreamResponse } from 'ai'

export async function POST(request: Request) {
  // Handle directly with the AI SDK - no oRPC involved
  return createUIMessageStreamResponse({
    // ...your AI SDK config
  })
}
```

Then use oRPC for everything else. This is the cleanest solution.

### 2. Use `outputStructure: 'detailed'` (doesn't help here)

This lets you control the status, headers, and body, but the body still goes through oRPC's serialization - it won't let you return a raw streaming `Response` with a custom SSE format.
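For reference, under `outputStructure: 'detailed'` a handler returns a plain object describing the response rather than a bare body. A sketch of that shape, shown here without oRPC itself (field values are made up):

```typescript
// Sketch only: the object shape a handler returns with outputStructure: 'detailed'.
// status and headers map onto the HTTP response, but `body` is still run through
// oRPC's serialization - it cannot carry a pre-encoded custom SSE stream.
const detailedOutput = {
  status: 200,
  headers: { 'x-request-id': 'req_123' },
  body: { ok: true },
}

console.log(detailedOutput.status, detailedOutput.body.ok) // 200 true
```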

### 3. Feature request?

oRPC doesn't currently support returning a raw `Response` that bypasses its codec. This would be a reasonable feature request - something like:

```ts
// Hypothetical API - this does not exist today
const chat = os.handler(async () => {
  return Response.raw(createUIMessageStreamResponse({ /* ... */ }))
})
```

**Bottom line:** For the Vercel AI SDK's `useChat` to work out of the box (just pointing at an endpoint), you need to serve the AI SDK's proprietary SSE format directly. oRPC can't do that today, so your best bet is to handle that specific endpoint outside of oRPC.

