How to create chat bubble UI with version 6?

This post might have stale content, as main is 211 commits ahead.

create a simple chat like ui with version 6, where each turn is in like a chat bubble

Claude Opus 4.5 commented:

Now I have enough information to create a simple chat UI with chat bubbles using the Vercel AI SDK v6. Here's a complete example:


## Simple Chat UI with Bubbles (AI SDK v6)

Here's a complete example with a Next.js App Router:

### 1. API Route (`app/api/chat/route.ts`)

```ts
import { openai } from '@ai-sdk/openai';
import {
  consumeStream,
  convertToModelMessages,
  streamText,
  UIMessage,
} from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: await convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    consumeSseStream: consumeStream,
  });
}
```

### 2. Chat Page Component (`app/page.tsx`)

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const { messages, sendMessage, status, stop, error } = useChat();
  const [input, setInput] = useState('');

  const isLoading = status === 'streaming' || status === 'submitted';

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (!input.trim() || isLoading) return;
    sendMessage({ text: input });
    setInput('');
  };

  return (
    <div className="flex flex-col h-screen bg-gray-100">
      {/* Header */}
      <header className="bg-white shadow-sm p-4">
        <h1 className="text-xl font-semibold text-gray-800">AI Chat</h1>
      </header>

      {/* Messages Container */}
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.length === 0 && (
          <div className="text-center text-gray-500 mt-8">
            Start a conversation by typing a message below.
          </div>
        )}

        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${
              message.role === 'user' ? 'justify-end' : 'justify-start'
            }`}
          >
            <div
              className={`max-w-[70%] rounded-2xl px-4 py-2 ${
                message.role === 'user'
                  ? 'bg-blue-500 text-white rounded-br-md'
                  : 'bg-white text-gray-800 shadow-sm rounded-bl-md'
              }`}
            >
              {/* Render message parts */}
              {message.parts.map((part, index) => {
                if (part.type === 'text') {
                  return (
                    <p key={index} className="whitespace-pre-wrap">
                      {part.text}
                    </p>
                  );
                }
                return null;
              })}
            </div>
          </div>
        ))}

        {/* Loading indicator */}
        {status === 'submitted' && (
          <div className="flex justify-start">
            <div className="bg-white text-gray-500 rounded-2xl rounded-bl-md px-4 py-2 shadow-sm">
              <div className="flex space-x-1">
                <span className="animate-bounce">●</span>
                <span className="animate-bounce" style={{ animationDelay: '0.1s' }}>●</span>
                <span className="animate-bounce" style={{ animationDelay: '0.2s' }}>●</span>
              </div>
            </div>
          </div>
        )}

        {/* Error message */}
        {error && (
          <div className="flex justify-center">
            <div className="bg-red-100 text-red-600 rounded-lg px-4 py-2">
              Error: {error.message}
            </div>
          </div>
        )}
      </div>

      {/* Input Form */}
      <form
        onSubmit={handleSubmit}
        className="bg-white border-t p-4 flex gap-2"
      >
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type a message..."
          disabled={isLoading}
          className="flex-1 border border-gray-300 rounded-full px-4 py-2 focus:outline-none focus:ring-2 focus:ring-blue-500 disabled:bg-gray-100"
        />
        {isLoading ? (
          <button
            type="button"
            onClick={stop}
            className="bg-red-500 text-white rounded-full px-6 py-2 hover:bg-red-600 transition"
          >
            Stop
          </button>
        ) : (
          <button
            type="submit"
            disabled={!input.trim()}
            className="bg-blue-500 text-white rounded-full px-6 py-2 hover:bg-blue-600 transition disabled:opacity-50 disabled:cursor-not-allowed"
          >
            Send
          </button>
        )}
      </form>
    </div>
  );
}
```

### 3. Global Styles (optional Tailwind additions in `app/globals.css`)

```css
@tailwind base;
@tailwind components;
@tailwind utilities;

@keyframes bounce {
  0%, 60%, 100% {
    transform: translateY(0);
  }
  30% {
    transform: translateY(-4px);
  }
}

.animate-bounce {
  animation: bounce 1.4s infinite;
}
```

## Key v6 Changes

  1. `useChat` import - now from `@ai-sdk/react` (not `ai/react`).

  2. `sendMessage({ text })` - replaces the old `handleSubmit` + `input` pattern; you pass an object with `text`.

  3. `status` - can be `'ready'`, `'submitted'`, or `'streaming'`. Use this instead of the old `isLoading` boolean.

  4. `message.parts` - messages now use a `parts` array instead of a `content` string. Each part has a `type` (e.g. `'text'`, `'reasoning'`, `'tool-invocation'`).

  5. `convertToModelMessages` - server-side helper that converts `UIMessage[]` to model messages.

  6. `toUIMessageStreamResponse` - new method name (was `toDataStreamResponse`).
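Point 4 is the biggest mental-model shift coming from the old `content` string. As a minimal sketch (the `Part` type below is a simplified stand-in for illustration, not the SDK's actual part union), pulling the plain text out of a message looks like:

```typescript
// Simplified stand-in for the SDK's part union; real parts carry more fields.
type TextPart = { type: 'text'; text: string };
type Part = TextPart | { type: string };

// Join the text parts of a message, skipping reasoning, tool parts, etc.
function messageText(parts: Part[]): string {
  return parts
    .filter((p): p is TextPart => p.type === 'text')
    .map(p => p.text)
    .join('');
}
```

Handy if you need a flat string for things like copy-to-clipboard or search, where the structured parts don't matter.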

This gives you a clean, modern chat interface with:

  • User messages on the right (blue bubbles)
  • AI messages on the left (white bubbles)
  • Animated loading dots
  • Stop button during streaming
  • Error handling
Julian Benegas commented:

if i want to render the name/role before each turn? isn't it weird that UIMessage is like an array and there's no notion of "turn"?

Claude Opus 4.5 commented:

You're absolutely right - there's no explicit "turn" concept in the SDK. The messages array is just a flat list of UIMessage objects, each with a role. This is a common pattern in chat APIs (OpenAI, Anthropic, etc. all use flat message arrays), but it does make UI rendering a bit manual.

## Rendering with Role/Name Labels

For displaying names/roles, you can use `message.role` directly:

```tsx
{messages.map((message) => (
  <div
    key={message.id}
    className={`flex flex-col ${
      message.role === 'user' ? 'items-end' : 'items-start'
    }`}
  >
    {/* Role/Name Label */}
    <span className="text-xs text-gray-500 mb-1 px-2">
      {message.role === 'user' ? 'You' : 'Assistant'}
    </span>

    {/* Chat Bubble */}
    <div
      className={`max-w-[70%] rounded-2xl px-4 py-2 ${
        message.role === 'user'
          ? 'bg-blue-500 text-white rounded-br-md'
          : 'bg-white text-gray-800 shadow-sm rounded-bl-md'
      }`}
    >
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <p key={index} className="whitespace-pre-wrap">{part.text}</p>;
        }
        return null;
      })}
    </div>
  </div>
))}
```

## Why No "Turn" Abstraction?

The flat array design is intentional for a few reasons:

  1. API compatibility - LLM APIs (OpenAI, Anthropic, etc.) all expect flat message arrays
  2. Flexibility - Not all conversations are strict user→assistant alternations (think: system messages, tool calls, multi-step reasoning)
  3. Simplicity - The SDK stays close to the wire format
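To make the flexibility point concrete, here's a hypothetical flat history (the shapes are illustrative, not the SDK's exact types) where strict user→assistant alternation doesn't hold:

```typescript
// Illustrative only: a flat message list with a system message and a
// multi-part assistant reply that embeds a tool call.
const history = [
  { role: 'system', parts: [{ type: 'text', text: 'You are terse.' }] },
  { role: 'user', parts: [{ type: 'text', text: "What's the weather?" }] },
  {
    role: 'assistant',
    parts: [
      { type: 'text', text: 'Checking...' },
      { type: 'tool-weather' }, // the tool call lives inside the same message
      { type: 'text', text: 'Sunny.' },
    ],
  },
];

const roles = history.map(m => m.role); // ['system', 'user', 'assistant']
```

A "turn" grouping would have to make a judgment call about where the system message belongs, which is exactly the kind of decision the SDK leaves to your UI.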

## If You Want Turns

You could create a helper to group messages into turns:

```ts
type Turn = {
  user: UIMessage | null;
  assistant: UIMessage | null;
};

function groupIntoTurns(messages: UIMessage[]): Turn[] {
  const turns: Turn[] = [];
  let currentTurn: Turn = { user: null, assistant: null };

  for (const message of messages) {
    if (message.role === 'user') {
      // Start a new turn
      if (currentTurn.user !== null) {
        turns.push(currentTurn);
      }
      currentTurn = { user: message, assistant: null };
    } else if (message.role === 'assistant') {
      currentTurn.assistant = message;
      turns.push(currentTurn);
      currentTurn = { user: null, assistant: null };
    }
  }

  // Push any incomplete turn
  if (currentTurn.user || currentTurn.assistant) {
    turns.push(currentTurn);
  }

  return turns;
}
```

Then render:

```tsx
{groupIntoTurns(messages).map((turn, i) => (
  <div key={i} className="space-y-2 mb-6">
    {turn.user && <MessageBubble message={turn.user} />}
    {turn.assistant && <MessageBubble message={turn.assistant} />}
  </div>
))}
```

But honestly, for most chat UIs, the flat array works fine - you just render each message with its role label. The "turn" concept is more of a UX/visual concern than a data model one.

Julian Benegas commented:
```tsx
{messages.map((message) => (
  <div
    key={message.id}
    className={`flex flex-col ${
      message.role === 'user' ? 'items-end' : 'items-start'
    }`}
  >
    {/* Role/Name Label */}
    <span className="text-xs text-gray-500 mb-1 px-2">
      {message.role === 'user' ? 'You' : 'Assistant'}
    </span>

    {/* Chat Bubble */}
    <div
      className={`max-w-[70%] rounded-2xl px-4 py-2 ${
        message.role === 'user'
          ? 'bg-blue-500 text-white rounded-br-md'
          : 'bg-white text-gray-800 shadow-sm rounded-bl-md'
      }`}
    >
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <p key={index} className="whitespace-pre-wrap">{part.text}</p>;
        }
        return null;
      })}
    </div>
  </div>
))}
```

this is fine, but i think that if i build an "agent", then i'll have many messages coming in from the assistant? am i misunderstanding?

Claude Opus 4.5 commented:

Here's the key insight:

## You're Right - But It's Different Than You Think

For agents with multi-step tool calls, you DON'T get multiple assistant messages. Instead, you get one assistant message with multiple parts, including step-start parts that mark step boundaries.

Look at this structure from the tests:

```ts
{
  role: 'assistant',
  parts: [
    { type: 'step-start' },
    { type: 'text', text: 'i am gonna use tool1' },
    { type: 'tool-screenshot', state: 'output-available', input: {...}, output: 'result-1' },
    { type: 'step-start' },  // <-- Step boundary!
    { type: 'text', text: 'i am gonna use tool2 and tool3' },
    { type: 'tool-screenshot', ... },
    { type: 'tool-screenshot', ... },
    { type: 'step-start' },  // <-- Another step boundary!
    { type: 'tool-screenshot', ... },
    { type: 'step-start' },
    { type: 'text', text: 'final response' },
  ],
}
```

So the model is:

  • One user message → One assistant message (per turn)
  • But the assistant message can have many parts representing multiple steps
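If you want to render each step as its own sub-bubble, you can split the `parts` array on those `'step-start'` markers. A minimal sketch (the `Part` type here is a simplified stand-in, not the SDK's actual type):

```typescript
// Minimal part shape for illustration; real parts carry more fields.
type Part = { type: string; [key: string]: unknown };

// Group an assistant message's parts into steps, using 'step-start' as the boundary.
function splitIntoSteps(parts: Part[]): Part[][] {
  const steps: Part[][] = [];
  for (const part of parts) {
    if (part.type === 'step-start') {
      steps.push([]); // a boundary opens a new step; the marker itself isn't content
      continue;
    }
    if (steps.length === 0) steps.push([]); // tolerate parts before the first marker
    steps[steps.length - 1].push(part);
  }
  return steps;
}
```

For the example message above, this yields four steps: `[text, tool]`, `[text, tool, tool]`, `[tool]`, and `[text]`.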

The example from the SDK does this to only show the role when it changes:

```tsx
// used to only render the role when it changes:
let lastRole: string | undefined = undefined;

{messages?.map(m => {
  const isNewRole = m.role !== lastRole;
  lastRole = m.role;

  return (
    <div key={m.id}>
      {isNewRole && <strong>{`${m.role}: `}</strong>}
      {/* render parts... */}
    </div>
  );
})}
```

## For Your Bubble UI

The simple bubble code I showed earlier should work fine because each message IS a single turn:

```tsx
{messages.map((message) => (
  <div
    key={message.id}
    className={`flex flex-col ${
      message.role === 'user' ? 'items-end' : 'items-start'
    }`}
  >
    <span className="text-xs text-gray-500 mb-1 px-2">
      {message.role === 'user' ? 'You' : 'Assistant'}
    </span>

    <div className={`bubble ${message.role === 'user' ? 'user-bubble' : 'assistant-bubble'}`}>
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <p key={index}>{part.text}</p>;
        }
        if (part.type === 'step-start') {
          // Optionally render a visual divider between steps
          return index > 0 ? <hr key={index} className="my-2 border-gray-200" /> : null;
        }
        if (part.type.startsWith('tool-')) {
          // Render tool UI...
          return null;
        }
        return null;
      })}
    </div>
  </div>
))}
```

The only case where you'd see consecutive assistant messages is if you're manually adding them or using some special persistence pattern. In normal useChat flow, it's always alternating user → assistant → user → assistant.

Julian Benegas commented:

you said:

The example from the SDK does this to only show the role when it changes:

```tsx
// used to only render the role when it changes:
let lastRole: string | undefined = undefined;

{messages?.map(m => {
  const isNewRole = m.role !== lastRole;
  lastRole = m.role;

  return (
    <div key={m.id}>
      {isNewRole && <strong>{`${m.role}: `}</strong>}
      {/* render parts... */}
    </div>
  );
})}
```

so why would it need to do that if this would be true?

> One user message → One assistant message (per turn)

Claude Opus 4.5 commented:

Honestly, you're right to question this; only one example uses that pattern. Based on my investigation:

##The Truth

  1. In normal `useChat` flow: you will NOT get consecutive messages with the same role. The SDK handles this explicitly - when streaming a response, if the last message is already from the assistant, it appends parts to it rather than creating a new message.

  2. The `lastRole` pattern in that example is likely unnecessary, or at least it's defensive coding for edge cases that don't typically occur in standard usage.

  3. When COULD you get consecutive same-role messages?

    • If you manually call `setMessages()` with malformed data
    • If you load `initialMessages` from a database that somehow has consecutive same-role messages
    • If there's a bug
    • Maybe some edge case with system messages (which the SDK says to avoid)
So for a simple chat bubble UI, you don't need the `lastRole` tracking. The simpler code I showed originally is fine:

```tsx
{messages.map((message) => (
  <div key={message.id} className={...}>
    <span>{message.role === 'user' ? 'You' : 'Assistant'}</span>
    <div className="bubble">
      {message.parts.map(...)}
    </div>
  </div>
))}
```

That example's comment was either overly defensive, copy-pasted from somewhere, or handling a case that the developer wasn't sure about. Good catch!

