Simplest Next.js setup for message streaming?


give me the simplest nextjs setup. nothing extra. just send message, stream it back

Claude Opus 4.5
commented

Here's the simplest Next.js setup for sending a message and streaming it back:


## 1. API Route: app/api/chat/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, UIMessage, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```
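For reference, the body useChat POSTs to this route is JSON with a `messages` array of UI messages. A rough sketch of that shape and of the route's destructuring step (the `UIMessageLike` type here is a simplified stand-in for the SDK's `UIMessage`, not the real type):

```typescript
// Simplified stand-in for the SDK's UIMessage type (assumption:
// only text parts are modeled here; the real type has more fields).
type TextPart = { type: 'text'; text: string };
type UIMessageLike = { id: string; role: 'user' | 'assistant'; parts: TextPart[] };

// Example of what useChat sends in the request body.
const body: { messages: UIMessageLike[] } = {
  messages: [
    { id: '1', role: 'user', parts: [{ type: 'text', text: 'hello' }] },
  ],
};

// Mirrors what the route does: parse JSON, pull out `messages`.
function parseBody(json: string): UIMessageLike[] {
  const { messages } = JSON.parse(json) as { messages: UIMessageLike[] };
  return messages;
}
```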

## 2. Client Page: app/page.tsx

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

## 3. Install dependencies

```bash
npm install ai @ai-sdk/openai @ai-sdk/react
```

## 4. Set your OpenAI API key

Create .env.local:

```
OPENAI_API_KEY=sk-...
```

That's it! The useChat hook POSTs to /api/chat by default, handles the stream, and updates messages in real time.

Julian Benegas
commented

that's the old api chief.

Claude Opus 4.5
commented

You're right - the current API uses sendMessage({ text }) instead of handleSubmit. Here's the same setup stripped down to the new API:


## 1. API Route: app/api/chat/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, UIMessage, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```

## 2. Client Page: app/page.tsx

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const { messages, sendMessage, status } = useChat();
  const [input, setInput] = useState('');

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}:{' '}
          {m.parts.map((p, i) =>
            p.type === 'text' ? <span key={i}>{p.text}</span> : null
          )}
        </div>
      ))}

      <form onSubmit={e => {
        e.preventDefault();
        sendMessage({ text: input });
        setInput('');
      }}>
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          disabled={status !== 'ready'}
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
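The disabled={status !== 'ready'} check gates input while a response is in flight. A minimal sketch of that gating as a plain function (the status values are assumed from the useChat docs; canSend itself is a hypothetical helper, not part of the SDK):

```typescript
// Assumed status values for useChat; check the SDK docs for the
// authoritative union.
type ChatStatus = 'submitted' | 'streaming' | 'ready' | 'error';

// Hypothetical helper: only allow sending when the previous turn has
// finished and the input is non-empty.
function canSend(status: ChatStatus, input: string): boolean {
  return status === 'ready' && input.trim().length > 0;
}
```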

## 3. Install & env

```bash
npm install ai @ai-sdk/openai @ai-sdk/react
```

.env.local:

```
OPENAI_API_KEY=sk-...
```

That's the new API:

- sendMessage({ text }) instead of the old handleSubmit
- m.parts array instead of m.content string
- status instead of isLoading
- toUIMessageStreamResponse() on the server
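Since messages now carry a parts array rather than a content string, recovering the plain text of a message means filtering for text parts. A hypothetical helper, sketched here for illustration (not an SDK function):

```typescript
// Loose stand-in for a message part: non-text parts (tool calls, files,
// etc.) won't have a `text` field.
type MessagePart = { type: string; text?: string };

// Hypothetical helper: concatenate the text of all text parts,
// roughly recovering what `m.content` used to hold.
function messageText(parts: MessagePart[]): string {
  return parts
    .filter((p): p is { type: 'text'; text: string } => p.type === 'text')
    .map(p => p.text)
    .join('');
}
```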
