Streaming

Stream responses using async generators

Stream responses to clients using TypeScript async generators. The example below registers streaming handlers with onInvokeStream and onChatStream; each yielded chunk is forwarded to the client as a Server-Sent Event.

The Agent

import { Agent, serve } from '@reminix/runtime';

const agent = new Agent('streamer');

// Streaming invoke handler
agent.onInvokeStream(async function* (input) {
  const text = (input.text as string) ?? 'Hello world';
  const words = text.split(' ');

  for (const word of words) {
    yield { chunk: word + ' ' };
    await sleep(100); // Simulate processing delay
  }
});

// Streaming chat handler
agent.onChatStream(async function* (messages) {
  const lastMessage = messages[messages.length - 1];
  const content = typeof lastMessage?.content === 'string' ? lastMessage.content : 'Hello';

  const response = `I received your message: '${content}'. Let me think...`;

  for (const char of response) {
    yield { chunk: char };
    await sleep(20); // Simulate typing
  }
});

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

serve(agent, { port: 8080 });
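Because streaming handlers are plain async generators, you can exercise the chunking logic directly, without starting a server. A minimal sketch (streamWords is an illustrative stand-in for the onInvokeStream handler above, not a runtime API):

```typescript
// Illustrative stand-in for the onInvokeStream handler above, written as
// a plain async generator so it can be iterated without a server.
async function* streamWords(text: string): AsyncGenerator<{ chunk: string }> {
  for (const word of text.split(' ')) {
    yield { chunk: word + ' ' };
  }
}

// Collect every yielded chunk by iterating the generator directly.
async function collect(text: string): Promise<string[]> {
  const chunks: string[] = [];
  for await (const { chunk } of streamWords(text)) {
    chunks.push(chunk);
  }
  return chunks;
}

collect('Hello world').then((chunks) => console.log(chunks));
// → [ 'Hello ', 'world ' ]
```

Iterating the generator in a `for await` loop is exactly what the runtime does internally, which makes handlers easy to unit test.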

How Streaming Works

When you register a streaming handler using onInvokeStream or onChatStream, the runtime:

  1. Routes stream: true requests to the streaming handler
  2. Converts chunks to Server-Sent Events (SSE)
  3. Sends data: {"chunk": "..."} for each yielded chunk
  4. Sends data: [DONE] when complete
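Conceptually, steps 2–4 amount to the following conversion. This is a sketch only: formatSSE and formatDone are hypothetical helpers illustrating the wire format, not real @reminix/runtime exports.

```typescript
// Hypothetical helpers (not @reminix/runtime exports) sketching how the
// runtime maps yielded chunks to Server-Sent Events.
function formatSSE(chunk: unknown): string {
  // Each yielded value becomes one JSON-encoded `data:` event,
  // terminated by a blank line as the SSE format requires.
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

function formatDone(): string {
  // The terminal sentinel is the literal string [DONE], not JSON.
  return 'data: [DONE]\n\n';
}

formatSSE({ chunk: 'Hello ' }); // 'data: {"chunk":"Hello "}\n\n'
```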

Handler Routing

Client Request   Handler Used
stream: false    onInvoke / onChat
stream: true     onInvokeStream / onChatStream

Running

npx tsx agent.ts

Testing

# Streaming invoke (-N disables curl's output buffering so chunks appear as they arrive)
curl -N -X POST http://localhost:8080/agent/streamer/invoke \
    -H "Content-Type: application/json" \
    -d '{"input": {"text": "Hello world"}, "stream": true}'

# Streaming chat
curl -N -X POST http://localhost:8080/agent/streamer/chat \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hi"}], "stream": true}'

SSE Response Format

data: {"chunk": "Hello "}

data: {"chunk": "world "}

data: [DONE]
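On the client, a response in this format can be reassembled by parsing the `data:` lines. A minimal sketch (collectChunks is a hypothetical helper, not an official client):

```typescript
// Hypothetical client-side helper: parse SSE `data:` lines and
// reassemble the streamed text, stopping at the [DONE] sentinel.
function collectChunks(sseLines: string[]): string {
  let text = '';
  for (const line of sseLines) {
    if (!line.startsWith('data: ')) continue; // skip blank separator lines
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break;          // end of stream
    const event = JSON.parse(payload) as { chunk: string };
    text += event.chunk;
  }
  return text;
}

collectChunks([
  'data: {"chunk": "Hello "}',
  'data: {"chunk": "world "}',
  'data: [DONE]',
]); // → 'Hello world '
```

In a real client you would feed this parser lines decoded incrementally from the HTTP response body (for example via fetch and a TextDecoder), rather than a pre-collected array.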
