# Streaming

Stream responses to clients using TypeScript async generators.
## The Agent
```typescript
import { Agent, serve } from '@reminix/runtime';

const agent = new Agent('streamer');

// Streaming invoke handler
agent.onInvokeStream(async function* (input) {
  const text = (input.text as string) ?? 'Hello world';
  const words = text.split(' ');
  for (const word of words) {
    yield { chunk: word + ' ' };
    await sleep(100); // Simulate processing delay
  }
});

// Streaming chat handler
agent.onChatStream(async function* (messages) {
  const lastMessage = messages[messages.length - 1];
  const content = typeof lastMessage?.content === 'string' ? lastMessage.content : 'Hello';
  const response = `I received your message: '${content}'. Let me think...`;
  for (const char of response) {
    yield { chunk: char };
    await sleep(20); // Simulate typing
  }
});

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

serve(agent, { port: 8080 });
```

## How Streaming Works
When you register a streaming handler with `onInvokeStream` or `onChatStream`, the runtime:

- Routes `stream: true` requests to the streaming handler
- Converts yielded chunks to Server-Sent Events (SSE)
- Sends `data: {"chunk": "..."}` for each yielded chunk
- Sends `data: [DONE]` when the generator completes
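On the client side, the same async-generator pattern works for consuming the stream. Below is a minimal sketch, assuming the `data:` / `[DONE]` wire format described above; the `readChunks` helper is illustrative and not part of `@reminix/runtime`:

```typescript
// Illustrative client-side consumer for the SSE format described above.
// Works with any ReadableStream<Uint8Array>, e.g. `(await fetch(url)).body`.
async function* readChunks(body: ReadableStream<Uint8Array>): AsyncGenerator<string> {
  const decoder = new TextDecoder();
  const reader = body.getReader();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let newline: number;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (!line.startsWith('data: ')) continue; // skip blank lines between events
      const payload = line.slice('data: '.length);
      if (payload === '[DONE]') return; // end-of-stream sentinel
      yield (JSON.parse(payload) as { chunk: string }).chunk;
    }
  }
}
```

For example, after a `fetch` whose request body sets `stream: true`, you could write `for await (const chunk of readChunks(res.body!)) process.stdout.write(chunk);`.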
## Handler Routing

| Client Request | Handler Used |
|---|---|
| `stream: false` | `onInvoke` / `onChat` |
| `stream: true` | `onInvokeStream` / `onChatStream` |
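The table above amounts to a simple dispatch rule. As an illustrative sketch (a pure function mirroring the routing decision, not actual runtime code):

```typescript
// Illustrative only: the routing rule from the table as a pure function.
// The runtime inspects the endpoint kind and the request's `stream` flag.
type Endpoint = 'invoke' | 'chat';

function selectHandler(endpoint: Endpoint, stream: boolean): string {
  if (endpoint === 'invoke') return stream ? 'onInvokeStream' : 'onInvoke';
  return stream ? 'onChatStream' : 'onChat';
}
```

For instance, `selectHandler('chat', true)` returns `'onChatStream'`, matching the second row of the table.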
## Running

```bash
npx tsx agent.ts
```

## Testing
```bash
# Streaming invoke
curl -X POST http://localhost:8080/agent/streamer/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"text": "Hello world"}, "stream": true}'

# Streaming chat
curl -X POST http://localhost:8080/agent/streamer/chat \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hi"}], "stream": true}'
```

## SSE Response Format
```
data: {"chunk": "Hello "}
data: {"chunk": "world "}
data: [DONE]
```

## Next Steps
- Basic Agent - Simple non-streaming agent
- Multi-Agent - Run multiple agents