Auth, streaming, validation, rate limiting, CORS, monitoring — that's weeks of infra work. Or one serve() call with Reminix.
A schema and a handler. Use whatever LLM or framework you want — Reminix never touches your agent's internals.
```ts
import OpenAI from "openai"
import { agent } from "@reminix/runtime"

const openai = new OpenAI()

export const supportBot = agent("support-bot", {
  type: "chat",
  description: "Customer support assistant",
  handler: async (input) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: input.messages,
    })
    return response.choices[0].message.content
  },
})
```

Reminix generates production endpoints from your agent definition.
- `/v1/agents/{name}/invoke`
- `/v1/agents/{name}/chat`
- `/v1/agents/{name}/workflow`
- `/v1/agents/{name}`

Invoke agents, stream responses, and run multi-turn conversations directly from the CLI. Pipe output into scripts, CI/CD pipelines, or other AI agents.
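The four routes above all follow one template. A small illustrative sketch of how they compose (the `agentRoutes` helper is hypothetical, not part of Reminix; only the path templates come from the list above):

```typescript
// Hypothetical helper: builds the REST paths Reminix exposes per agent.
// Only the path templates are from the docs; the helper is for illustration.
function agentRoutes(name: string) {
  const base = `/v1/agents/${name}`
  return {
    invoke: `${base}/invoke`,     // one-shot input/output
    chat: `${base}/chat`,         // multi-turn conversations
    workflow: `${base}/workflow`, // stateful, resumable runs
    detail: base,                 // agent metadata
  }
}

const routes = agentRoutes("support-bot")
console.log(routes.invoke) // "/v1/agents/support-bot/invoke"
```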
Not every agent is a chatbot. Reminix supports all three patterns out of the box.

- **Invoke**: send input, get output. Supports streaming. For data processing, extraction, and analysis.
- **Chat**: multi-turn with managed sessions. For support bots, assistants, and research.
- **Workflow**: stateful processes that pause and resume. For pipelines, approvals, and orchestration.
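To make the invoke pattern concrete: its handler is just plain input in, structured output back, with no session state. Everything below (the `extractEmails` name, the input type, the regex) is illustrative and deliberately LLM-free, not Reminix API:

```typescript
// A plain extraction handler of the kind the invoke pattern describes:
// pure input → output, no conversation state.
type ExtractInput = { text: string }

function extractEmails(input: ExtractInput): string[] {
  // Naive email pattern, for illustration only
  const matches = input.text.match(/[\w.+-]+@[\w-]+\.[\w.]+/g)
  return matches ?? []
}

extractEmails({ text: "Reach us at support@example.com or sales@example.com" })
// → ["support@example.com", "sales@example.com"]
```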
Google Calendar, Slack, GitHub, Notion — Reminix handles the full OAuth flow. Your code just gets a valid token.
```ts
import { google } from "googleapis"

// `client` is your Reminix API client

// Get a fresh token — auto-refreshed
const { access_token } = await client
  .oauthConnections.getToken("google")

// Use Google's own SDK directly
const calendar = google.calendar({
  version: "v3",
  auth: access_token,
})

const events = await calendar.events.list({
  calendarId: "primary",
  timeMin: new Date().toISOString(),
})
```

LangChain, Vercel AI SDK, OpenAI, Anthropic, Gemini, or custom code. Wrap it in a Reminix agent and deploy.
Ready-to-use agent templates for the most common use cases. Fork, deploy, and call the REST API from any app.