You've built the agent. Reminix gives it an API, auth, streaming, monitoring, and deployment — so you can focus on the logic that matters.
Define your agent with a name, a schema, and a handler. Use any LLM, any framework — the internals are entirely yours.
Every agent gets its own endpoint with auth, validation, and error handling.
Token-by-token streaming with backpressure and reconnection. Built in.
Requests validated against your input schema. Output schema enforced.
Request tracing, error tracking, latency metrics. No config needed.
import OpenAI from "openai"
import { agent } from "@reminix/runtime"

const openai = new OpenAI()

export const supportBot = agent("support-bot", {
  type: "chat",
  description: "Customer support assistant",
  handler: async (input) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: input.messages,
    })
    return response.choices[0].message.content
  },
})

Every agent is exposed as a set of REST endpoints. Auth, validation, streaming, error handling — all generated from your definition.
/v1/agents/{name}/invoke: One-shot invocation with JSON input/output
/v1/agents/{name}/chat: Send a message in a conversation
/v1/agents/{name}/workflow: Start or resume a workflow run
/v1/agents/{name}: Agent metadata and JSON schemas
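As a sketch, the invoke endpoint can be called with plain `fetch`. The base URL, the bearer-token header, and the `{ input }` body shape below are illustrative placeholders, not confirmed API details:

```typescript
// Placeholder base URL — substitute your deployment's host.
const baseUrl = "https://api.reminix.example"

function invokeUrl(agentName: string): string {
  // Build the one-shot invocation URL for an agent
  return `${baseUrl}/v1/agents/${agentName}/invoke`
}

async function invoke(agentName: string, input: unknown): Promise<unknown> {
  // Auth header and request body shape are assumptions for illustration
  const res = await fetch(invokeUrl(agentName), {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.REMINIX_API_KEY}`,
    },
    body: JSON.stringify({ input }),
  })
  if (!res.ok) throw new Error(`invoke failed: ${res.status}`)
  return res.json()
}
```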
Every agent supports invocation by default. Implement predefined types to unlock multi-turn conversations and multi-step workflows.
Send input, get output. The default for every agent. Supports streaming.
import { agent, serve } from "@reminix/runtime"

const taskAgent = agent("analyzer", {
  type: "task",
  handler: async (input) => {
    /* ... */
  },
})

serve({ agents: [taskAgent] })

Data processing, extraction, reports, analysis
Your agent remembers every message. You write the logic — Reminix manages the session history and message threading.
import { agent, serve } from "@reminix/runtime"

const conversationAgent = agent("support-bot", {
  type: "conversation",
  handler: async (input, ctx) => {
    /* ... */
  },
})

serve({ agents: [conversationAgent] })

Support bots, assistants, research agents
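To make the handler's second argument concrete, here is a hypothetical sketch of using managed session history. The `ctx` shape (`ctx.sessionId`, `ctx.messages`) is an assumption for illustration, not the documented Reminix API:

```typescript
type Message = { role: "user" | "assistant"; content: string }

// Assumed ctx shape — illustrative only, not the real SDK type
type Ctx = { sessionId: string; messages: Message[] }

async function handler(input: { text: string }, ctx: Ctx): Promise<string> {
  // Count prior user turns from the history Reminix would manage for you
  const userTurns = ctx.messages.filter((m) => m.role === "user").length
  return `Turn ${userTurns + 1}: you said "${input.text}"`
}
```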
Long-running processes that pause, resume, and branch. Built for pipelines and approval flows that span hours or days.
import { agent, serve } from "@reminix/runtime"

const workflowAgent = agent("lead-router", {
  type: "workflow",
  handler: async (input) => {
    /* ... */
  },
})

serve({ agents: [workflowAgent] })

Pipelines, approval flows, multi-agent orchestration
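The pause/resume model can be sketched, under assumed semantics, as a list of steps checkpointed by index: a run records the last completed step, so resuming re-enters at the next step instead of restarting. This is an illustrative model, not the Reminix workflow API:

```typescript
// Each step transforms the run's state
type Step = (state: Record<string, unknown>) => Record<string, unknown>

function resume(
  steps: Step[],
  checkpoint: number, // index of the first step still to run
  state: Record<string, unknown>,
): Record<string, unknown> {
  let s = state
  // Skip already-completed steps; only the remainder executes
  for (let i = checkpoint; i < steps.length; i++) {
    s = steps[i](s)
  }
  return s
}
```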
Google Calendar, Slack, GitHub — your agent gets a valid access token. Reminix handles the OAuth flow, token storage, and automatic refresh.
20+ services supported. Any OAuth 2.0 provider works.
// Get a fresh token — auto-refreshed
const { access_token } = await client.oauthConnections.getToken("google")

// Use Google's own SDK directly
const calendar = google.calendar({
  version: "v3",
  auth: access_token,
})

const events = await calendar.events.list({
  calendarId: "primary",
  timeMin: new Date().toISOString(),
})

Using LangChain, Vercel AI SDK, or calling OpenAI directly? Wrap it in a Reminix agent in a few lines. No rewrites needed.
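One way to picture the wrapping step: adapt whatever generate function you already have into a handler, then hand that to `agent(...)`. The `wrapAsHandler` helper below is an illustrative sketch, not part of the SDK; the stub generator stands in for `generateText`, a LangChain chain, or a raw OpenAI call:

```typescript
type Handler = (input: { prompt: string }) => Promise<string>

// Illustrative adapter: any (prompt) => Promise<string> becomes a handler
function wrapAsHandler(generate: (prompt: string) => Promise<string>): Handler {
  return async (input) => generate(input.prompt)
}

// Stub generator standing in for your existing LLM call
const handler = wrapAsHandler(async (p) => `echo: ${p}`)
```

From there, `handler` would be passed as the `handler` field of an `agent(...)` definition, leaving the internals untouched.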
Free while in early access. No credit card required.
Or read the docs