Agents

Your agent deserves a production API.

You've built the agent. Reminix gives it an API, auth, streaming, monitoring, and deployment — so you can focus on the logic that matters.

Call from

Next.js · React Native · Slack bots · Internal tools · Any HTTP client

This is all it takes.

Define your agent with a name, a schema, and a handler. Use any LLM, any framework — the internals are entirely yours.

Production REST API

Every agent gets its own endpoint with auth, validation, and error handling.

SSE Streaming

Token-by-token streaming with backpressure and reconnection. Built in.

Schema validation

Requests validated against your input schema. Output schema enforced.

Monitoring

Request tracing, error tracking, latency metrics. No config needed.

agents/support-bot.ts
import OpenAI from "openai"
import { agent } from "@reminix/runtime"

const openai = new OpenAI()

export const supportBot = agent("support-bot", {
  type: "conversation",
  description: "Customer support assistant",
  handler: async (input) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: input.messages,
    })
    return response.choices[0].message.content ?? ""
  },
})

One deploy. Four endpoints.

Every agent is exposed as a set of REST endpoints. Auth, validation, streaming, error handling — all generated from your definition.

POST /v1/agents/{name}/invoke
One-shot invocation with JSON input/output

POST /v1/agents/{name}/chat
Send a message in a conversation

POST /v1/agents/{name}/workflow
Start or resume a workflow run

GET /v1/agents/{name}
Agent metadata and JSON schemas
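Since these are plain REST endpoints, no SDK is required on the caller's side. A minimal sketch of building an invoke call with `fetch`; the base URL, the Bearer auth scheme, and the `{ input }` body shape are illustrative assumptions, not documented values:

```typescript
// Build the request for a one-shot invoke call.
// BASE_URL, Bearer auth, and the { input } body shape are assumptions
// for illustration, not documented values.
const BASE_URL = "https://api.example.com"

function buildInvokeRequest(name: string, input: unknown, apiKey: string) {
  return {
    url: `${BASE_URL}/v1/agents/${name}/invoke`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ input }),
    },
  }
}

// Usage sketch:
// const { url, init } = buildInvokeRequest("support-bot", { messages }, key)
// const output = await fetch(url, init).then((r) => r.json())
```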

Simple tasks. Conversations. Workflows.

Every agent supports one-shot invocation by default. Opt into the conversation or workflow type to unlock multi-turn conversations and multi-step workflows.

Invoke

Send input, get output. The default for every agent. Supports streaming.

agent.ts
import { agent, serve } from "@reminix/runtime"

const taskAgent = agent("analyzer", {
  type: "task",
  handler: async (input) => {
    /* ... */
  },
})

serve({ agents: [taskAgent] })

Data processing, extraction, reports, analysis
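The Invoke card above notes streaming support. On the client, a streamed response can be read with a standard `ReadableStream` reader; the sketch below assumes SSE-style `data:` lines with an `[DONE]` sentinel, which is a common convention but not a documented Reminix format:

```typescript
// Minimal SSE chunk parser: extract the payload of each `data:` line.
// The exact event format the endpoint emits is an assumption here;
// per the SSE spec, at most one space after the colon is stripped.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice(5).replace(/^ /, ""))
    .filter((data) => data !== "[DONE]")
}

// Reading the stream (sketch):
// const res = await fetch(url, { ...init, headers: { ...headers, Accept: "text/event-stream" } })
// const reader = res.body!.getReader()
// const decoder = new TextDecoder()
// while (true) {
//   const { done, value } = await reader.read()
//   if (done) break
//   for (const token of parseSseChunk(decoder.decode(value))) process.stdout.write(token)
// }
```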

Conversations

Your agent remembers every message. You write the logic — Reminix manages the session history and message threading.

agent.ts
import { agent, serve } from "@reminix/runtime"

const conversationAgent = agent("support-bot", {
  type: "conversation",
  handler: async (input, ctx) => {
    /* ... */
  },
})

serve({ agents: [conversationAgent] })

Support bots, assistants, research agents
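From the client side, a multi-turn exchange is just repeated POSTs to the generated `/chat` endpoint. A minimal sketch; the body shape (`message` plus a `conversationId`) is an assumed convention, not a documented contract:

```typescript
// Build a chat request against the generated /chat endpoint.
// The body shape (message + conversationId) is an assumed convention:
// Reminix tracks session history server-side, so the client only needs
// to send the new message and a stable conversation identifier.
function chatRequest(agentName: string, message: string, conversationId: string) {
  return {
    url: `/v1/agents/${agentName}/chat`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message, conversationId }),
    },
  }
}

// Usage sketch:
// const { url, init } = chatRequest("support-bot", "Where is my order?", "conv-123")
// const reply = await fetch(url, init).then((r) => r.json())
```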

Workflow Runs

Long-running processes that pause, resume, and branch. Built for pipelines and approval flows that span hours or days.

agent.ts
import { agent, serve } from "@reminix/runtime"

const workflowAgent = agent("lead-router", {
  type: "workflow",
  handler: async (input) => {
    /* ... */
  },
})

serve({ agents: [workflowAgent] })

Pipelines, approval flows, multi-agent orchestration
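The endpoint table lists a single `/workflow` route that both starts and resumes a run. One way the two call shapes could look; passing a `runId` to resume is an assumption for illustration:

```typescript
// Start a new workflow run, or resume an existing one.
// Including runId to resume is an assumed convention; only the
// endpoint itself (start or resume via POST /workflow) comes from
// the API table above.
function workflowRequest(agentName: string, input: unknown, runId?: string) {
  return {
    url: `/v1/agents/${agentName}/workflow`,
    method: "POST" as const,
    body: JSON.stringify(runId ? { runId, input } : { input }),
  }
}

// workflowRequest("lead-router", { lead: "acme" })              starts a run
// workflowRequest("lead-router", { approved: true }, "run-42")  resumes run-42
```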

Your agents talk to LLMs. Now they can talk to everything else.

Google Calendar, Slack, GitHub — your agent gets a valid access token. Reminix handles the OAuth flow, token storage, and automatic refresh.

20+ services supported. Any OAuth 2.0 provider works.

using-connections.ts
// Get a fresh token — auto-refreshed
const { access_token } = await client.oauthConnections.getToken("google")

// Use Google's own SDK directly
const calendar = google.calendar({
  version: "v3",
  auth: access_token,
})

const events = await calendar.events.list({
  calendarId: "primary",
  timeMin: new Date().toISOString(),
})

Already have an agent? Ship it.

Using LangChain, Vercel AI SDK, or calling OpenAI directly? Wrap it in a Reminix agent in a few lines. No rewrites needed.

LangChain (Python)

Vercel AI SDK (TypeScript)

OpenAI (Python & TS)

Anthropic (Python & TS)

Gemini (Python & TS)

Custom Code (Python & TS)

Deploy your first agent in five minutes.

Free while in early access. No credit card required.

Or read the docs