Agents
Agents are the core building block in Reminix. Create an agent, wire up tools, and call it via the `/invoke` endpoint.
Two Paths to Production
Reminix gives you flexibility without complexity. Choose the path that fits your needs.

Configure via UI
For quick results. Define agents through forms (Dashboard or CLI). No code required.
- Configure system prompts, model, and settings
- Wire up platform tools (web, memory, knowledge, storage)
- Instant deployment
- Type: `managed`
Deploy Custom Code
For full control. Write Python or TypeScript. Discovered automatically on deploy.
- Full control over implementation
- Use any framework (LangChain, Vercel AI, etc.)
- Build custom tools for domain-specific logic
- Type: `python`, `typescript`, or adapter-specific
The rest of this page focuses on custom agents — agents you define in code. For managed agents configured via UI, see the Dashboard documentation.
Custom Agent Example
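A minimal sketch of a Python custom agent. The exact Reminix import path and decorator signature are assumptions; to keep the snippet self-contained, a stub `agent` decorator with the same shape stands in for the runtime's.

```python
# Stub standing in for the Reminix runtime's decorator (assumed shape).
def agent(fn=None, *, template="prompt"):
    def wrap(f):
        f.template = template  # the runtime would register the handler here
        return f
    return wrap(fn) if fn is not None else wrap

@agent  # default `prompt` template: { prompt: string } in, string out
def summarizer(input):
    prompt = input["prompt"]
    # A real handler would call a model; this one just echoes.
    return f"Summary of: {prompt}"

print(summarizer({"prompt": "quarterly sales report"}))
```

On deploy, the runtime discovers the decorated function and exposes it at its own `/invoke` endpoint.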
Calling Agents
The API uses `input` and `output` only. Request body: `{ input: { ... }, stream?: boolean }`. Response: `{ output: ..., execution?: { id, url, type, status?, duration_ms? } }`.
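As a sketch, the request and response bodies look like this (the endpoint URL and all values are illustrative, not real identifiers):

```python
import json

# Hypothetical endpoint URL — the real path depends on your deployment.
url = "https://your-deployment.example/agents/summarizer/invoke"

# Request body: only `input` (plus optional `stream`) is sent.
request_body = {"input": {"prompt": "Summarize the Q3 report"}, "stream": False}

# Example response shape (values are illustrative):
response_body = json.loads("""
{
  "output": "Q3 revenue grew 12% quarter over quarter.",
  "execution": {"id": "exec-abc", "url": "https://example.invalid/exec-abc",
                "type": "python", "status": "completed", "duration_ms": 842}
}
""")
print(response_body["output"])
```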
Agent templates
Use a template to get standard input/output shapes without defining schemas yourself. Pass `template` when creating an agent:
| Template | Input | Output | Use case |
|---|---|---|---|
| `prompt` (default) | `{ prompt: string }` | `string` | Single prompt in, text out |
| `chat` | `{ messages: Message[] }` | `string` | Multi-turn chat; final reply as string |
| `task` | `{ task: string, ... }` | JSON | Task name + params, structured result |
| `rag` | `{ query: string, messages?: Message[], collectionIds?: string[] }` | `string` | RAG query with optional history/collections |
| `thread` | `{ messages: Message[] }` | `Message[]` | Multi-turn with tool calls; returns updated thread |
Each `Message` has a `role` (e.g. `system`, `developer`, `user`, `assistant`, `tool`), `content` (string or array of parts), and optional `tool_calls`, `tool_call_id`, and `name`. Use the runtime's `Message` and `ToolCall` types for type-safe handlers.
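For illustration, here is what a `thread`-template exchange might look like. The `role`, `content`, `tool_calls`, `tool_call_id`, and `name` fields follow the `Message` shape above; the inner fields of a tool call (`id`, `name`, `arguments`) are an assumption here, not confirmed API.

```python
# A tool-calling thread: the agent returns the updated message list,
# so the caller keeps the full history including tool results.
thread = [
    {"role": "system", "content": "You are a weather assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": "",
     "tool_calls": [{"id": "call_1", "name": "get_weather",
                     "arguments": {"city": "Paris"}}]},  # assumed ToolCall fields
    {"role": "tool", "tool_call_id": "call_1", "name": "get_weather",
     "content": "18°C, sunny"},
    {"role": "assistant", "content": "It's 18°C and sunny in Paris."},
]
print([m["role"] for m in thread])
```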
Chat agents
Chat agents use the `chat` template: they expect `messages` and return a single string (the assistant's reply).
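A minimal sketch of a chat agent, again with a stub decorator standing in for the Reminix runtime's (the real import path is not shown in this doc):

```python
# Stub with the assumed @agent(template=...) shape.
def agent(*, template="prompt"):
    def wrap(f):
        f.template = template
        return f
    return wrap

@agent(template="chat")
def support_bot(input):
    # `chat` template: input is { messages: Message[] }, output is a string.
    last_user = next(m for m in reversed(input["messages"]) if m["role"] == "user")
    return f"Thanks for your message: {last_user['content']}"

print(support_bot({"messages": [{"role": "user", "content": "My order is late"}]}))
```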
Calling chat agents
The request body uses `input` with `messages`. The response is `{ output: string }`.
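A sketch of a multi-turn request body (contents are illustrative):

```python
# The full conversation history goes in `input.messages`.
request_body = {
    "input": {
        "messages": [
            {"role": "user", "content": "Hi!"},
            {"role": "assistant", "content": "Hello! How can I help?"},
            {"role": "user", "content": "What are your hours?"},
        ]
    }
}
# Response shape: {"output": "<assistant reply as a plain string>"}
print(len(request_body["input"]["messages"]))
```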
Streaming
Both agents and chat agents support streaming: write the handler as an async generator and set `stream: true` in the request.
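A sketch of a streaming handler, using the same stubbed decorator as above (the real one comes from the Reminix runtime):

```python
import asyncio

# Stub standing in for the runtime's decorator (assumed shape).
def agent(fn=None, *, template="prompt"):
    def wrap(f):
        f.template = template
        return f
    return wrap(fn) if fn is not None else wrap

@agent
async def streamer(input):
    # With `stream: true`, each yielded chunk is sent to the client as it
    # is produced; the concatenation of all chunks is the final output.
    for word in input["prompt"].split():
        yield word + " "

async def main():
    chunks = [c async for c in streamer({"prompt": "streaming is simple"})]
    return "".join(chunks)

print(asyncio.run(main()))
```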
Advanced: Custom Input & Output Schemas
For more control, define custom input and output schemas.

API Request & Response
The API uses `input` and `output` only. Request body: `{ input: { ... }, stream?: boolean }`. Response: `{ output: ... }`. Discovery (`/info`) exposes each agent's input schema and optional metadata.
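As a sketch, a discovery response for a custom-schema agent might look like the following. The field names beyond the agent `name`, `type`, and input schema (`input_schema`, `metadata`) are assumptions, not confirmed API:

```python
# Hypothetical /info discovery payload for one agent.
info = {
    "agents": [
        {
            "name": "extract_invoice",
            "type": "python",
            "input_schema": {  # assumed field name
                "type": "object",
                "properties": {
                    "document_url": {"type": "string"},
                    "currency": {"type": "string"},
                },
                "required": ["document_url"],
            },
            "metadata": {"description": "Extracts line items from invoices"},
        }
    ]
}
print(info["agents"][0]["type"])
```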
Agent Type Reference
The agent `type` field indicates how the agent was created:
| Type | Description |
|---|---|
| `managed` | Created via the dashboard/CLI |
| `python` | Native Python agent (decorator-based) |
| `typescript` | Native TypeScript agent (factory-based) |
| `python-langchain` | Python agent using LangChain adapter |
| `python-openai` | Python agent using OpenAI adapter |
| `typescript-vercel-ai` | TypeScript agent using Vercel AI adapter |
| `typescript-langchain` | TypeScript agent using LangChain adapter |
Quick Reference
| | Prompt / task | Chat / thread |
|---|---|---|
| Factory | `agent()` / `@agent` | `agent(..., { template: 'chat' })` / `@agent(template="chat")` |
| Input | `{ prompt }` or custom (default template: `prompt`) | `{ messages }` |
| Output | `{ output }` (string or JSON) | Chat: `{ output }` (string). Thread: `{ output }` (`Message[]`) |
| Use case | Task-oriented, single prompt | Conversations, tool-calling threads |
| Streaming | Yes | Yes |