Task-Oriented Agent
Use the @agent decorator for task-oriented agents:
```python
from reminix_runtime import agent, serve

@agent
async def calculator(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

serve(agents=[calculator], port=8080)
```
The decorator automatically extracts:
- `name` from the function name (or use `name=` to override)
- `description` from the docstring (first paragraph, or use `description=` to override)
- input schema from type hints and defaults
- input field descriptions from the docstring `Args:` section
- output from the return type hint
Invoke the Agent
The API uses `input`/`output` only. Request: `{ "input": { ... } }`. Response: `{ "output": ... }`.
```bash
curl -X POST http://localhost:8080/agents/calculator/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"a": 5, "b": 3}}'
# Response: {"output": 8.0}
```
Use docstrings to provide input field descriptions:
```python
@agent
async def translator(text: str, target: str = "es") -> str:
    """Translate text to another language.

    Args:
        text: The text to translate
        target: Target language code (e.g., "es", "fr", "de")
    """
    # ... translation logic
    return f"Translated: {text}"
```
The input field descriptions will appear in the agent’s schema at /info.
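For example, the translator agent above might surface a schema like this. This is an illustrative excerpt under the assumption that the docstring descriptions land as standard JSON Schema `description` fields; see the full `/info` example later on this page for the surrounding structure:

```json
{
  "name": "translator",
  "input": {
    "type": "object",
    "properties": {
      "text": { "type": "string", "description": "The text to translate" },
      "target": { "type": "string", "description": "Target language code (e.g., \"es\", \"fr\", \"de\")" }
    },
    "required": ["text"]
  }
}
```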
Use type hints and defaults for structured input (schema is inferred):
@agent(name="text-processor", description="Process text in various ways")
async def process_text(text: str, operation: str = "uppercase") -> str:
"""Process text with the given operation."""
if operation == "uppercase":
return text.upper()
return text.lower()
Agent templates
Use a template for standard input/output shapes: `prompt` (default), `chat`, `task`, `rag`, or `thread`. Messages are OpenAI-style (`role`, `content`, and optionally `tool_calls`, `tool_call_id`, `name`). Use the `Message` and `ToolCall` types from `reminix_runtime` for type-safe handlers; `Message.tool_calls` is `list[ToolCall] | None` (see the sketch after the table).
| Template | Input | Output |
|---|---|---|
| `prompt` | `{ prompt: str }` | `str` |
| `chat` | `{ messages: list[Message] }` | `str` |
| `task` | `{ task: str, ... }` | JSON |
| `rag` | `{ query: str, messages?: list[Message], collectionIds?: list[str] }` | `str` |
| `thread` | `{ messages: list[Message] }` | `list[Message]` |
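A minimal sketch of a type-safe handler that inspects `tool_calls`. It assumes `ToolCall` mirrors the OpenAI shape (a `function` object carrying a `name`); check the `reminix_runtime` types for the exact fields:

```python
from reminix_runtime import agent, Message, ToolCall

@agent(template="chat")
async def tool_reporter(messages: list[Message]) -> str:
    """Report the tool calls on the most recent assistant message."""
    for m in reversed(messages):
        if m.role == "assistant" and m.tool_calls:
            calls: list[ToolCall] = m.tool_calls  # typed list[ToolCall] | None
            # Assumes OpenAI-style ToolCall with a function.name field
            names = ", ".join(tc.function.name for tc in calls)
            return f"Last assistant turn called: {names}"
    return "No tool calls in this conversation."
```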
Chat agent
Use the chat template for conversational agents. The handler receives messages and returns a string (the assistant’s reply).
```python
from reminix_runtime import agent, serve, Message

@agent(template="chat")
async def assistant(messages: list[Message]) -> str:
    """A helpful assistant."""
    last_msg = messages[-1].content if messages else ""
    return f"You said: {last_msg}"

serve(agents=[assistant], port=8080)
```
Invoke the chat agent
Request body: `{ "input": { "messages": [...] } }`. Response: `{ "output": "string" }`.
```bash
curl -X POST http://localhost:8080/agents/assistant/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"messages": [{"role": "user", "content": "Hello!"}]}}'
# Response: {"output": "You said: Hello!"}
```
With context
The `@agent` decorator passes only input keys to your function. To access request context (e.g. `context` from the request body), use the `Agent` class and register a handler that receives the full `AgentInvokeRequest`, as sketched below.
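A minimal sketch under the assumption that `AgentInvokeRequest` exposes the parsed body as `input` and `context` attributes (the attribute names here are assumptions; the `Agent` class itself is covered in the Advanced section below):

```python
from reminix_runtime import Agent, AgentInvokeRequest, serve

ctx_agent = Agent("context-aware")

@ctx_agent.handler
async def handle(request: AgentInvokeRequest) -> dict:
    # Attribute names assumed: request.input for the input payload,
    # request.context for request-level context (may be None).
    user_id = (request.context or {}).get("user_id", "anonymous")
    prompt = request.input.get("prompt", "")
    return {"content": f"[{user_id}] {prompt}"}

serve(agents=[ctx_agent], port=8080)
```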
Streaming
Both styles of agent (task and chat) support streaming via async generators:
```python
from reminix_runtime import agent, serve, Message

# Streaming task agent
@agent
async def streamer(text: str):
    """Stream text word by word."""
    for word in text.split():
        yield word + " "

# Streaming chat agent
@agent(template="chat")
async def streaming_assistant(messages: list[Message]):
    """Stream responses token by token."""
    response = f"You said: {messages[-1].content}" if messages else ""
    for char in response:
        yield char

serve(agents=[streamer, streaming_assistant], port=8080)
```
For streaming agents:
- `stream: true` in the request → chunks are sent via SSE
- `stream: false` in the request → chunks are collected and returned as a single response
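For example, invoking the streamer agent above with streaming enabled (this assumes the `stream` flag sits at the top level of the request body, alongside `input`):

```bash
curl -X POST http://localhost:8080/agents/streamer/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"text": "hello streaming world"}, "stream": true}'
# Chunks arrive as server-sent events
```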
View agent metadata via the `/info` endpoint:

```bash
curl http://localhost:8080/info
```

```json
{
  "agents": [
    {
      "name": "calculator",
      "type": "agent",
      "description": "Add two numbers.",
      "input": {
        "type": "object",
        "properties": { "a": { "type": "number" }, "b": { "type": "number" } },
        "required": ["a", "b"]
      },
      "output": {
        "type": "object",
        "properties": { "content": { "type": "number" } },
        "required": ["content"]
      },
      "streaming": false
    },
    {
      "name": "assistant",
      "type": "agent",
      "template": "chat",
      "description": "A helpful assistant.",
      "input": { ... },
      "output": { "type": "string" },
      "streaming": false
    }
  ]
}
```
Integrating with AI Models
Use any AI SDK inside your handlers:
With OpenAI
```python
from openai import AsyncOpenAI
from reminix_runtime import agent, serve, Message

client = AsyncOpenAI()

@agent(template="chat")
async def openai_agent(messages: list[Message]) -> str:
    """Chat with GPT-4o."""
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": m.role, "content": m.content} for m in messages],
    )
    content = response.choices[0].message.content
    return content or ""

serve(agents=[openai_agent], port=8080)
```
With Anthropic
```python
from anthropic import AsyncAnthropic
from reminix_runtime import agent, serve, Message

client = AsyncAnthropic()

@agent(template="chat")
async def claude_agent(messages: list[Message]) -> str:
    """Chat with Claude."""
    # Extract the system message if present; Anthropic takes it separately
    system = None
    chat_messages = []
    for m in messages:
        if m.role == "system":
            system = m.content
        else:
            chat_messages.append({"role": m.role, "content": m.content})
    response = await client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        system=system or "You are a helpful assistant.",
        messages=chat_messages,
    )
    return response.content[0].text

serve(agents=[claude_agent], port=8080)
```
For simpler integration with AI frameworks, use our pre-built adapters like `reminix-openai`, `reminix-anthropic`, or `reminix-langchain`.
Multiple Agents
Serve multiple agents from one server:
```python
from reminix_runtime import agent, serve, Message

@agent
async def summarizer(text: str) -> str:
    """Summarize text."""
    return text[:100] + "..."

@agent
async def translator(text: str, target: str = "es") -> str:
    """Translate text."""
    return f"Translated to {target}: {text}"

@agent(template="chat")
async def assistant(messages: list[Message]) -> str:
    """A helpful assistant."""
    return f"You said: {messages[-1].content}" if messages else "Hello!"

# Serve all agents
serve(agents=[summarizer, translator, assistant], port=8080)
```
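Each agent is routed by name, so the same invoke pattern works for all of them. For example:

```bash
curl -X POST http://localhost:8080/agents/translator/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"text": "Hello", "target": "fr"}}'
# Response: {"output": "Translated to fr: Hello"}
```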
Advanced: Agent Class
For more control, use the Agent class directly:
```python
from reminix_runtime import Agent, ExecuteRequest, serve

agent = Agent("my-agent", metadata={"version": "1.0"})

@agent.handler
async def handle_execute(request: ExecuteRequest) -> dict:
    prompt = request.input.get("prompt", "")
    return {"content": f"Processed: {prompt}"}

# Optional: streaming handler
@agent.stream_handler
async def handle_stream(request: ExecuteRequest):
    prompt = request.input.get("prompt", "")
    for word in prompt.split():
        yield word + " "

serve(agents=[agent], port=8080)
```
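Invoking it follows the same pattern as decorator-based agents. The exact wrapped shape of a dict-returning handler's response is an assumption here:

```bash
curl -X POST http://localhost:8080/agents/my-agent/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "hello"}}'
# Response (assumed shape): {"output": {"content": "Processed: hello"}}
```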
Serverless Deployment
Use to_asgi() for serverless deployments:
```python
# AWS Lambda with Mangum
from mangum import Mangum
from reminix_runtime import agent

@agent
async def my_agent(prompt: str) -> str:
    return f"Completed: {prompt}"

handler = Mangum(my_agent.to_asgi())
```
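Since `to_asgi()` returns a standard ASGI app, any ASGI server can run it too. A quick local check with uvicorn, assuming uvicorn is installed (`serve()` remains the usual way to run agents):

```python
import uvicorn

# Run the same ASGI app locally instead of behind Lambda
uvicorn.run(my_agent.to_asgi(), host="0.0.0.0", port=8080)
```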
Next Steps