Overview
Reminix supports serving multiple agents from a single project. This is useful when you have related agents that share dependencies or need to be deployed together.
When to Use Multiple Agents
Use multiple agents when:
- You have specialized agents for different tasks (e.g., summarizer, translator, classifier)
- Agents share common dependencies or configuration
- You want to deploy and scale related agents together
- You’re building a multi-agent system with orchestration
Use separate projects when:
- Agents have different scaling requirements
- Agents have conflicting dependencies
- Teams work independently on different agents
- Agents need separate deployment lifecycles
Basic Example
Serve multiple agents from one server:
from reminix_runtime import agent, serve, Message

# Agent 1: Summarizer
@agent
async def summarizer(text: str) -> str:
    """Summarize text."""
    return f"Summary: {text[:100]}..."

# Agent 2: Translator
@agent
async def translator(text: str, target: str = "es") -> str:
    """Translate text."""
    return f"Translated to {target}: {text}"

# Agent 3: Classifier
@agent
async def classifier(text: str) -> dict:
    """Classify text."""
    return {"category": "general", "confidence": 0.95}

# Agent 4: Chat assistant
@agent(template="chat")
async def assistant(messages: list[Message]) -> str:
    """A conversational assistant."""
    return f"You said: {messages[-1].content}" if messages else "Hello!"

# Serve all agents on one server
serve(agents=[summarizer, translator, classifier, assistant], port=8080)
With Framework Adapters
When using framework adapters like LangChain or OpenAI, use wrap_agent + serve instead of serve_agent:
from langchain_openai import ChatOpenAI
from langchain.agents import create_openai_functions_agent, AgentExecutor
from reminix_langchain import wrap_agent
from reminix_runtime import serve

# Create your LangChain agents
# (tools and prompts defined elsewhere in your project)
llm = ChatOpenAI(model="gpt-4o")
research_agent = create_openai_functions_agent(llm, research_tools, research_prompt)
research_executor = AgentExecutor(agent=research_agent, tools=research_tools)
writing_agent = create_openai_functions_agent(llm, writing_tools, writing_prompt)
writing_executor = AgentExecutor(agent=writing_agent, tools=writing_tools)

# Wrap each agent
research = wrap_agent(research_executor, name="research-agent")
writer = wrap_agent(writing_executor, name="writing-agent")

# Serve all agents together
serve(agents=[research, writer], port=8080)
Use serve_agent for single-agent projects and wrap_agent + serve for multi-agent projects.
Calling Specific Agents
When you have multiple agents, specify which agent to call by name:
from reminix import Reminix
client = Reminix()
# Call the summarizer agent
summary = client.agents.invoke(
    "summarizer",
    text="Long article content..."
)
print(summary["output"])

# Call the translator agent
translation = client.agents.invoke(
    "translator",
    text="Hello world",
    target="es"
)
print(translation["output"])

# Call the chat assistant
response = client.agents.invoke(
    "assistant",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response["content"])
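Under the hood, the server routes each invoke request to the agent registered under the requested name. The following is a plain-Python sketch of that idea, using only illustrative names (the Reminix runtime's actual routing is internal and none of these helpers are its API):

```python
import asyncio

# Illustrative only: a minimal name-to-agent registry sketching how a
# multi-agent server can dispatch an invoke call by agent name.
async def summarize(text: str) -> str:
    return f"Summary: {text[:100]}..."

async def translate(text: str, target: str = "es") -> str:
    return f"Translated to {target}: {text}"

REGISTRY = {"summarizer": summarize, "translator": translate}

async def invoke(agent_name: str, **kwargs) -> dict:
    try:
        fn = REGISTRY[agent_name]
    except KeyError:
        raise ValueError(f"Unknown agent: {agent_name!r}")
    return {"output": await fn(**kwargs)}

result = asyncio.run(invoke("translator", text="Hello world"))
print(result["output"])  # Translated to es: Hello world
```

An unknown name fails fast with an error, which is also why descriptive agent names matter when several agents share one endpoint.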
Agent Discovery
The /info endpoint returns all available agents:
curl http://localhost:8080/info
{
  "agents": [
    {
      "name": "summarizer",
      "type": "agent",
      "input": { ... },
      "output": { ... },
      "streaming": false
    },
    {
      "name": "translator",
      "type": "agent",
      "input": { ... },
      "output": { ... },
      "streaming": false
    },
    {
      "name": "assistant",
      "type": "agent",
      "template": "chat",
      "input": { ... },
      "output": { ... },
      "streaming": false
    }
  ]
}
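A client can use this payload to discover agents at runtime. Here is a small helper that extracts agent names from an /info response; it assumes only the "agents" array shape shown above (in practice you would fetch the payload over HTTP, e.g. with httpx or requests):

```python
import json

def list_agents(info: dict) -> list[str]:
    """Return the names of all agents advertised by an /info payload."""
    return [a["name"] for a in info.get("agents", [])]

# A trimmed-down sample of the /info response documented above.
payload = json.loads("""
{
  "agents": [
    {"name": "summarizer", "type": "agent", "streaming": false},
    {"name": "translator", "type": "agent", "streaming": false},
    {"name": "assistant", "type": "agent", "template": "chat", "streaming": false}
  ]
}
""")

print(list_agents(payload))  # ['summarizer', 'translator', 'assistant']
```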
Naming Conventions
Use clear, descriptive names for your agents:
# Good - descriptive and specific
@agent(name="document-summarizer")
async def summarizer(text: str) -> str: ...

@agent(name="text-translator")
async def translator(text: str, target: str) -> str: ...

# Avoid - too generic
@agent(name="agent1")
async def agent1(x: str) -> str: ...
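If you want to enforce a convention like this across a team, a project-level check is easy to write. The sketch below validates kebab-case, multi-word names; this is an illustrative lint of the suggestion above, not a rule the Reminix runtime enforces:

```python
import re

# At least two lowercase, hyphen-separated words, e.g. "document-summarizer".
KEBAB_CASE = re.compile(r"^[a-z][a-z0-9]*(-[a-z0-9]+)+$")

def is_descriptive_name(name: str) -> bool:
    """Check an agent name against the kebab-case convention."""
    return bool(KEBAB_CASE.match(name))

print(is_descriptive_name("document-summarizer"))  # True
print(is_descriptive_name("agent1"))               # False
```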
Shared Resources
Agents in the same project can share resources:
from openai import AsyncOpenAI
from reminix_runtime import agent, serve

# Shared client
openai_client = AsyncOpenAI()

# Shared configuration
MODEL = "gpt-4o"

@agent
async def summarizer(text: str) -> str:
    """Summarize text using GPT-4o."""
    response = await openai_client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"Summarize: {text}"}]
    )
    return response.choices[0].message.content

@agent
async def translator(text: str, target: str = "es") -> str:
    """Translate text using GPT-4o."""
    response = await openai_client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"Translate to {target}: {text}"}]
    )
    return response.choices[0].message.content

serve(agents=[summarizer, translator], port=8080)
Project Structure
Recommended structure for multi-agent projects:
# main.py
import os

from agents.summarizer import summarizer
from agents.translator import translator
from agents.classifier import classifier
from reminix_runtime import serve

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 8080))
    serve(agents=[summarizer, translator, classifier], port=port)
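A layout like the following keeps each agent in its own module (the directory and file names are a suggestion, not a requirement):

```
my-project/
├── main.py
├── requirements.txt
└── agents/
    ├── __init__.py
    ├── summarizer.py
    ├── translator.py
    └── classifier.py
```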
Mixing Frameworks
Mixing agents from different frameworks (e.g., LangChain + OpenAI adapter) is technically possible but not officially supported. For best results, use a single framework per project.
# Possible but unsupported - use at your own risk
from reminix_langchain import wrap_agent as wrap_langchain
from reminix_openai import wrap_agent as wrap_openai
from reminix_runtime import serve

langchain_agent = wrap_langchain(executor, name="langchain-agent")
openai_agent = wrap_openai(client, name="openai-agent")

# This works, but is not officially supported
serve(agents=[langchain_agent, openai_agent], port=8080)
Next Steps