You Write the Agent.
We Handle the Rest.
Wrap your existing code in one line, deploy from the dashboard, call via SDK.
Your agent, live and streaming to your users in minutes.
How It Works
From code to production-ready infrastructure
Write Your Agent
Use any framework you want
Deploy
Push to Reminix from the dashboard
Call via SDK
Python & TypeScript with streaming
Scale
We handle all infrastructure for you
Wrap Your Agent
Use our adapters for LangChain, Vercel AI, OpenAI, and more.
# main.py
from langchain.agents import create_agent, tool
from reminix_langchain import wrap
from reminix_runtime import serve

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"72°F and sunny in {city}"

# llm is your LangChain chat model instance
agent = create_agent(llm, [get_weather])
serve([wrap(agent, "weather-agent")])

Call via SDK
Deploy from the dashboard, then call with Python or TypeScript.
# Call your deployed agent
from reminix import Reminix
client = Reminix()
response = client.agents.invoke(
    "weather-agent", input={"city": "San Francisco"})
# Response
{"output": "72°F and sunny in San Francisco"}