# Framework Adapters

Wrap popular AI frameworks for use with the Reminix runtime.
Reminix provides adapters that let you wrap agents from popular AI frameworks and serve them using the Reminix runtime. This allows you to:

- Reuse existing agents — No need to rewrite your LangChain or OpenAI-based agents
- Unified API — All agents expose the same `/invoke` and `/chat` endpoints
- Streaming support — Built-in streaming for all adapters
- Dashboard integration — Monitor all your agents in one place
## Installation

Adapters are included in the `reminix` package. Install it with the optional dependencies you need:
```bash
# Core runtime (required)
pip install reminix[runtime]

# Individual adapter dependencies
pip install reminix[openai]      # For OpenAI
pip install reminix[anthropic]   # For Anthropic
pip install reminix[langchain]   # For LangChain
pip install reminix[langgraph]   # For LangGraph
pip install reminix[llamaindex]  # For LlamaIndex

# Or install all adapters at once
pip install reminix[adapters]

# Or install everything
pip install reminix[all]
```

## Available Adapters
| Framework | Adapter Functions |
|---|---|
| LangChain | `from_langchain_agent`, `from_agent_executor`, `from_chat_model` |
| LangGraph | `from_compiled_graph` |
| LlamaIndex | `from_query_engine`, `from_chat_engine`, `from_agent` |
| OpenAI SDK | `from_openai` |
| Anthropic SDK | `from_anthropic` |
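Every adapter in the table follows the same wrap-then-serve pattern. As an illustration, here is a minimal sketch of wrapping a LangChain chat model; note that the module path `reminix.adapters.langchain` and the exact `from_chat_model` signature are assumptions inferred from the table above, so consult the API reference for the real details.

```python
def main() -> None:
    # Imports are deferred so this sketch can be read without the
    # optional dependencies installed.
    # ASSUMPTION: module path and call signature are inferred from the
    # adapter table, not confirmed by the Reminix docs.
    from langchain_openai import ChatOpenAI
    from reminix.adapters.langchain import from_chat_model
    from reminix.runtime import serve

    model = ChatOpenAI(model="gpt-4o-mini")          # any LangChain chat model
    agent = from_chat_model(model, name="lc-agent")  # wrap it as a Reminix agent
    serve(agent)                                     # expose /invoke and /chat


if __name__ == "__main__":
    main()
```

The shape mirrors the OpenAI quick example: construct the framework object, wrap it with the adapter function, and hand the result to `serve`.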
## Quick Example
```python
from openai import AsyncOpenAI
from reminix.adapters.openai import from_openai
from reminix.runtime import serve

client = AsyncOpenAI()
agent = from_openai(client, name="gpt4-agent", model="gpt-4o")

serve(agent)
```

That's it! Your OpenAI-powered agent now has:

- `POST /agent/gpt4-agent/invoke` — Single-turn requests
- `POST /agent/gpt4-agent/chat` — Multi-turn conversations
- Streaming support with `stream: true`
- Health checks and discovery endpoints
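Clients talk to these endpoints with plain JSON. The request bodies below are illustrative only: the field names (`input`, `messages`), the port, and the base URL are assumptions for the sketch, not the documented schema.

```python
import json

# ASSUMPTION: host/port and body field names are placeholders; check the
# Reminix runtime docs for the actual request schema.
BASE = "http://localhost:8000"

# Single-turn request to the /invoke endpoint.
invoke_url = f"{BASE}/agent/gpt4-agent/invoke"
invoke_body = {"input": "Summarize the latest run."}

# Multi-turn, streaming request to the /chat endpoint.
chat_url = f"{BASE}/agent/gpt4-agent/chat"
chat_body = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,  # opt in to streaming responses
}

print(json.dumps(chat_body, indent=2))
```

With bodies like these, any HTTP client (`curl`, `httpx`, `requests`) can drive the served agent; the `stream: true` flag switches the `/chat` response to a stream.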