Overview
reminix-langchain lets you wrap existing LangChain agents and deploy them to Reminix with minimal changes.
Installation
pip install reminix-langchain
This will also install reminix-runtime as a dependency.
Quick Start
Wrap your LangChain agent and serve it:
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from reminix_langchain import serve_agent
# Your existing LangChain setup
@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_functions_agent(llm, [search], prompt)
executor = AgentExecutor(agent=agent, tools=[search])
# Wrap and serve
if __name__ == "__main__":
    serve_agent(executor, name="search-agent", port=8080)
Wrapping Different Agent Types
OpenAI Functions Agent
from langchain.agents import create_openai_functions_agent, AgentExecutor
from reminix_langchain import wrap_agent
agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
reminix_agent = wrap_agent(executor, name="my-agent")
ReAct Agent
from langchain.agents import create_react_agent, AgentExecutor
from reminix_langchain import wrap_agent
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
reminix_agent = wrap_agent(executor, name="react-agent")
Structured Chat Agent
from langchain.agents import create_structured_chat_agent, AgentExecutor
from reminix_langchain import wrap_agent
agent = create_structured_chat_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
reminix_agent = wrap_agent(executor, name="chat-agent")
Configuration
reminix_agent = wrap_agent(
    executor,
    name="my-agent",
    description="What this agent does",
    # Map LangChain input/output
    input_key="input",    # LangChain input key
    output_key="output",  # LangChain output key
    # Streaming support
    streaming=True,
)
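Conceptually, the key mapping just selects the right entries of the executor's input and output dicts. A minimal sketch of that idea (build_input and extract_output are illustrative helpers, not part of reminix_langchain):

```python
def build_input(payload: str, input_key: str = "input") -> dict:
    """Wrap the raw request payload under the executor's input key."""
    return {input_key: payload}


def extract_output(result: dict, output_key: str = "output") -> str:
    """Pull the agent's reply out of the executor's result dict."""
    return result[output_key]


# An executor configured with output_key="answer" would return something like:
result = {"answer": "Paris is the capital of France."}
print(extract_output(result, output_key="answer"))
```

Setting input_key and output_key to match your executor's keys means you never have to rename fields in the agent itself.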
Multiple Agents
For multi-agent projects, use wrap_agent + serve instead of serve_agent:
from reminix_langchain import wrap_agent
from reminix_runtime import serve
research = wrap_agent(research_executor, name="research-agent")
writer = wrap_agent(writing_executor, name="writing-agent")
serve(agents=[research, writer], port=8080)
With Memory
LangChain agents with memory work too:
from langchain.memory import ConversationBufferMemory
from reminix_langchain import wrap_agent
memory = ConversationBufferMemory(return_messages=True)
executor = AgentExecutor(agent=agent, tools=tools, memory=memory)
reminix_agent = wrap_agent(executor, name="chat-agent")
Memory persists only for the lifetime of a single session. For memory that survives across requests or restarts, implement external storage (for example, a database keyed by conversation ID).
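One way to get cross-request memory is to keep a message history per conversation ID in an external store. The sketch below uses an in-process dict purely for illustration (SessionMemoryStore is a hypothetical helper; in production you would back it with a database or cache):

```python
from collections import defaultdict


class SessionMemoryStore:
    """Toy cross-request store: one message list per conversation ID."""

    def __init__(self):
        self._histories = defaultdict(list)

    def append(self, conversation_id: str, role: str, content: str) -> None:
        """Record one message under the given conversation."""
        self._histories[conversation_id].append({"role": role, "content": content})

    def history(self, conversation_id: str) -> list:
        """Return a copy of the conversation's messages so far."""
        return list(self._histories[conversation_id])


store = SessionMemoryStore()
store.append("conv-1", "user", "Hi!")
store.append("conv-1", "assistant", "Hello! How can I help?")
print(len(store.history("conv-1")))  # 2
```

On each request you would load the history for the incoming conversation ID, pass it to the agent, then append the new exchange before responding.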
Usage
Once deployed, call your agent using invoke. See Agents for detailed guidance.
For task-oriented operations:
from reminix import Reminix
client = Reminix()
response = client.agents.invoke(
    "search-agent",
    input="Search for Python tutorials",
)
print(response.output)
With Messages
For conversations with message history:
response = client.agents.invoke(
    "search-agent",
    messages=[
        {"role": "user", "content": "Search for Python tutorials"},
        {"role": "assistant", "content": "Here are some tutorials..."},
        {"role": "user", "content": "Which one is best for beginners?"},
    ],
)
print(response.output)
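If you manage the conversation client-side, each turn appends the previous exchange before the next call. A small illustrative helper (not part of the SDK):

```python
def next_turn(history: list, user_message: str) -> list:
    """Return the message list for the next invoke call,
    leaving the existing history untouched."""
    return history + [{"role": "user", "content": user_message}]


history = [
    {"role": "user", "content": "Search for Python tutorials"},
    {"role": "assistant", "content": "Here are some tutorials..."},
]
messages = next_turn(history, "Which one is best for beginners?")
print(len(messages))  # 3
```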
Streaming
For real-time streaming responses:
for chunk in client.agents.invoke(
    "search-agent",
    input="Tell me about Python",
    stream=True,
):
    print(chunk, end="", flush=True)
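Client-side, a streamed response is just an iterator of text chunks, so accumulating them reconstructs the full output. A sketch with a stand-in generator (fake_stream is illustrative; the real iterator comes from invoke with stream=True):

```python
def fake_stream():
    """Stand-in for the chunk iterator returned when stream=True."""
    yield from ["Python ", "is a ", "general-purpose ", "language."]


full = ""
for chunk in fake_stream():
    full += chunk  # same accumulation you'd do alongside printing each chunk
print(full)
```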
Deployment
See the Deployment guide for deploying to Reminix, or Self-Hosting for deploying on your own infrastructure.
Next Steps