Overview

reminix-openai provides a simple wrapper for the OpenAI API. This is a model-only adapter for basic chat completions without tool calling.
For agents with tool calling, use LangChain or LangGraph instead.

Installation

pip install reminix-openai
This will also install reminix-runtime as a dependency.

Quick Start

Wrap an OpenAI client and serve it:
from openai import AsyncOpenAI
from reminix_openai import serve_agent

client = AsyncOpenAI()

if __name__ == "__main__":
    serve_agent(client, name="chat-agent", model="gpt-4o", port=8080)

Configuration

wrap_agent takes the OpenAI client plus the agent name and the model to use for completions:
from reminix_openai import wrap_agent

agent = wrap_agent(
    client,
    name="my-agent",
    model="gpt-4o"  # Model to use for completions
)

Multiple Agents

For multi-agent projects, use wrap_agent + serve instead of serve_agent:
from reminix_openai import wrap_agent
from reminix_runtime import serve

gpt4 = wrap_agent(client, name="gpt4-agent", model="gpt-4o")
gpt35 = wrap_agent(client, name="gpt35-agent", model="gpt-3.5-turbo")

serve(agents=[gpt4, gpt35], port=8080)
See Multiple Agents for detailed guidance on multi-agent projects.
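When several agents share one server, callers select an agent by the name it was registered under. A minimal sketch of routing by prompt length, where pick_agent is a hypothetical helper (the length threshold is arbitrary) and the returned names match those passed to wrap_agent above:

```python
def pick_agent(prompt: str) -> str:
    # Cheap heuristic: send short prompts to the smaller model.
    # The threshold of 80 characters is an arbitrary example value.
    return "gpt35-agent" if len(prompt) < 80 else "gpt4-agent"

pick_agent("Hi")  # short prompt, routed to the smaller model
pick_agent("Summarize this long report about quarterly results " * 4)
```

The returned name would then be passed as the first argument to client.agents.invoke.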

Usage

With Prompt

Single request/response with a prompt:
from reminix import Reminix

client = Reminix()

response = client.agents.invoke(
    "chat-agent",
    prompt="Explain quantum computing in one sentence"
)

print(response.output)

With Messages

For conversations with message history:
response = client.agents.invoke(
    "chat-agent",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Python?"},
        {"role": "assistant", "content": "Python is a programming language."},
        {"role": "user", "content": "What are its main uses?"}
    ]
)

print(response.output)
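Because invoke takes the full message list on every call, the caller is responsible for carrying history forward between turns. A minimal sketch of that bookkeeping, using a stand-in fake_invoke in place of a real client.agents.invoke call:

```python
def fake_invoke(agent_name, messages):
    # Stand-in for client.agents.invoke: echoes the last user message.
    # The real call returns a response object with an `.output` attribute.
    return f"You said: {messages[-1]['content']}"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text):
    # Append the user turn, call the agent, then record its reply
    # so the next call sees the whole conversation.
    history.append({"role": "user", "content": user_text})
    reply = fake_invoke("chat-agent", history)
    history.append({"role": "assistant", "content": reply})
    return reply

send("What is Python?")
send("What are its main uses?")
```

After these two calls, history alternates user/assistant turns behind the system prompt, ready to pass as messages= on the next request.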

Streaming

For real-time streaming responses:
for chunk in client.agents.invoke(
    "chat-agent",
    prompt="Tell me a story",
    stream=True
):
    print(chunk, end="", flush=True)
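If you also need the complete text once streaming finishes, collect the chunks as they arrive. A sketch using a stand-in fake_stream in place of the real chunk iterator returned by client.agents.invoke(..., stream=True):

```python
def fake_stream():
    # Stand-in for the streaming iterator; yields text chunks in order.
    yield from ["Once ", "upon ", "a ", "time."]

parts = []
for chunk in fake_stream():
    print(chunk, end="", flush=True)  # live display, as above
    parts.append(chunk)               # keep each chunk for later use

full_text = "".join(parts)
```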

When to Use This Adapter

Use reminix-openai when:
  • You need simple chat completions
  • No tool calling required
  • Direct model access is sufficient
Use LangChain/LangGraph instead when:
  • You need tool calling / function calling
  • You need agents with memory
  • You need complex workflows

Deployment

See the Deployment guide for deploying to Reminix, or Self-Hosting for deploying on your own infrastructure. Set your OpenAI API key in Project Settings → Secrets.
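For local development outside Reminix-managed secrets, the OpenAI SDK reads the key from the OPENAI_API_KEY environment variable, so it never needs to appear in source code (placeholder value shown):

```shell
# Placeholder value; substitute your real key and never commit it.
export OPENAI_API_KEY="sk-your-key-here"
```

With the variable set, AsyncOpenAI() picks the key up automatically.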

Next Steps