# OpenAI SDK

Wrap OpenAI clients for use with the Reminix runtime.

The OpenAI adapter wraps an `AsyncOpenAI` client for use with the Reminix runtime.
## Installation

```
pip install reminix[runtime,openai]
```

## Usage

```python
from openai import AsyncOpenAI
from reminix.adapters.openai import from_openai
from reminix.runtime import serve

client = AsyncOpenAI()

agent = from_openai(
    client,
    name="gpt4-agent",
    model="gpt-4o",
    system="You are a helpful assistant.",
)

serve(agent)
```

## Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| `client` | `AsyncOpenAI` | required | The OpenAI client instance |
| `name` | `str` | required | Name for the Reminix agent |
| `model` | `str` | `"gpt-4o"` | The model to use |
| `system` | `str \| None` | `None` | Optional system prompt |
| `metadata` | `dict` | `None` | Optional metadata for the agent |
## Handler Mapping
| Reminix Handler | OpenAI Method |
|---|---|
| `@agent.invoke` | `client.chat.completions.create(stream=False)` |
| `@agent.invoke_stream` | `client.chat.completions.create(stream=True)` |
| `@agent.chat` | `client.chat.completions.create(stream=False)` |
| `@agent.chat_stream` | `client.chat.completions.create(stream=True)` |
## Example: Multi-turn Chat
```python
from openai import AsyncOpenAI
from reminix.adapters.openai import from_openai
from reminix.runtime import serve

client = AsyncOpenAI()

agent = from_openai(
    client,
    name="conversational-agent",
    model="gpt-4o",
    system="You are a friendly conversational AI. Remember context from previous messages.",
)

if __name__ == "__main__":
    serve(agent, port=8080)
```

Test with:

```shell
curl -X POST http://localhost:8080/agent/conversational-agent/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "My name is Alice."},
      {"role": "assistant", "content": "Nice to meet you, Alice!"},
      {"role": "user", "content": "What is my name?"}
    ]
  }'
```
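The same request can be sent from Python using only the standard library. This sketch assumes the `serve(agent, port=8080)` process from the example above is running locally; the final call is commented out so the snippet stands on its own:

```python
import json
import urllib.request

# Same multi-turn payload as the curl example: prior turns are resent
# so the model can answer from conversation context.
payload = {
    "messages": [
        {"role": "user", "content": "My name is Alice."},
        {"role": "assistant", "content": "Nice to meet you, Alice!"},
        {"role": "user", "content": "What is my name?"},
    ]
}

req = urllib.request.Request(
    "http://localhost:8080/agent/conversational-agent/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires the server from the example above to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```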