Create a ToolLoopAgent with tools and pass it instead of a plain model. The agent will automatically call tools in a loop until it produces a final response.
```typescript
import { openai } from "@ai-sdk/openai"
import { ToolLoopAgent, tool } from "ai"
import { z } from "zod"
import { VercelAIChatAgent } from "@reminix/vercel-ai"
import { serve } from "@reminix/runtime"

const getWeather = tool({
  description: "Get the current weather for a city",
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ city }) => {
    const data: Record<string, string> = {
      paris: "Sunny, 22°C",
      london: "Cloudy, 15°C",
      tokyo: "Rainy, 18°C",
    }
    return data[city.toLowerCase()] ?? `Weather data not available for ${city}`
  },
})

const toolAgent = new ToolLoopAgent({
  model: openai("gpt-4o-mini"),
  tools: { getWeather },
})

const chatbot = new VercelAIChatAgent(toolAgent, {
  name: "weather-assistant",
})

serve({ agents: [chatbot] })
```
ToolLoopAgent works with all three agent types — VercelAIChatAgent, VercelAITaskAgent, and VercelAIThreadAgent.
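For example, the same tool-calling agent can be exposed as a task agent instead of a chat agent. This is a hedged sketch based on the pattern above: it assumes `VercelAITaskAgent` accepts the same `(agent, options)` constructor shape as `VercelAIChatAgent`, and `toolAgent` refers to the `ToolLoopAgent` defined earlier.

```typescript
import { VercelAITaskAgent } from "@reminix/vercel-ai"
import { serve } from "@reminix/runtime"

// Assumes the constructor signature mirrors VercelAIChatAgent's;
// `toolAgent` is the ToolLoopAgent from the previous example.
const weatherTask = new VercelAITaskAgent(toolAgent, {
  name: "weather-task",
})

serve({ agents: [weatherTask] })
```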
Use VercelAIThreadAgent for agents that manage full message history.
```typescript
import { openai } from "@ai-sdk/openai"
import { VercelAIThreadAgent } from "@reminix/vercel-ai"
import { serve } from "@reminix/runtime"

const assistant = new VercelAIThreadAgent(openai("gpt-4o"), {
  name: "assistant",
  instructions: "You are a helpful assistant that remembers context.",
})

serve({ agents: [assistant] })
```
Thread agents return the complete message array, including the assistant’s response appended to the input messages.
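To illustrate the contract (not the framework's internals), here is a minimal sketch of the input/output shape: the returned array is the input messages with the assistant's reply appended. The `Message` type and `appendAssistantReply` helper are hypothetical names for illustration only.

```typescript
// Hypothetical illustration of the thread-agent contract:
// output = input messages + assistant reply appended at the end.
type Message = { role: "user" | "assistant"; content: string }

function appendAssistantReply(input: Message[], reply: string): Message[] {
  return [...input, { role: "assistant", content: reply }]
}

const history = appendAssistantReply(
  [{ role: "user", content: "Hello" }],
  "Hi! How can I help?"
)
// history now holds both the original user message and the assistant response
```

Because the full array comes back on each turn, the caller can pass it straight into the next request to preserve context.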