👾 Build Multi-Agent Workflows with OpenAI Agents SDK
Learn how to orchestrate multi-agent AI apps using the OpenAI Agents SDK. Features include guardrails, streaming, MCP tools, and dynamic workflows.

Source: MadeByAgents via o3
What Is The Agents SDK?
It’s a lightweight Python framework for building agentic AI apps with small primitives (Agents, Handoffs, Guardrails). It’s designed for multi-agent workflows and tool integration, enabling the orchestration of LLM agents with built-in tracing and function tools.
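As a minimal starting point, a single-agent run looks roughly like this (assuming the openai-agents package is installed and OPENAI_API_KEY is set in your environment):
# pip install openai-agents
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")
result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)  # Prints the agent's final text answer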
What Makes It Special?
After testing the framework thoroughly, here is what makes it different from alternatives:
Guardrails on a Per-Agent Basis
In episode #8, I outlined the importance of guardrails for customer-facing agents in production. The Agents SDK has guardrail support built in and lets you attach input/output validation on a per-agent basis.
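As a sketch, assuming the input_guardrail decorator from the SDK (the refund-keyword check is my own illustration):
from agents import Agent, GuardrailFunctionOutput, Runner, input_guardrail
from agents.exceptions import InputGuardrailTripwireTriggered

@input_guardrail
async def block_refunds(ctx, agent, user_input) -> GuardrailFunctionOutput:
    # Illustrative check: trip the guardrail if the user asks about refunds.
    triggered = "refund" in str(user_input).lower()
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=triggered)

support_agent = Agent(
    name="Support",
    instructions="Answer product questions.",
    input_guardrails=[block_refunds],  # applies to this agent only
)

try:
    print(Runner.run_sync(support_agent, "How do I get a refund?").final_output)
except InputGuardrailTripwireTriggered:
    print("Guardrail tripped: refund requests are handled elsewhere.")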
Streaming Support
The Agents SDK includes first-class streaming support. You can use Runner.run_streamed() to receive an asynchronous stream of events, either token by token or at higher levels like "tool executed" or "agent switched" via RunItemStreamEvent and AgentUpdatedStreamEvent. This enables real-time progress indicators or dynamic UIs that reflect agent activity as it happens.
import asyncio
from agents import Agent, Runner

async def main():
    streaming_agent = Agent(name="Streamer", instructions="Tell me a joke.")
    result = Runner.run_streamed(streaming_agent, "Please!")
    async for ev in result.stream_events():
        print(ev)  # Token-level and higher-level events as they stream

asyncio.run(main())
Responses API or Old Completions API
One of the most powerful aspects of the SDK is its seamless integration with the new Responses API. This API combines the simplicity of Chat Completions with built‑in tool support (web search, file search, computer use) in a single call.
from agents import Agent, Runner, WebSearchTool

# The SDK targets the Responses API by default, so hosted tools
# like web search work in a single call.
agent = Agent(name="Responder", instructions="Search the web.", tools=[WebSearchTool()])
result = Runner.run_sync(agent, "What's the capital of Spain?")
print(result.final_output)
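And if you need the older Chat Completions API instead, for example with a provider that doesn't support Responses, a sketch assuming the set_default_openai_api helper:
from agents import set_default_openai_api

# Fall back to the Chat Completions API instead of the Responses API.
set_default_openai_api("chat_completions")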
MCP Support
The SDK supports Model Context Protocol (MCP) servers, enabling agents to call external data sources or business systems (like GitHub, Postgres, Slack) as tools.
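A sketch of wiring an MCP server into an agent, assuming the MCPServerStdio helper from agents.mcp (the filesystem server and the ./data path are illustrative):
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    # Launch a local MCP server over stdio; its tools become available to the agent.
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "./data"],
        }
    ) as fs_server:
        agent = Agent(
            name="FileAgent",
            instructions="Answer questions using the filesystem tools.",
            mcp_servers=[fs_server],
        )
        result = await Runner.run(agent, "List the files you can see.")
        print(result.final_output)

asyncio.run(main())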
Require a Specific Tool or Use Auto Mode
You can finely control tool usage per agent. Agents can be forced to call a specific tool (or required to call some tool) via the model settings' tool_choice, or run in auto mode, where the LLM chooses tools dynamically. Tools are regular Python or MCP-backed functions with auto-generated Pydantic schemas.
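A sketch, assuming ModelSettings.tool_choice from the SDK (the get_weather tool is my own illustration):
from agents import Agent, ModelSettings, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It is sunny in {city}."

# "auto" (the default) lets the model decide; "required" forces some tool call;
# a tool name forces that specific tool.
forced_agent = Agent(
    name="WeatherBot",
    instructions="Answer weather questions.",
    tools=[get_weather],
    model_settings=ModelSettings(tool_choice="get_weather"),
)
print(Runner.run_sync(forced_agent, "Weather in Madrid?").final_output)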
Tracing Dashboard
Tracing to the OpenAI dashboard is enabled by default: each run records spans for LLM calls, tool executions, handoffs, and guardrail checks. This helps to debug and improve the system over time.
On the other hand, it raises privacy concerns, especially when handling sensitive customer data. Tracing can be disabled or replaced with a custom tracing processor.
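Disabling it globally is a one-liner; a sketch assuming the set_tracing_disabled helper (a per-run RunConfig flag exists as well):
from agents import set_tracing_disabled

# Turn off export of traces to the OpenAI dashboard globally.
set_tracing_disabled(True)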
Orchestrate via LLM or Deterministically via Functions
Orchestration is flexible. You can build LLM‑led workflows using instructions and handoffs, letting agents decide dynamically who runs next, or implement a more deterministic control flow in pure Python. Handoffs are just treated as tools, and agents can chain or delegate control, enabling both dynamic and scripted workflows.
from agents import Agent, Runner

agent_a = Agent(name="Analyst", instructions="Analyze.")
agent_b = Agent(name="Reporter", instructions="Report.")

# The triage agent decides at runtime which specialist receives control.
triage = Agent(
    name="Triager",
    instructions="If analysis is needed, hand off to Analyst, else to Reporter.",
    handoffs=[agent_a, agent_b],
)
print(Runner.run_sync(triage, "Tell me about the market.").final_output)
Works with Any Model Supported by LiteLLM
The Agents SDK is provider-agnostic. That means it supports a range of different models, from OpenAI models via the Responses API to 100+ other LLMs via the LiteLLM integration. But hosted tools like web search and file search are only supported by OpenAI models.
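A sketch, assuming the optional litellm extra and an Anthropic API key (the model name is just an example):
# pip install "openai-agents[litellm]"
import os
from agents import Agent, Runner
from agents.extensions.models.litellm_model import LitellmModel

agent = Agent(
    name="ClaudeAgent",
    instructions="Be concise.",
    model=LitellmModel(
        model="anthropic/claude-3-5-sonnet-20240620",
        api_key=os.environ["ANTHROPIC_API_KEY"],
    ),
)
print(Runner.run_sync(agent, "Name three uses of agents.").final_output)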
Generate Graph with Graphviz
The Agents SDK includes built-in Graphviz visualization via the draw_graph utility, making it easy to inspect and understand your system. You can generate a directed graph of your entire agent workflow at a glance.
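A sketch, assuming the optional viz extra (the two-agent setup is illustrative):
# pip install "openai-agents[viz]"
from agents import Agent
from agents.extensions.visualization import draw_graph

triage = Agent(name="Triager", handoffs=[Agent(name="Analyst"), Agent(name="Reporter")])

# Renders agents, handoffs, and tools as a directed Graphviz graph.
draw_graph(triage, filename="agent_graph")  # writes agent_graph.png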
Voice Agent Package
The built-in voice pipeline makes it straightforward to add voice capabilities. You decide which models handle speech-to-text and text-to-speech, and you can build real-time voice agents over WebRTC/WebSockets or prototype in Jupyter.
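A minimal sketch, assuming the optional voice extra and an already-captured audio buffer (microphone capture and playback are omitted):
# pip install "openai-agents[voice]"
import asyncio
import numpy as np
from agents import Agent
from agents.voice import AudioInput, SingleAgentVoiceWorkflow, VoicePipeline

async def main():
    agent = Agent(name="VoiceBot", instructions="Answer briefly.")
    pipeline = VoicePipeline(workflow=SingleAgentVoiceWorkflow(agent))
    buffer = np.zeros(24000 * 2, dtype=np.int16)  # placeholder for captured audio
    result = await pipeline.run(AudioInput(buffer=buffer))
    async for event in result.stream():
        if event.type == "voice_stream_event_audio":
            ...  # send event.data to your audio output device

asyncio.run(main())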
Dynamic System Prompts
Agents support dynamic system prompts: the instructions parameter accepts callables that receive the current run context and the agent, allowing runtime-tailored system prompts. This enables agents to adapt based on state, previous interactions, or metadata, making behavior highly flexible.
from agents import Agent, Runner

# The callable receives a run-context wrapper and the agent; the object you
# pass as context= is available on ctx.context.
agent = Agent(
    name="LocaleBot",
    instructions=lambda ctx, ag: f"You are a {ctx.context['locale']} assistant.",
)
result = Runner.run_sync(agent, "Hola!", context={"locale": "Spanish"})
print(result.final_output)
Bonus: I'm currently working on a new YouTube video where I’ll break down the Agents SDK step by step and build a working agent that showcases many of its features. If you’re curious to see the framework in action, stay tuned.
More Resources
Blog: In-depth articles on AI workflows and practical strategies for growth
AI Tool Collection: Discover and compare validated AI solutions
Consultancy: I help you discover AI potential or train your team