23 commits
4830f59
Implement OpenAI Agents span processing
nagkumar91 Oct 7, 2025
e279865
Merge branch 'main' into tracers-and-spans
nagkumar91 Oct 8, 2025
44b91e8
Update OpenAI Agents changelog with PR references
nagkumar91 Oct 8, 2025
1d78868
Add OpenAI Agents manual and zero-code examples
nagkumar91 Oct 8, 2025
f0154ea
Load dotenv in OpenAI Agents examples
nagkumar91 Oct 8, 2025
ed61f0d
Update the tracer and finalize tests
nagkumar91 Oct 8, 2025
9633585
Capture spans from zero code sample
nagkumar91 Oct 8, 2025
eefc31b
Merge branch 'main' into tracers-and-spans
nagkumar91 Oct 9, 2025
ba45d16
Default OpenAI agent trace start to now
nagkumar91 Oct 9, 2025
20e80a3
Annotate OpenAI trace provider helper
nagkumar91 Oct 9, 2025
5165bad
Remove OpenAI Agents system env override
nagkumar91 Oct 9, 2025
fd707d3
Use gen_ai.provider.name for OpenAI Agents spans
nagkumar91 Oct 9, 2025
523e7e2
Support new SDK InMemorySpanExporter import in tests
nagkumar91 Oct 9, 2025
3c5fd9a
Ensure OpenAI agent span names include model when available
nagkumar91 Oct 9, 2025
c032131
Handle agent creation spans in OpenAI Agents instrumentation
nagkumar91 Oct 9, 2025
b3cc03a
Allow overriding OpenAI agent name via environment variable
nagkumar91 Oct 9, 2025
c813d08
Define span type constants for OpenAI Agents instrumentation
nagkumar91 Oct 9, 2025
0ec1c82
Add OpenAI Agents response and completion span tests
nagkumar91 Oct 9, 2025
407fdfb
Match response finish reasons tuple in tests
nagkumar91 Oct 9, 2025
0125626
Add workflow root span support and handoff example
nagkumar91 Oct 9, 2025
1c34387
Merge branch 'main' into workflow-root-span
nagkumar91 Oct 10, 2025
1981410
Merge branch 'main' into workflow-root-span
nagkumar91 Oct 10, 2025
147b9d1
Merge branch 'main' into workflow-root-span
nagkumar91 Oct 13, 2025
@@ -1,2 +1,3 @@
examples/.env
examples/openai_agents_multi_agent_travel/.env
examples/**/.env
@@ -9,3 +9,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Initial barebones package skeleton: minimal instrumentor stub, version module,
and packaging metadata/entry point.
([#3805](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3805))
- Implement OpenAI Agents span processing aligned with GenAI semantic conventions.
([#3817](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3817))
@@ -0,0 +1,11 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and adjust if you use a non-default OTLP collector endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-openai-agents-handoffs

# Optionally override the agent name reported on spans
# OTEL_GENAI_AGENT_NAME=Travel Concierge
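The commented-out ``OTEL_GENAI_AGENT_NAME`` override above amounts to a simple environment lookup. A hypothetical sketch of how such an override can be resolved (the helper name here is illustrative, not the instrumentation's API):

```python
import os


# Hypothetical helper (illustrative name, not the instrumentation's API):
# prefer the OTEL_GENAI_AGENT_NAME override, fall back to the agent's own name.
def resolve_agent_name(default: str) -> str:
    return os.environ.get("OTEL_GENAI_AGENT_NAME", default)


os.environ["OTEL_GENAI_AGENT_NAME"] = "Travel Concierge"
print(resolve_agent_name("Assistant"))  # Travel Concierge
```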
@@ -0,0 +1,39 @@
OpenTelemetry OpenAI Agents Handoff Example
===========================================

This example shows how the OpenTelemetry OpenAI Agents instrumentation captures
spans in a small multi-agent workflow. Three agents collaborate: a primary
concierge, a concise assistant with a random-number tool, and a Spanish
specialist reached through a handoff. Running the sample produces
``invoke_agent`` spans for each agent as well as an ``execute_tool`` span for
the random-number function.

Setup
-----

1. Copy `.env.example <.env.example>`_ to ``.env`` and populate it with your
   real ``OPENAI_API_KEY``. Adjust the OTLP exporter settings if your collector
   does not listen on ``http://localhost:4317``.
2. Create a virtual environment and install the dependencies:

   ::

       python3 -m venv .venv
       source .venv/bin/activate
       pip install "python-dotenv[cli]"
       pip install -r requirements.txt

Run
---

Execute the workflow with ``dotenv`` so the environment variables from ``.env``
are loaded automatically:

::

    dotenv run -- python main.py

The script emits a short transcript to stdout while spans stream to the OTLP
endpoint defined in your environment. You should see multiple
``invoke_agent`` spans (one per agent) and an ``execute_tool`` span for the
random-number helper triggered during the run.
@@ -0,0 +1,162 @@
# pylint: skip-file
"""Multi-agent handoff example instrumented with OpenTelemetry."""

from __future__ import annotations

import asyncio
import json
import random

from agents import Agent, HandoffInputData, Runner, function_tool, handoff
from agents import trace as agent_trace
from agents.extensions import handoff_filters
from agents.models import is_gpt_5_default
from dotenv import load_dotenv

from opentelemetry import trace as otel_trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.instrumentation.openai_agents import (
    OpenAIAgentsInstrumentor,
)
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def configure_otel() -> None:
    """Configure the OpenTelemetry SDK and enable the Agents instrumentation."""

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    otel_trace.set_tracer_provider(provider)

    OpenAIAgentsInstrumentor().instrument(tracer_provider=provider)


@function_tool
def random_number_tool(maximum: int) -> int:
    """Return a random integer between 0 and ``maximum``."""

    return random.randint(0, maximum)


def spanish_handoff_message_filter(
    handoff_message_data: HandoffInputData,
) -> HandoffInputData:
    """Trim the message history forwarded to the Spanish-speaking agent."""

    if is_gpt_5_default():
        # When GPT-5 is enabled we skip additional filtering.
        return HandoffInputData(
            input_history=handoff_message_data.input_history,
            pre_handoff_items=tuple(handoff_message_data.pre_handoff_items),
            new_items=tuple(handoff_message_data.new_items),
        )

    filtered = handoff_filters.remove_all_tools(handoff_message_data)
    history = (
        tuple(filtered.input_history[2:])
        if isinstance(filtered.input_history, tuple)
        else filtered.input_history[2:]
    )

    return HandoffInputData(
        input_history=history,
        pre_handoff_items=tuple(filtered.pre_handoff_items),
        new_items=tuple(filtered.new_items),
    )


assistant = Agent(
    name="Assistant",
    instructions="Be extremely concise.",
    tools=[random_number_tool],
)

spanish_assistant = Agent(
    name="Spanish Assistant",
    instructions="You only speak Spanish and are extremely concise.",
    handoff_description="A Spanish-speaking assistant.",
)

concierge = Agent(
    name="Concierge",
    instructions=(
        "Be a helpful assistant. If the traveler switches to Spanish, hand off"
        " to the Spanish specialist. Use the random number tool when asked for"
        " numbers."
    ),
    handoffs=[
        handoff(spanish_assistant, input_filter=spanish_handoff_message_filter)
    ],
)


async def run_workflow() -> None:
    """Execute a conversation that triggers tool calls and handoffs."""

    with agent_trace(workflow_name="Travel concierge handoff"):
        # Step 1: Basic conversation with the initial assistant.
        result = await Runner.run(
            assistant,
            input="I'm planning a trip to Madrid. Can you help?",
        )

        print("Step 1 complete")

        # Step 2: Ask for a random number to exercise the tool span.
        result = await Runner.run(
            assistant,
            input=result.to_input_list()
            + [
                {
                    "content": "Pick a lucky number between 0 and 20",
                    "role": "user",
                }
            ],
        )

        print("Step 2 complete")

        # Step 3: Continue the conversation with the concierge agent.
        result = await Runner.run(
            concierge,
            input=result.to_input_list()
            + [
                {
                    "content": "Recommend some sights in Madrid for a weekend trip.",
                    "role": "user",
                }
            ],
        )

        print("Step 3 complete")

        # Step 4: Switch to Spanish to cause a handoff to the specialist.
        result = await Runner.run(
            concierge,
            input=result.to_input_list()
            + [
                {
                    "content": "Por favor habla en español. ¿Puedes resumir el plan?",
                    "role": "user",
                }
            ],
        )

        print("Step 4 complete")

        print("\n=== Conversation Transcript ===\n")
        for message in result.to_input_list():
            print(json.dumps(message, indent=2, ensure_ascii=False))


def main() -> None:
    load_dotenv()
    configure_otel()
    asyncio.run(run_workflow())


if __name__ == "__main__":
    main()
@@ -0,0 +1,6 @@
openai-agents~=0.3.3
python-dotenv~=1.0

opentelemetry-sdk~=1.36.0
opentelemetry-exporter-otlp-proto-grpc~=1.36.0
opentelemetry-instrumentation-openai-agents~=0.1.0.dev
@@ -0,0 +1,11 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and adjust if you use a non-default OTLP collector endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-openai-agents-manual

# Optionally override the agent name reported on spans
# OTEL_GENAI_AGENT_NAME=Travel Concierge
@@ -0,0 +1,42 @@
OpenTelemetry OpenAI Agents Instrumentation Example
===================================================

This example demonstrates how to manually configure the OpenTelemetry SDK
alongside the OpenAI Agents instrumentation.

Running `main.py <main.py>`_ produces spans for the end-to-end agent run,
including tool invocations and model generations. Spans are exported through
OTLP/gRPC to the endpoint configured in the environment.

Setup
-----

1. Copy `.env.example <.env.example>`_ to ``.env`` and update it with your
   real ``OPENAI_API_KEY``. If your OTLP collector is not reachable via
   ``http://localhost:4317``, adjust the endpoint variables as needed.
2. Create a virtual environment and install the dependencies:

   ::

       python3 -m venv .venv
       source .venv/bin/activate
       pip install "python-dotenv[cli]"
       pip install -r requirements.txt

Run
---

Execute the sample with ``dotenv`` so the environment variables from ``.env``
are applied:

::

    dotenv run -- python main.py

The script also calls ``load_dotenv`` itself, so running ``python main.py``
directly works as well once ``.env`` is in place or the required variables are
already exported in your shell.

You should see the agent response printed to the console while spans export to
your configured observability backend.
@@ -0,0 +1,65 @@
# pylint: skip-file
"""Manual OpenAI Agents instrumentation example."""

from __future__ import annotations

from agents import Agent, Runner, function_tool
from dotenv import load_dotenv

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.instrumentation.openai_agents import (
    OpenAIAgentsInstrumentor,
)
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def configure_otel() -> None:
    """Configure the OpenTelemetry SDK for exporting spans."""

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)

    OpenAIAgentsInstrumentor().instrument(tracer_provider=provider)


@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather response for the requested city."""

    return f"The forecast for {city} is sunny with pleasant temperatures."


def run_agent() -> None:
    """Create a simple agent and execute a single run."""

    assistant = Agent(
        name="Travel Concierge",
        instructions=(
            "You are a concise travel concierge. Use the weather tool when the"
            " traveler asks about local conditions."
        ),
        tools=[get_weather],
    )

    result = Runner.run_sync(
        assistant,
        "I'm visiting Barcelona this weekend. How should I pack?",
    )

    print("Agent response:")
    print(result.final_output)


def main() -> None:
    load_dotenv()
    configure_otel()
    run_agent()


if __name__ == "__main__":
    main()
@@ -0,0 +1,6 @@
openai-agents~=0.3.3
python-dotenv~=1.0

opentelemetry-sdk~=1.36.0
opentelemetry-exporter-otlp-proto-grpc~=1.36.0
opentelemetry-instrumentation-openai-agents~=0.1.0.dev
@@ -0,0 +1,14 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and adjust if you use a non-default OTLP collector endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-openai-agents-zero-code

# Enable auto-instrumentation for logs if desired
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true

# Optionally override the agent name reported on spans
# OTEL_GENAI_AGENT_NAME=Travel Concierge