Merged
28 changes: 28 additions & 0 deletions .github/workflows/test-genai-function-calling.yml
@@ -0,0 +1,28 @@
name: test-genai-function-calling

on:
pull_request:
branches:
- main
paths:
- 'genai-function-calling/openai-agents/**'
- '!**/*.md'
- '!**/*.png'

jobs:
test:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4

- name: Set up Python 3.12
uses: actions/setup-python@v5
with:
python-version: 3.12

- name: openai-agents
run: |
pip install -r requirements.txt
pip install -r requirements-dev.txt
pytest --vcr-record=none
working-directory: genai-function-calling/openai-agents
15 changes: 11 additions & 4 deletions genai-function-calling/README.md
@@ -15,6 +15,7 @@ and Kibana.

Here are the examples:

* [OpenAI Agents SDK (Python)](openai-agents)
* [Semantic Kernel .NET](semantic-kernel-dotnet)
* [Spring AI (Java)](spring-ai)
* [Vercel AI (Node.js)](vercel-ai)
@@ -60,16 +61,22 @@ flexibility in defining and testing functions.

## Observability with EDOT

The OpenTelemetry instrumentation approach varies per GenAI framework. Some are
[native][native] (their codebase includes OpenTelemetry code), while others
rely on external instrumentation libraries. Signals vary as well. While all
produce traces, only some produce logs or metrics.

We use Elastic Distributions of OpenTelemetry (EDOT) SDKs to enable these
features and fill in other data, such as HTTP requests underlying the LLM and
tool calls. In doing so, this implements the "zero code instrumentation"
pattern of OpenTelemetry.
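The "zero code instrumentation" idea can be sketched in a few lines of plain Python: a function is wrapped at startup, so spans are recorded without editing the code that calls it. All names below are hypothetical; EDOT patches real HTTP and LLM client libraries in this spirit.

```python
# Toy sketch of zero-code instrumentation: wrap an existing function at
# startup, leaving every caller unchanged. SPANS stands in for a real
# OpenTelemetry span exporter; call_llm is a hypothetical library function.
import functools

SPANS = []

def instrument(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        SPANS.append(f"start {fn.__name__}")
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append(f"end {fn.__name__}")
    return wrapper

def call_llm(prompt):  # hypothetical library function
    return "pong"

call_llm = instrument(call_llm)  # patched once at startup; callers unchanged
call_llm("ping")
```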

Here's an example Kibana screenshot of one of the examples, looked up from a
query like:

http://localhost:5601/app/apm/traces?rangeFrom=now-15m&rangeTo=now

![Kibana screenshot](./kibana-trace.png)

---
[native]: https://opentelemetry.io/docs/languages/java/instrumentation/#native-instrumentation
Binary file modified genai-function-calling/kibana-trace.png
18 changes: 18 additions & 0 deletions genai-function-calling/openai-agents/Dockerfile
@@ -0,0 +1,18 @@
# Use glibc-based image with pre-compiled wheels for psutil
FROM python:3.12-slim

# TODO: temporary until openai-agents 0.0.5
RUN apt-get update \
&& apt-get install -y --no-install-recommends git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

RUN --mount=type=cache,target=/root/.cache/pip python -m pip install --upgrade pip

COPY requirements.txt /tmp
RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip edot-bootstrap --action=install

COPY main.py /

CMD [ "python", "main.py" ]
90 changes: 90 additions & 0 deletions genai-function-calling/openai-agents/README.md
@@ -0,0 +1,90 @@
# Function Calling with OpenAI Agents SDK (Python)

[main.py](main.py) implements the [example application flow][flow] using
[OpenAI Agents SDK (Python)][openai-agents-python].

[Dockerfile](Dockerfile) starts the application with Elastic Distribution
of OpenTelemetry (EDOT) Python, via `opentelemetry-instrument`.

Notably, this shows how to add extra instrumentation to EDOT, as the OpenAI
Agents support is via [OpenInference][openinference].
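The application flow behind the example — the LLM requests a tool call, the app executes it, and the LLM uses the result to answer — can be sketched without any SDK. Everything below is a toy stand-in: the tool, the canned "LLM" turns, and the message format are all hypothetical, not the Agents SDK API.

```python
# Toy, SDK-free sketch of the function-calling loop. The "LLM" is a canned
# script of JSON turns; the real example drives OpenAI via the Agents SDK.
import json

def get_latest_elasticsearch_version():  # hypothetical tool
    return "8.17.3"

TOOLS = {"get_latest_elasticsearch_version": get_latest_elasticsearch_version}

def run_agent(llm_turns):
    tool_result = None
    for turn in llm_turns:
        msg = json.loads(turn)
        if "tool_call" in msg:
            tool_result = TOOLS[msg["tool_call"]]()  # app executes the tool
            continue  # a real agent feeds tool_result back to the LLM here
        return msg["content"].format(version=tool_result)

TURNS = [
    '{"tool_call": "get_latest_elasticsearch_version"}',
    '{"content": "The latest stable version of Elasticsearch is {version}"}',
]
print(run_agent(TURNS))
```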

## Configure

Copy [env.example](env.example) to `.env` and update its `OPENAI_API_KEY`.

An OTLP-compatible endpoint should be listening for traces, metrics and logs on
`http://localhost:4317`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.

For example, if Elastic APM server is running locally, edit `.env` like this:
```
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
```

## Run with Docker

```bash
docker compose run --build --rm genai-function-calling
```

## Run with Python

First, set up a Python virtual environment like this:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install 'python-dotenv[cli]'
```

Next, install required packages:
```bash
pip install -r requirements.txt
```

Now, use EDOT to bootstrap instrumentation (this only needs to happen once):
```bash
edot-bootstrap --action=install
```

Finally, run `main.py` (notice the `opentelemetry-instrument` prefix):
```bash
dotenv run --no-override -- opentelemetry-instrument python main.py
```

## Tests

Tests use [pytest-vcr][pytest-vcr] to capture HTTP traffic for offline unit
testing. Recorded responses keep the tests passing even though LLMs are
non-deterministic and the Elasticsearch version list changes frequently.

Run like this:
```bash
pip install -r requirements-dev.txt
pytest
```
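The record/replay idea behind the cassettes can be sketched with a plain JSON file: the first call records a response, later calls replay it deterministically. This is a stdlib-only toy, not pytest-vcr's API; `fetch_versions_live` and the cassette path are hypothetical.

```python
# Toy sketch of VCR-style record/replay using a JSON file as the "cassette".
# pytest-vcr does this for real HTTP traffic; names here are hypothetical.
import json
import os
import tempfile

CASSETTE = os.path.join(tempfile.gettempdir(), "toy_cassette.json")

def fetch_versions_live():  # stands in for a real HTTP call
    return ["8.17.3", "8.17.2"]

def fetch_versions():
    if os.path.exists(CASSETTE):  # replay mode: deterministic, offline
        with open(CASSETTE) as f:
            return json.load(f)
    data = fetch_versions_live()  # record mode: hit the "network" once
    with open(CASSETTE, "w") as f:
        json.dump(data, f)
    return data

first = fetch_versions()   # records the cassette
second = fetch_versions()  # replays it
print(first == second)
```

Deleting the cassette file is exactly the re-record step described below.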

OpenAI responses routinely change as features are added, and some changes may
cause failures. To re-record, delete [cassettes/test_main.yaml][test_main.yaml]
and run pytest with dotenv, so that the ENV variables are present:

```bash
rm cassettes/test_main.yaml
dotenv -f ../.env run -- pytest
```

## Notes

The LLM should generate something like "The latest stable version of
Elasticsearch is 8.17.3", unless it hallucinates. If you see something else,
just run it again.

OpenAI Agents SDK's OpenTelemetry instrumentation is via
[OpenInference][openinference] and only produces traces (not logs or metrics).

---
[flow]: ../README.md#example-application-flow
[openai-agents-python]: https://github.com/openai/openai-agents-python
[pytest-vcr]: https://pytest-vcr.readthedocs.io/
[test_main.yaml]: cassettes/test_main.yaml
[openinference]: https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-openai-agents