docs: deprecate phoenix.trace.openai #4757

Merged
3 commits merged on Sep 26, 2024
@@ -4,6 +4,14 @@ description: >-
example.
---

# Setup

Make sure you have Phoenix and the instrumentors needed for the experiment installed. For this example, we will use the OpenAI instrumentor to trace the LLM calls.

```bash
pip install arize-phoenix openinference-instrumentation-openai openai
```
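
If you do not already have a Phoenix server running, one way to start one is in-process from Python. This is a minimal sketch assuming the default local setup; you can also configure `register()` to point at an existing Phoenix instance:

```python
import phoenix as px

# Launch a local Phoenix app in the background.
# Skip this if a Phoenix server is already running elsewhere.
px.launch_app()
```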

# Run Experiments

The key steps of running an experiment are:
@@ -116,7 +124,7 @@ def generate_query(question):

def execute_query(query):
return conn.query(query).fetchdf().to_dict(orient="records")


def text2sql(question):
results = error = None
@@ -164,9 +172,12 @@ def has_results(output) -> bool:
Instrumenting the LLM will also give us the spans and traces that will be linked to the experiment and can be examined in the Phoenix UI:

```python
-from phoenix.trace.openai import OpenAIInstrumentor
+from openinference.instrumentation.openai import OpenAIInstrumentor

+from phoenix.otel import register

-OpenAIInstrumentor().instrument()
+tracer_provider = register()
+OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```
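
For reference, the sketch below shows the new-style setup end to end; the model name and prompt are illustrative, and it assumes a reachable Phoenix collector and an `OPENAI_API_KEY` in the environment:

```python
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Register a tracer provider (defaults to the local Phoenix collector)
# and instrument the OpenAI client library with it.
tracer_provider = register()
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Write a SQL query that counts rows in a table named movies."}],
)
print(response.choices[0].message.content)  # this call shows up as a span in Phoenix
```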

#### Run the Task and Evaluators
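
The cells under this heading are collapsed in the diff. For context, a typical invocation of Phoenix's experiments API looks roughly like the sketch below; `dataset`, `text2sql`, and `has_results` refer to objects defined earlier in the walkthrough, and the exact keyword arguments may vary by version:

```python
from phoenix.experiments import run_experiment

# Assumes `dataset`, `text2sql`, and `has_results` were created earlier in the walkthrough.
experiment = run_experiment(
    dataset,
    task=text2sql,
    evaluators=[has_results],
    experiment_name="text2sql-baseline",  # illustrative name
)
```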
9 changes: 6 additions & 3 deletions docs/datasets-and-experiments/use-cases-datasets/text2sql.md
@@ -7,7 +7,7 @@
Let's work through a Text2SQL use case where we are starting from scratch without a nice and clean dataset of questions, SQL queries, or expected responses.

```shell
-pip install 'arize-phoenix>=4.6.0' openai duckdb datasets pyarrow pydantic nest_asyncio --quiet
+pip install 'arize-phoenix>=4.6.0' openai duckdb datasets pyarrow pydantic nest_asyncio openinference-instrumentation-openai --quiet
```

Let's first start a Phoenix server. Note that this is not necessary if you already have a Phoenix server running.
@@ -21,9 +21,12 @@ px.launch_app()
Let's also set up tracing for OpenAI, as we will be using their API to perform the synthesis.

```python
-from phoenix.trace.openai import OpenAIInstrumentor
+from openinference.instrumentation.openai import OpenAIInstrumentor

-OpenAIInstrumentor().instrument()
+from phoenix.otel import register

+tracer_provider = register()
+OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

Let's make sure we can run async code in the notebook.
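
The cell that does this is collapsed in the diff; presumably it is the usual `nest_asyncio` setup, roughly:

```python
import nest_asyncio

# Allow nested event loops so async helpers run inside the notebook.
nest_asyncio.apply()
```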
17 changes: 14 additions & 3 deletions docs/tracing/integrations-tracing/autogen-support.md
@@ -8,13 +8,24 @@ AutoGen is a new agent framework from Microsoft that allows for complex Agent cr

The AutoGen agent framework allows you to create multiple agents and connect them to work together to accomplish tasks.

First, install the dependencies:

```shell
pip install openinference-instrumentation-openai arize-phoenix-otel
```

Then instrument the application:

```python
-from phoenix.trace.openai.instrumentor import OpenAIInstrumentor
-from phoenix.trace.openai import OpenAIInstrumentor
+from openinference.instrumentation.openai import OpenAIInstrumentor
+from phoenix.otel import register
import phoenix as px

px.launch_app()
-OpenAIInstrumentor().instrument()

+tracer_provider = register()
+OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

The Phoenix support is simple in its first incarnation but allows for capturing all of the prompts and responses that occur under the framework between each agent.
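
For illustration, a minimal two-agent sketch with the classic `pyautogen` API might look like the following; the agent names, model, and config are assumptions rather than part of the original doc, and once the instrumentor above is active the underlying OpenAI calls should appear as traces in Phoenix:

```python
import os

import autogen

# Illustrative model/config; adjust to your environment.
config_list = [{"model": "gpt-4o-mini", "api_key": os.environ["OPENAI_API_KEY"]}]

assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config=False,
)

# Each message exchanged here is backed by an OpenAI call that Phoenix records as a span.
user_proxy.initiate_chat(assistant, message="Summarize what OpenTelemetry tracing gives us in two sentences.")
```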
31 changes: 21 additions & 10 deletions tutorials/experiments/txt2sql.ipynb
@@ -23,11 +23,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"!pip install \"arize-phoenix>=4.6.0\" openai duckdb datasets pyarrow \"pydantic>=2.0.0\" nest_asyncio --quiet"
"!pip install \"arize-phoenix>=4.6.0\" openai duckdb datasets pyarrow \"pydantic>=2.0.0\" nest_asyncio openinference-instrumentation-openai --quiet"
]
},
{
@@ -61,9 +61,12 @@
"metadata": {},
"outputs": [],
"source": [
"from phoenix.trace.openai import OpenAIInstrumentor\n",
"from openinference.instrumentation.openai import OpenAIInstrumentor\n",
"\n",
"from phoenix.otel import register\n",
"\n",
"OpenAIInstrumentor().instrument()"
"tracer_provider = register()\n",
"OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)"
]
},
{
@@ -75,7 +78,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -93,7 +96,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -141,7 +144,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
@@ -225,7 +228,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
@@ -273,7 +276,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
@@ -302,7 +305,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
@@ -682,7 +685,15 @@
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},