diff --git a/docs/datasets-and-experiments/how-to-experiments/run-experiments.md b/docs/datasets-and-experiments/how-to-experiments/run-experiments.md
index 6095c87b62..067aed525f 100644
--- a/docs/datasets-and-experiments/how-to-experiments/run-experiments.md
+++ b/docs/datasets-and-experiments/how-to-experiments/run-experiments.md
@@ -4,6 +4,14 @@ description: >-
   example.
 ---
 
+# Setup
+
+Make sure Phoenix and the instrumentors needed for the experiment are installed. For this example we will use the OpenAI instrumentor to trace the LLM calls.
+
+```bash
+pip install arize-phoenix openinference-instrumentation-openai openai
+```
+
 # Run Experiments
 
 The key steps of running an experiment are:
@@ -116,7 +124,7 @@ def generate_query(question):
 
 def execute_query(query):
     return conn.query(query).fetchdf().to_dict(orient="records")
 
-    
+
 def text2sql(question):
     results = error = None
@@ -164,9 +172,12 @@ def has_results(output) -> bool:
 Instrumenting the LLM will also give us the spans and traces that will be linked to the experiment, and can be examine in the Phoenix UI:
 
 ```python
-from phoenix.trace.openai import OpenAIInstrumentor
+from openinference.instrumentation.openai import OpenAIInstrumentor
+
+from phoenix.otel import register
 
-OpenAIInstrumentor().instrument()
+tracer_provider = register()
+OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
 ```
 
 #### Run the Task and Evaluators
diff --git a/docs/datasets-and-experiments/use-cases-datasets/text2sql.md b/docs/datasets-and-experiments/use-cases-datasets/text2sql.md
index 6d2877ab57..08ed7dad73 100644
--- a/docs/datasets-and-experiments/use-cases-datasets/text2sql.md
+++ b/docs/datasets-and-experiments/use-cases-datasets/text2sql.md
@@ -7,7 +7,7 @@
 Let's work through a Text2SQL use case where we are starting from scratch without a nice and clean dataset of questions, SQL queries, or expected responses.
 
 ```shell
-pip install 'arize-phoenix>=4.6.0' openai duckdb datasets pyarrow pydantic nest_asyncio --quiet
+pip install 'arize-phoenix>=4.6.0' openai duckdb datasets pyarrow pydantic nest_asyncio openinference-instrumentation-openai --quiet
 ```
 
 Let's first start a phoenix server. Note that this is not necessary if you have a phoenix server running already.
@@ -21,9 +21,12 @@ px.launch_app()
 Let's also setup tracing for OpenAI as we will be using their API to perform the synthesis.
 
 ```python
-from phoenix.trace.openai import OpenAIInstrumentor
+from openinference.instrumentation.openai import OpenAIInstrumentor
 
-OpenAIInstrumentor().instrument()
+from phoenix.otel import register
+
+tracer_provider = register()
+OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
 ```
 
 Let's make sure we can run async code in the notebook.
diff --git a/docs/tracing/integrations-tracing/autogen-support.md b/docs/tracing/integrations-tracing/autogen-support.md
index d0cae8681d..fd039c395f 100644
--- a/docs/tracing/integrations-tracing/autogen-support.md
+++ b/docs/tracing/integrations-tracing/autogen-support.md
@@ -8,13 +8,23 @@ AutoGen is a new agent framework from Microsoft that allows for complex Agent cr
 The AutoGen Agent framework allows creation of multiple agents and connection of those agents to work together to accomplish tasks.
 
+First, install the dependencies:
+
+```shell
+pip install openinference-instrumentation-openai arize-phoenix-otel
+```
+
+Then instrument the application:
+
 ```python
-from phoenix.trace.openai.instrumentor import OpenAIInstrumentor
-from phoenix.trace.openai import OpenAIInstrumentor
+from openinference.instrumentation.openai import OpenAIInstrumentor
+from phoenix.otel import register
 import phoenix as px
 
 px.launch_app()
-OpenAIInstrumentor().instrument()
+
+tracer_provider = register()
+OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
 ```
 
 The Phoenix support is simple in its first incarnation but allows for capturing all of the prompt and responses that occur under the framework between each agent.
 
diff --git a/tutorials/experiments/txt2sql.ipynb b/tutorials/experiments/txt2sql.ipynb
index e6b097ab16..637be5469b 100644
--- a/tutorials/experiments/txt2sql.ipynb
+++ b/tutorials/experiments/txt2sql.ipynb
@@ -23,11 +23,11 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 1,
    "metadata": {},
    "outputs": [],
    "source": [
-    "!pip install \"arize-phoenix>=4.6.0\" openai duckdb datasets pyarrow \"pydantic>=2.0.0\" nest_asyncio --quiet"
+    "!pip install \"arize-phoenix>=4.6.0\" openai duckdb datasets pyarrow \"pydantic>=2.0.0\" nest_asyncio openinference-instrumentation-openai --quiet"
    ]
   },
   {
@@ -61,9 +61,12 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from phoenix.trace.openai import OpenAIInstrumentor\n",
+    "from openinference.instrumentation.openai import OpenAIInstrumentor\n",
+    "\n",
+    "from phoenix.otel import register\n",
     "\n",
-    "OpenAIInstrumentor().instrument()"
+    "tracer_provider = register()\n",
+    "OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)"
    ]
   },
   {
@@ -75,7 +78,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 4,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -93,7 +96,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 5,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -141,7 +144,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 7,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -225,7 +228,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 10,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -273,7 +276,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 12,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -302,7 +305,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 13,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -682,7 +685,15 @@
    "name": "python3"
   },
   "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
    "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
    "version": "3.12.4"
   }
  },
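For reference, every file touched above converges on the same instrumentation pattern: install `openinference-instrumentation-openai`, create a tracer provider with `phoenix.otel.register()`, and pass it to `OpenAIInstrumentor().instrument()`. Below is a minimal end-to-end sketch of that pattern, not taken verbatim from the PR; it assumes `arize-phoenix`, `openinference-instrumentation-openai`, and `openai` are installed, that `OPENAI_API_KEY` is set, and the model name and prompt are illustrative only.

```python
import phoenix as px
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Start a local Phoenix server (skip if one is already running).
px.launch_app()

# Register an OpenTelemetry tracer provider pointed at Phoenix and
# route OpenAI spans through it.
tracer_provider = register()
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# Any OpenAI call made after instrumentation should show up as a trace in the Phoenix UI.
from openai import OpenAI

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, swap for whatever you use
    messages=[{"role": "user", "content": "Say hello"}],
)
```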