Commit
Finish removing references
anticorrelator committed May 17, 2024
1 parent a08b31d commit cd780a9
Showing 6 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion docs/how-to/manage-the-app.md
@@ -9,7 +9,7 @@ description: >-
## Define Your Inferences

{% hint style="info" %}
-For a conceptual overview of inferences, including an explanation of when to use a single inference vs. primary and reference inferences, see [Phoenix Basics](../inferences/inferences.md#datasets).
+For a conceptual overview of inferences, including an explanation of when to use a single inference vs. primary and reference inferences, see [Phoenix Basics](../inferences/inferences.md#inferences).
{% endhint %}

To define inferences, you must load your data into a pandas dataframe and [create a matching schema](define-your-schema/). If you have a dataframe `prim_df` and a matching `prim_schema`, you can define inferences named "primary" with
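The code snippet itself sits below the fold of this diff. A minimal sketch of what it typically looks like with the Phoenix API, using the `prim_df` and `prim_schema` names from the sentence above (the dataframe contents and column names here are invented for illustration), might be:

```python
import pandas as pd
import phoenix as px

# Hypothetical dataframe and matching schema standing in for prim_df / prim_schema.
prim_df = pd.DataFrame(
    {"prediction": ["fraud", "not_fraud"], "actual": ["fraud", "fraud"]}
)
prim_schema = px.Schema(
    prediction_label_column_name="prediction",
    actual_label_column_name="actual",
)

# Define inferences named "primary" from the dataframe and its matching schema.
prim_inferences = px.Inferences(dataframe=prim_df, schema=prim_schema, name="primary")
```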
@@ -20,13 +20,13 @@ They can then easily augment and fine tune the data and verify improved performa

There are two ways export data out of [Arize](https://app.gitbook.com/o/-MB4weB2E-qpBe07nmSL/s/-MAlgpMyBRcl2qFZRQ67/) for further investigation:

-1. The easiest way is to click the export button on the Embeddings and Datasets pages. This will produce a code snippet that you can copy into a Python environment and install Phoenix. This code snippet will include the date range you have selected in the [Arize](https://app.gitbook.com/o/-MB4weB2E-qpBe07nmSL/s/-MAlgpMyBRcl2qFZRQ67/) platform, in addition to the datasets you have selected.
+1. The easiest way is to click the export button on the Embeddings and Inferences pages. This will produce a code snippet that you can copy into a Python environment and install Phoenix. This code snippet will include the date range you have selected in the [Arize](https://app.gitbook.com/o/-MB4weB2E-qpBe07nmSL/s/-MAlgpMyBRcl2qFZRQ67/) platform, in addition to the inferences you have selected.

<figure><img src="../.gitbook/assets/image (4).png" alt=""><figcaption><p>Export button on Embeddings tab in Arize UI</p></figcaption></figure>

<figure><img src="../.gitbook/assets/image (6).png" alt=""><figcaption><p>Export to Phoenix module in Arize UI</p></figcaption></figure>

-2. Users can also query [Arize](https://app.gitbook.com/o/-MB4weB2E-qpBe07nmSL/s/-MAlgpMyBRcl2qFZRQ67/) for data directly using the Arize Python export client. We recommend doing this once you're more comfortable with the in-platform export functionality, as you will need to manually enter in the data ranges and datasets you want to export.
+2. Users can also query [Arize](https://app.gitbook.com/o/-MB4weB2E-qpBe07nmSL/s/-MAlgpMyBRcl2qFZRQ67/) for data directly using the Arize Python export client. We recommend doing this once you're more comfortable with the in-platform export functionality, as you will need to manually enter the date ranges and data you want to export.

```python
os.environ['ARIZE_API_KEY'] = ARIZE_API_KEY
```
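The snippet above is truncated in this diff. A hedged sketch of what querying with the Arize Python export client can look like follows; the space/model identifiers and dates are placeholders, and the exact client API should be checked against Arize's documentation:

```python
import os
from datetime import datetime

from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

os.environ["ARIZE_API_KEY"] = "your-api-key"  # placeholder

client = ArizeExportClient()
# Manually specify the date range and model whose data you want to export.
df = client.export_model_to_df(
    space_id="YOUR_SPACE_ID",    # placeholder
    model_id="YOUR_MODEL_ID",    # placeholder
    environment=Environments.PRODUCTION,
    start_time=datetime(2024, 5, 1),
    end_time=datetime(2024, 5, 17),
)
```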
2 changes: 1 addition & 1 deletion docs/quickstart/llm-traces.md
@@ -182,7 +182,7 @@ Once you've executed a sufficient number of queries (or chats) to your applicati

## Trace Datasets

-Phoenix also support datasets that contain [OpenInference trace](../reference/open-inference.md) data. This allows data from a LangChain and LlamaIndex running instance explored for analysis offline.
+Phoenix also supports loading data that contains [OpenInference trace](../reference/open-inference.md) data. This allows data from a running LangChain or LlamaIndex instance to be explored offline for analysis.

There are two ways to extract trace dataframes. The two ways for LangChain are described below.
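The two approaches themselves are below the fold of this diff. As one illustration, assuming Phoenix's client API, spans collected from a traced LangChain application can be pulled into a dataframe like this:

```python
import phoenix as px

# After tracing a LangChain application into a running Phoenix instance,
# pull the collected spans into a pandas dataframe for offline analysis.
client = px.Client()
spans_df = client.get_spans_dataframe()
```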

2 changes: 1 addition & 1 deletion docs/quickstart/phoenix-inferences/README.md
@@ -168,7 +168,7 @@ prod_schema = px.Schema(
{% hint style="info" %}
**When do I need a different schema?**

-In general, if both datasets you are visualizing have identical schemas, you can reuse the Schema object.
+In general, if both sets of inferences you are visualizing have identical schemas, you can reuse the Schema object.

However, there are often differences between the schema of a primary and reference dataset. For example:
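The bulleted examples are elided in this diff. As a hypothetical illustration (column names invented), a reference schema might carry ground-truth labels that freshly collected production data does not yet have:

```python
import phoenix as px

# Hypothetical column names: the reference data includes ground-truth labels,
# while production data has predictions only.
ref_schema = px.Schema(
    prediction_label_column_name="prediction",
    actual_label_column_name="actual",
)
prod_schema = px.Schema(
    prediction_label_column_name="prediction",
)
```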

2 changes: 1 addition & 1 deletion docs/setup/configuration.md
@@ -25,7 +25,7 @@ The following environment variables will control how your phoenix server runs.
* **PHOENIX\_PORT:** The port to run the phoenix web server. Defaults to 6006.
* **PHOENIX\_GRPC\_PORT:** The port to run the gRPC OTLP trace collector. Defaults to 4317.
* **PHOENIX\_HOST:** The host to run the phoenix server. Defaults to 0.0.0.0
-* **PHOENIX\_WORKING\_DIR:** The directory in which to save, load, and export datasets. This directory must be accessible by both the Phoenix server and the notebook environment. Defaults to `~/.phoenix/`
+* **PHOENIX\_WORKING\_DIR:** The directory in which to save, load, and export data. This directory must be accessible by both the Phoenix server and the notebook environment. Defaults to `~/.phoenix/`
* **PHOENIX\_SQL\_DATABASE\_URL:** The SQL database URL to use when logging traces and evals. If you plan on using SQLite, it's advised to use a persistent volume and simply point the `PHOENIX_WORKING_DIR` to that volume. If no URL is specified, Phoenix starts with a file-based SQLite database in a temporary folder, the location of which is shown at startup. Phoenix also supports PostgreSQL as shown below:
* PostgreSQL, e.g. `postgresql://@host/dbname?user=user&password=password` or `postgresql://user:password@host/dbname`
* SQLite, e.g. `sqlite:///path/to/database.db`
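Taken together, a typical configuration (values illustrative) could be set in the shell before launching Phoenix:

```shell
# Illustrative values; adjust for your deployment.
export PHOENIX_PORT=6006
export PHOENIX_GRPC_PORT=4317
export PHOENIX_HOST=0.0.0.0
export PHOENIX_WORKING_DIR="$HOME/.phoenix"
# Point Phoenix at a persistent PostgreSQL database instead of the default SQLite file.
export PHOENIX_SQL_DATABASE_URL="postgresql://user:password@host/dbname"
```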
2 changes: 1 addition & 1 deletion docs/tracing/how-to-tracing/llm-evaluations.md
@@ -48,7 +48,7 @@ px.Client().log_evaluations(

## Logging Multiple Evaluation DataFrames

-Multiple evaluation datasets can be logged by the same `px.Client().log_evaluations()` function call.
+Multiple sets of Evaluations can be logged by the same `px.Client().log_evaluations()` function call.

```python
from phoenix.trace import SpanEvaluations

# Dataframe names below are illustrative placeholders; the rest of the
# original snippet is truncated in this diff.
px.Client().log_evaluations(
    SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_df),
    SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_df),
)
```
