
Commit

docs: update readme quickstart (#503)
axiomofjoy authored Apr 4, 2023
1 parent 3f210b6 commit 756108e
Showing 1 changed file (README.md) with 58 additions and 10 deletions.
<br/>
<br/>
<a href="https://join.slack.com/t/arize-ai/shared_invite/zt-1px8dcmlf-fmThhDFD_V_48oU7ALan4Q">
<img src="https://img.shields.io/badge/slack-Arize%20AI%20Community-blue.svg?logo=slack"/>
<img src="https://img.shields.io/static/v1?message=Community&logo=slack&labelColor=grey&color=blue&logoColor=white&label=%20"/>
</a>
<a href="https://pypi.org/project/arize-phoenix/">
<img src="https://img.shields.io/pypi/v/arize-phoenix?color=blue">
Phoenix provides MLOps insights at lightning speed with zero-config observability.

```
pip install arize-phoenix
```

## Quickstart

In this section, you will get Phoenix up and running with a few lines of code.
[![Open in Colab](https://img.shields.io/static/v1?message=Open%20in%20Colab&logo=googlecolab&labelColor=grey&color=blue&logoColor=orange&label=%20)](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/quickstart.ipynb) [![Open in GitHub](https://img.shields.io/static/v1?message=Open%20in%20GitHub&logo=github&labelColor=grey&color=blue&logoColor=white&label=%20)](https://github.com/Arize-ai/phoenix/blob/main/tutorials/quickstart.ipynb)

After installing `arize-phoenix` in your Jupyter or Colab environment, open your notebook and import the required libraries.

```python
from dataclasses import replace
import pandas as pd
import phoenix as px
```

Download curated datasets and load them into pandas DataFrames.

```python
train_df = pd.read_parquet(
    "https://storage.googleapis.com/arize-assets/phoenix/datasets/unstructured/cv/human-actions/human_actions_training.parquet"
)
prod_df = pd.read_parquet(
    "https://storage.googleapis.com/arize-assets/phoenix/datasets/unstructured/cv/human-actions/human_actions_production.parquet"
)
```
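
Optionally, take a quick look at the data before defining schemas. This is a small sanity check, assuming the DataFrames contain the columns referenced in the next step (`prediction_ts`, `predicted_action`, `actual_action`, `image_vector`, `url`).

```python
# Optional sanity check: list the columns the schemas below will reference.
# Assumption: the parquet files include prediction_ts, predicted_action,
# actual_action, image_vector, and url.
print(train_df.columns.tolist())
print(prod_df.columns.tolist())
train_df.head()
```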

Define schemas that tell Phoenix which columns of your DataFrames correspond to features, predictions, actuals (i.e., ground truth), embeddings, etc.

```python
train_schema = px.Schema(
    timestamp_column_name="prediction_ts",
    prediction_label_column_name="predicted_action",
    actual_label_column_name="actual_action",
    embedding_feature_column_names={
        "image_embedding": px.EmbeddingColumnNames(
            vector_column_name="image_vector",
            link_to_data_column_name="url",
        ),
    },
)
prod_schema = replace(train_schema, actual_label_column_name=None)
```
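
Since `replace` comes from the `dataclasses` module, `px.Schema` behaves as a dataclass here: the call returns a copy of the training schema with `actual_label_column_name` set to `None`, reflecting that the production data in this example has no ground-truth labels.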

Define your production and training datasets.

```python
prod_ds = px.Dataset(prod_df, prod_schema)
train_ds = px.Dataset(train_df, train_schema)
```

Launch the app.

```python
session = px.launch_app(prod_ds, train_ds)
```
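
If you want to explore the production data on its own, you can also launch Phoenix with a single dataset. This is a minimal variation, assuming the reference dataset argument is optional.

```python
# Minimal variation (assumption: the reference dataset is optional).
session = px.launch_app(prod_ds)
```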

You can open Phoenix by copying and pasting the output of `session.url` into a new browser tab.

```python
session.url
```

Alternatively, you can open the Phoenix UI in your notebook with

```python
session.view()
```

When you're done, don't forget to close the app.

```python
px.close_app()
```

## Documentation

