Commit a27d055

docs: update markdown for px.Client().log_evaluations()

RogerHYang committed Feb 16, 2024
1 parent 91cd6e5 commit a27d055
Showing 4 changed files with 10 additions and 10 deletions.
8 changes: 4 additions & 4 deletions docs/how-to/define-your-schema/llm-evaluations.md
@@ -24,7 +24,7 @@ The evaluations dataframe can be sent to Phoenix as follows. Note that the name
 ```python
 from phoenix.trace import SpanEvaluations

-px.log_evaluations(
+px.Client().log_evaluations(
     SpanEvaluations(
         dataframe=qa_correctness_eval_df,
         eval_name="Q&A Correctness",
@@ -43,7 +43,7 @@ The evaluations dataframe can be sent to Phoenix as follows. Note that the name
 ```python
 from phoenix.trace import DocumentEvaluations

-px.log_evaluations(
+px.Client().log_evaluations(
     DocumentEvaluations(
         dataframe=document_relevance_eval_df,
         eval_name="Relevance",
@@ -53,10 +53,10 @@

 ## Logging Multiple Evaluation DataFrames

-Multiple evaluation datasets can be logged by the same `px.log_evaluations()` function call.
+Multiple evaluation datasets can be logged by the same `px.Client().log_evaluations()` function call.

 ```
-px.log_evaluations(
+px.Client().log_evaluations(
     SpanEvaluations(
         dataframe=qa_correctness_eval_df,
         eval_name="Q&A Correctness",
4 changes: 2 additions & 2 deletions docs/llm-evals/quickstart-retrieval-evals/README.md
@@ -62,7 +62,7 @@ qa_correctness_eval["score"] = (
 ).astype(int)

 # Logs the Evaluations back to the Phoenix User Interface (Optional)
-px.log_evaluations(
+px.Client().log_evaluations(
     SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_eval),
     SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_eval),
 )
@@ -100,7 +100,7 @@ retrieved_documents_eval["score"] = (
     retrieved_documents_eval.label[~retrieved_documents_eval.label.isna()] == "relevant"
 ).astype(int)

-px.log_evaluations(DocumentEvaluations(eval_name="Relevance", dataframe=retrieved_documents_eval))
+px.Client().log_evaluations(DocumentEvaluations(eval_name="Relevance", dataframe=retrieved_documents_eval))

 ```

4 changes: 2 additions & 2 deletions docs/quickstart/evals.md
@@ -117,11 +117,11 @@ Log your evaluations to your running Phoenix session.
 ```python
 from phoenix.trace import DocumentEvaluations, SpanEvaluations

-px.log_evaluations(
+px.Client().log_evaluations(
     SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_eval_df),
     SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_eval_df),
     DocumentEvaluations(eval_name="Relevance", dataframe=relevance_eval_df),
 )
-px.log_evaluations(DocumentEvaluations(eval_name="Relevance", dataframe=relevance_eval_df))
+px.Client().log_evaluations(DocumentEvaluations(eval_name="Relevance", dataframe=relevance_eval_df))
 ```

Your evaluations should now appear as annotations on your spans in Phoenix!
Expand Down
4 changes: 2 additions & 2 deletions docs/use-cases/rag-evaluation.md
@@ -500,7 +500,7 @@ We have now evaluated our RAG system's retrieval performance. Let's send these e
 ```python
 from phoenix.trace import DocumentEvaluations, SpanEvaluations

-px.log_evaluations(
+px.Client().log_evaluations(
     SpanEvaluations(dataframe=ndcg_at_2, eval_name="ndcg@2"),
     SpanEvaluations(dataframe=precision_at_2, eval_name="precision@2"),
     DocumentEvaluations(dataframe=retrieved_documents_relevance_df, eval_name="relevance"),
@@ -578,7 +578,7 @@ Since we have evaluated our RAG system's QA performance and Hallucinations perfo
 ```python
 from phoenix.trace import SpanEvaluations

-px.log_evaluations(
+px.Client().log_evaluations(
     SpanEvaluations(dataframe=qa_correctness_eval_df, eval_name="Q&A Correctness"),
     SpanEvaluations(dataframe=hallucination_eval_df, eval_name="Hallucination"),
 )
Expand Down
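Every hunk in this commit makes the same mechanical substitution: the module-level `px.log_evaluations(...)` helper is replaced by a call through a `Client` instance. A minimal sketch of the new call shape, assuming `arize-phoenix` is installed and a Phoenix server is reachable (the span IDs, labels, and the `log_qa_evals` helper below are illustrative placeholders, not part of the Phoenix API or this commit):

```python
import pandas as pd


def log_qa_evals(eval_df: pd.DataFrame) -> None:
    """Send span-level evaluations to Phoenix using the new Client-based entry point."""
    import phoenix as px
    from phoenix.trace import SpanEvaluations

    # Old style:  px.log_evaluations(...)
    # New style:  px.Client().log_evaluations(...)
    px.Client().log_evaluations(
        SpanEvaluations(dataframe=eval_df, eval_name="Q&A Correctness")
    )


# Evaluation dataframes are indexed by span ID and carry label/score columns.
qa_correctness_eval_df = pd.DataFrame(
    {"label": ["correct", "incorrect"], "score": [1, 0]},
    index=pd.Index(["span-1", "span-2"], name="context.span_id"),
)
```

Call `log_qa_evals(qa_correctness_eval_df)` while a Phoenix session is running; the dataframe construction only illustrates the expected shape of an evaluations dataframe.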
