Commit c8c7e15

[MLOB-4216] update submit evaluation deprecation warning (#32260)

* update submit evaluation deprecation warning
* update more instances of submit_evaluation documentation
* update next major version
* add missing arg and update notice

1 parent 5c62f26 commit c8c7e15

File tree

3 files changed: +9 −9 lines changed

- content/en/llm_observability/evaluations/external_evaluations.md
- content/en/llm_observability/evaluations/submit_nemo_evaluations.md
- content/en/llm_observability/instrumentation/sdk.md

content/en/llm_observability/evaluations/external_evaluations.md (2 additions & 2 deletions)

```diff
@@ -25,7 +25,7 @@ While LLM Observability provides a few out-of-the-box evaluations for your trace
 
 ## Submitting external evaluations with the SDK
 
-The LLM Observability SDK provides the methods `LLMObs.submit_evaluation_for()` and `LLMObs.export_span()` to help your traced LLM application submit external evaluations to LLM Observability. See the [Python][3] or [Node.js][4] SDK documentation for more details.
+The LLM Observability SDK provides the methods `LLMObs.submit_evaluation()` and `LLMObs.export_span()` to help your traced LLM application submit external evaluations to LLM Observability. See the [Python][3] or [Node.js][4] SDK documentation for more details.
 
 ### Example
 
@@ -97,4 +97,4 @@ You can use the evaluations API provided by LLM Observability to send evaluation
 [1]: /metrics/custom_metrics/#naming-custom-metrics
 [2]: /llm_observability/setup/api/?tab=model#evaluations-api
 [3]: /llm_observability/setup/sdk/python/#evaluations
-[4]: /llm_observability/setup/sdk/nodejs/#evaluations
+[4]: /llm_observability/setup/sdk/nodejs/#evaluations
```
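For context on the rename above: the documented method pairs an evaluation with a span context exported via `LLMObs.export_span()`. The following is a minimal sketch of the argument shape the page describes, using a hypothetical `build_evaluation` helper rather than the real ddtrace implementation, so it runs standalone (the `"score"`/`"categorical"` metric types follow the Python SDK docs; exact keywords may vary by ddtrace version):

```python
# Hypothetical helper illustrating the argument shape described for
# LLMObs.submit_evaluation(); NOT the actual ddtrace implementation.
def build_evaluation(span_context, label, metric_type, value):
    """Validate and assemble an external-evaluation payload.

    span_context -- dict with "trace_id" and "span_id", e.g. as returned
                    by LLMObs.export_span() per the Python SDK docs.
    metric_type  -- "score" (numeric) or "categorical" (string).
    """
    if metric_type not in ("score", "categorical"):
        raise ValueError("metric_type must be 'score' or 'categorical'")
    missing = {"trace_id", "span_id"} - set(span_context)
    if missing:
        raise ValueError(f"span_context missing keys: {missing}")
    return {
        "span_context": span_context,
        "label": label,
        "metric_type": metric_type,
        "value": value,
    }


# Example: a categorical evaluation attached to an exported span.
payload = build_evaluation(
    span_context={"trace_id": "123", "span_id": "456"},
    label="sentiment",
    metric_type="categorical",
    value="positive",
)
```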

content/en/llm_observability/evaluations/submit_nemo_evaluations.md (3 additions & 3 deletions)

```diff
@@ -47,7 +47,7 @@ To integrate Datadog's LLM Observability with NeMo Evaluator, submit your NeMo e
    In the snippet above, `span_context` is a dictionary containing `span_id` and `trace_id`.
 
 
-2. **Prepare your outputs file**. In this example, the outputs file is named `outputs.json`.
+2. **Prepare your outputs file**. In this example, the outputs file is named `outputs.json`.
 
    {{< highlight json "hl_lines=7">}}
    [
@@ -135,7 +135,7 @@ To integrate Datadog's LLM Observability with NeMo Evaluator, submit your NeMo e
             continue
 
         LLMObs.submit_evaluation(
-            span_context={
+            span={
                 "trace_id": meta['trace_id'],
                 "span_id": meta['span_id']
             },
@@ -165,4 +165,4 @@ You can view a breakdown of your NeMo Evaluator's model evaluation results in LL
 
 [1]: /llm_observability/setup/sdk/python
 [2]: /llm_observability/cluster_map
-[3]: https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
+[3]: https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
```
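The middle hunk renames the keyword from `span_context` to `span` inside the NeMo pairing loop. A standalone sketch of that loop's selection logic follows; the `outputs` records and `"nemo_score"` label are hypothetical stand-ins for the doc's `outputs.json`, and the `LLMObs.submit_evaluation()` call is replaced by collecting payloads so the sketch runs without ddtrace:

```python
# Sketch of the NeMo outputs pairing loop: each record may carry a
# "metadata" dict holding Datadog trace/span IDs; records without it
# cannot be joined to a span and are skipped (the `continue` in the
# original snippet). Data and label names here are hypothetical.
outputs = [
    {"metadata": {"trace_id": "t-1", "span_id": "s-1"}, "score": 0.92},
    {"score": 0.41},  # no metadata: skipped
    {"metadata": {"trace_id": "t-2", "span_id": "s-2"}, "score": 0.77},
]

evaluations = []
for record in outputs:
    meta = record.get("metadata")
    if meta is None:
        continue
    evaluations.append({
        # the commit renames this keyword from span_context to span
        "span": {"trace_id": meta["trace_id"], "span_id": meta["span_id"]},
        "label": "nemo_score",
        "metric_type": "score",
        "value": record["score"],
    })
```

In the real integration, each collected payload would be passed as keyword arguments to `LLMObs.submit_evaluation()` instead of being appended to a list.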

content/en/llm_observability/instrumentation/sdk.md (4 additions & 4 deletions)

```diff
@@ -1985,13 +1985,13 @@ llmCall = llmobs.wrap({ kind: 'llm', name: 'invokeLLM', modelName: 'claude', mod
 
 {{< tabs >}}
 {{% tab "Python" %}}
-`LLMObs.submit_evaluation_for()` can be used to submit your custom evaluation associated with a given span.
+`LLMObs.submit_evaluation()` can be used to submit your custom evaluation associated with a given span.
 
-<div class="alert alert-info"><code>LLMObs.submit_evaluation</code> is deprecated and will be removed in ddtrace 3.0.0. As an alternative, use <code>LLMObs.submit_evaluation_for</code>.</div>
+<div class="alert alert-info"><code>LLMObs.submit_evaluation_for</code> is deprecated and will be removed in the next major version of ddtrace (4.0). To migrate, rename your <code>LLMObs.submit_evaluation_for</code> calls with <code>LLMObs.submit_evaluation</code>.</div>
 
 **Note**: Custom evaluations are evaluators that you implement and host yourself. These differ from out-of-the-box evaluations, which are automatically computed by Datadog using built-in evaluators. To configure out-of-the-box evaluations for your application, use the [**LLM Observability** > **Settings** > **Evaluations**][1] page in Datadog.
 
-The `LLMObs.submit_evaluation_for()` method accepts the following arguments:
+The `LLMObs.submit_evaluation()` method accepts the following arguments:
 
 {{% collapse-content title="Arguments" level="h4" expanded=false id="submit-evals-arguments" %}}
 `label`
@@ -2053,7 +2053,7 @@ def llm_call():
         tags = {'msg_id': msg_id}
     )
 
-    LLMObs.submit_evaluation_for(
+    LLMObs.submit_evaluation(
         span_with_tag_value = {
             "tag_key": "msg_id",
             "tag_value": msg_id
```
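The second hunk above shows the tag-based form of the call, where `span_with_tag_value` joins the evaluation to a span by a tag key/value pair instead of exported trace/span IDs. A runnable sketch of that join follows, using a hypothetical in-memory span list in place of Datadog's backend resolution:

```python
# Sketch of tag-based evaluation association: the caller supplies
# span_with_tag_value = {"tag_key": ..., "tag_value": ...} and the
# backend resolves the span carrying that tag. The span store below
# is a hypothetical stand-in for Datadog's backend, not ddtrace code.
spans = [
    {"span_id": "s-1", "tags": {"msg_id": "msg-001"}},
    {"span_id": "s-2", "tags": {"msg_id": "msg-002"}},
]

def resolve_span(span_with_tag_value, span_store):
    """Return the first span whose tags match the given key/value pair."""
    key = span_with_tag_value["tag_key"]
    value = span_with_tag_value["tag_value"]
    for span in span_store:
        if span["tags"].get(key) == value:
            return span
    return None

match = resolve_span({"tag_key": "msg_id", "tag_value": "msg-002"}, spans)
```

This is why the doc's example tags the span with `msg_id` first: the evaluation submitted later can then be attached without holding onto the span object itself.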

0 commit comments