
LLM Semantic Conventions: Initial PR #825

Merged · 47 commits · Apr 16, 2024

Commits
8fe6a5f
chore: continuing work by cartermp
nirga Jan 12, 2024
a521fc1
Update to use Yaml model files
drobbins-msft Jan 22, 2024
5843c65
chore: fixes in yaml according to reviews
nirga Jan 23, 2024
0203aea
Merge branch 'main' into ai
nirga Jan 23, 2024
0891f91
chore: @lmolkova reviews
nirga Jan 12, 2024
bdc1982
Merge branch 'main' into ai
nirga Jan 28, 2024
d5a9753
Add OpenAI metrics
drewby Jan 28, 2024
0ef1c1b
Fix linting errors
drewby Jan 29, 2024
fd57c6c
Fix yamllint errors
drewby Jan 29, 2024
c80b80c
Regenerate markdown based on yaml model
drewby Jan 29, 2024
fdb3ba4
Merge pull request #2 from drewby/ai
nirga Jan 29, 2024
a844a28
Merge branch 'main' into ai
nirga Jan 30, 2024
e1b1d6a
minimal set of llm semconv
nirga Mar 19, 2024
902ac95
Merge branch 'main' into first-gen-ai
nirga Mar 19, 2024
5c6df3e
fix: prompt/completion format
nirga Mar 19, 2024
28d6100
Merge branch 'main' into first-gen-ai
nirga Mar 20, 2024
bcc3473
fix: lint and CI errors
nirga Mar 20, 2024
b7ddb90
fix: llm -> gen-ai
nirga Mar 20, 2024
7745569
fix: following @lmolkova review
nirga Mar 21, 2024
e8129d0
Merge branch 'main' into first-gen-ai
nirga Mar 22, 2024
42551ce
Update model/registry/gen-ai.yaml
nirga Mar 22, 2024
57aaf77
Update model/registry/gen-ai.yaml
nirga Mar 22, 2024
e49c3db
Update .github/CODEOWNERS
nirga Mar 22, 2024
cef4ca2
Update model/trace/gen-ai.yaml
nirga Mar 22, 2024
3265778
Update model/trace/gen-ai.yaml
nirga Mar 22, 2024
c0fdb9b
Update model/registry/gen-ai.yaml
nirga Mar 22, 2024
9b25c20
Update model/registry/gen-ai.yaml
nirga Mar 22, 2024
fa15a8f
Update model/trace/gen-ai.yaml
nirga Mar 22, 2024
677c86a
Update model/trace/gen-ai.yaml
nirga Mar 22, 2024
61ffd91
Update model/registry/gen-ai.yaml
nirga Mar 22, 2024
ddcd1ce
Merge branch 'main' into first-gen-ai
nirga Mar 22, 2024
7f8f1e8
fix: lint; regeneration
nirga Mar 22, 2024
74426de
Update docs/gen-ai/README.md
nirga Mar 22, 2024
87cbd17
fix: opt-in prompts / completions
nirga Mar 22, 2024
7662655
Update docs/gen-ai/README.md
nirga Mar 22, 2024
94ee6ea
Update docs/attributes-registry/gen-ai.md
nirga Mar 22, 2024
d5d5dab
Update model/registry/gen-ai.yaml
nirga Mar 22, 2024
3672d94
Update docs/gen-ai/README.md
nirga Mar 22, 2024
68ad466
fix: lint
nirga Mar 22, 2024
f1fe748
Apply suggestions from code review
nirga Apr 1, 2024
0c058cb
Merge branch 'main' into first-gen-ai
nirga Apr 1, 2024
17c4d01
chore: regenerated tables
nirga Apr 1, 2024
d2e4bef
chore: top-level README
nirga Apr 1, 2024
4ec72e5
Apply suggestions from code review
nirga Apr 10, 2024
c755d78
Merge branch 'main' into first-gen-ai
nirga Apr 16, 2024
a8ebe22
fix: PR reviews
nirga Apr 16, 2024
4ee7433
Merge branch 'main' into first-gen-ai
nirga Apr 16, 2024
22 changes: 22 additions & 0 deletions .chloggen/first-gen-ai.yaml
@@ -0,0 +1,22 @@
# Use this changelog template to create an entry for release notes.
#
# If your change doesn't affect end users you should instead start
# your pull request title with [chore] or use the "Skip Changelog" label.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: new_component

# The name of the area of concern in the attributes-registry, (e.g. http, cloud, db)
component: gen-ai

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Introducing semantic conventions for LLM clients.

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
# The values here must be integers.
issues: [327]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:
7 changes: 7 additions & 0 deletions .github/CODEOWNERS
@@ -78,4 +78,11 @@
/model/metrics/dotnet/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-dotnet-approver @open-telemetry/semconv-http-approvers
/docs/dotnet/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-dotnet-approver @open-telemetry/semconv-http-approvers

# Gen-AI semantic conventions approvers
/model/registry/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/model/metrics/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/model/trace/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/docs/gen-ai/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/docs/attributes-registry/gen-ai.md @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers

# TODO - Add semconv area experts
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/bug_report.yaml
@@ -33,6 +33,7 @@ body:
- area:error
- area:exception
- area:faas
- area:gen-ai
- area:host
- area:http
- area:k8s
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/change_proposal.yaml
@@ -26,6 +26,7 @@ body:
- area:error
- area:exception
- area:faas
- area:gen-ai
- area:host
- area:http
- area:k8s
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/new-conventions.yaml
@@ -35,6 +35,7 @@ body:
- area:error
- area:exception
- area:faas
- area:gen-ai
- area:host
- area:http
- area:k8s
58 changes: 58 additions & 0 deletions docs/attributes-registry/gen-ai.md
@@ -0,0 +1,58 @@
<!--- Hugo front matter used to generate the website version of this page:
--->

# Large Language Model (LLM)

<!-- toc -->

- [Generic LLM Attributes](#generic-llm-attributes)
- [Request Attributes](#request-attributes)
- [Response Attributes](#response-attributes)
- [Event Attributes](#event-attributes)

<!-- tocstop -->

## Generic LLM Attributes

### Request Attributes

<!-- semconv registry.llm(omit_requirement_level,tag=llm-generic-request) -->
| Attribute | Type | Description | Examples |
|---|---|---|---|
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the LLM generates for a request. | `100` |
| `gen_ai.request.model` | string | The name of the LLM a request is being made to. | `gpt-4` |
| `gen_ai.request.temperature` | double | The temperature setting for the LLM request. | `0.0` |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the LLM request. | `1.0` |
| `gen_ai.system` | string | The name of the LLM foundation model vendor. | `openai` |

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used.

| Value | Description |
|---|---|
| `openai` | OpenAI |
<!-- endsemconv -->

### Response Attributes

<!-- semconv registry.llm(omit_requirement_level,tag=llm-generic-response) -->
| Attribute | Type | Description | Examples |
|---|---|---|---|
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `[stop]` |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` |
| `gen_ai.response.model` | string | The name of the LLM serving a response. | `gpt-4-0613` |
| `gen_ai.usage.completion_tokens` | int | The number of tokens used in the LLM response (completion). | `180` |
| `gen_ai.usage.prompt_tokens` | int | The number of tokens used in the LLM prompt. | `100` |
<!-- endsemconv -->

### Event Attributes

<!-- semconv registry.llm(omit_requirement_level,tag=llm-generic-events) -->
| Attribute | Type | Description | Examples |
|---|---|---|---|
| `gen_ai.completion` | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` |
| `gen_ai.prompt` | string | The full prompt sent to an LLM. [2] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` |

**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)

**[2]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
<!-- endsemconv -->
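
As a minimal sketch of the recommended encoding, the snippet below serializes OpenAI-style message lists into the JSON strings these attributes expect (the message contents are illustrative, taken from the examples above):

```python
import json

# Illustrative OpenAI-style message lists, matching the examples above.
prompt_messages = [{"role": "user", "content": "What is the capital of France?"}]
completion_messages = [{"role": "assistant", "content": "The capital of France is Paris."}]

# gen_ai.prompt and gen_ai.completion are string attributes, so the
# message lists are serialized to JSON strings before being recorded.
gen_ai_prompt = json.dumps(prompt_messages)
gen_ai_completion = json.dumps(completion_messages)
```
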
20 changes: 20 additions & 0 deletions docs/gen-ai/README.md
@@ -0,0 +1,20 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: AI
path_base_for_github_subdir:
from: content/en/docs/specs/semconv/ai/_index.md
to: gen-ai/README.md
--->

# Semantic Conventions for AI systems

**Status**: [Experimental][DocumentStatus]

This document defines semantic conventions for the following kinds of AI systems:

* LLMs

Semantic conventions for LLM operations are defined for the following signals:

* [LLM Spans](llm-spans.md): Semantic Conventions for LLM requests - *spans*.

[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.26.0/specification/document-status.md
84 changes: 84 additions & 0 deletions docs/gen-ai/llm-spans.md
@@ -0,0 +1,84 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: LLM Calls
--->

# Semantic Conventions for LLM requests

**Status**: [Experimental][DocumentStatus]

<!-- Re-generate TOC with `markdown-toc --no-first-h1 -i` -->

<!-- toc -->

- [Configuration](#configuration)
- [LLM Request attributes](#llm-request-attributes)
- [Events](#events)

<!-- tocstop -->

A request to an LLM is modeled as a span in a trace.

The **span name** SHOULD be set to a low-cardinality value describing the operation made to an LLM,
for example the API name, such as [Create chat completion](https://platform.openai.com/docs/api-reference/chat/create).

## Configuration

Instrumentations for LLMs MAY capture prompts and completions.
Instrumentations that support it MUST offer the ability to turn off capture of prompts and completions, for three primary reasons:

1. Data privacy concerns. End users of LLM applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some LLMs allow for extremely large context windows that end users may take full advantage of.
3. Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.

By default, instrumentations SHOULD NOT capture prompts and completions.
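
As a sketch of what such an opt-in could look like in an instrumentation, assuming a hypothetical environment variable (these conventions do not define one):

```python
import os

# Hypothetical opt-in switch; content capture stays off unless explicitly
# enabled, matching the capture-off-by-default guidance above.
CAPTURE_CONTENT = os.getenv("OTEL_GENAI_CAPTURE_CONTENT", "false").lower() == "true"
```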

## LLM Request attributes

These attributes track input data and metadata for a request to an LLM. Each attribute represents a concept that is common to most LLMs.

<!-- semconv gen_ai.request -->
| Attribute | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| [`gen_ai.request.max_tokens`](../attributes-registry/gen-ai.md) | int | The maximum number of tokens the LLM generates for a request. | `100` | Recommended |
| [`gen_ai.request.model`](../attributes-registry/gen-ai.md) | string | The name of the LLM a request is being made to. [1] | `gpt-4` | Required |
| [`gen_ai.request.temperature`](../attributes-registry/gen-ai.md) | double | The temperature setting for the LLM request. | `0.0` | Recommended |
| [`gen_ai.request.top_p`](../attributes-registry/gen-ai.md) | double | The top_p sampling setting for the LLM request. | `1.0` | Recommended |
| [`gen_ai.response.finish_reasons`](../attributes-registry/gen-ai.md) | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `[stop]` | Recommended |
| [`gen_ai.response.id`](../attributes-registry/gen-ai.md) | string | The unique identifier for the completion. | `chatcmpl-123` | Recommended |
| [`gen_ai.response.model`](../attributes-registry/gen-ai.md) | string | The name of the LLM serving a response. [2] | `gpt-4-0613` | Conditionally Required: if response was received |
| [`gen_ai.system`](../attributes-registry/gen-ai.md) | string | The name of the LLM foundation model vendor. [3] | `openai` | Required |
| [`gen_ai.usage.completion_tokens`](../attributes-registry/gen-ai.md) | int | The number of tokens used in the LLM response (completion). | `180` | Recommended |
| [`gen_ai.usage.prompt_tokens`](../attributes-registry/gen-ai.md) | int | The number of tokens used in the LLM prompt. | `100` | Recommended |

**[1]:** The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

**[2]:** The name of the LLM serving a response. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

**[3]:** If not using a vendor-supplied model, provide a custom friendly name, such as the name of the company or project. If the instrumentation reports any attributes specific to a custom model, the value provided in `gen_ai.system` SHOULD match the custom attribute namespace segment. For example, if `gen_ai.system` is set to `the_best_llm`, custom attributes should be added in the `gen_ai.the_best_llm.*` namespace. If none of the above options apply, the instrumentation should set `_OTHER`.
<!-- endsemconv -->
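
For illustration, a minimal sketch with the OpenTelemetry Python API showing how these attributes might be set on such a span (the attribute values are the examples from the table above; the model call itself is elided):

```python
from opentelemetry import trace

tracer = trace.get_tracer("gen-ai-instrumentation-example")

# The span name stays low-cardinality, e.g. the API operation name.
with tracer.start_as_current_span("chat_completions") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.request.max_tokens", 100)
    span.set_attribute("gen_ai.request.temperature", 0.0)
    span.set_attribute("gen_ai.request.top_p", 1.0)

    # ... perform the LLM call here ...

    # Response attributes; gen_ai.response.model is set only once a
    # response was received, per its requirement level.
    span.set_attribute("gen_ai.response.id", "chatcmpl-123")
    span.set_attribute("gen_ai.response.model", "gpt-4-0613")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
    span.set_attribute("gen_ai.usage.prompt_tokens", 100)
    span.set_attribute("gen_ai.usage.completion_tokens", 180)
```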

## Events

In the lifetime of an LLM span, events for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation.

<!-- semconv gen_ai.content.prompt -->
The event name MUST be `gen_ai.content.prompt`.

| Attribute | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| [`gen_ai.prompt`](../attributes-registry/gen-ai.md) | string | The full prompt sent to an LLM. [1] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | Conditionally Required: if and only if corresponding event is enabled |

**[1]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
<!-- endsemconv -->

<!-- semconv gen_ai.content.completion -->
The event name MUST be `gen_ai.content.completion`.

| Attribute | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| [`gen_ai.completion`](../attributes-registry/gen-ai.md) | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | Conditionally Required: if and only if corresponding event is enabled |

**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
<!-- endsemconv -->
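
Continuing the sketch above, the events could be attached to the active span only when content capture is enabled (the `capture_content` flag stands in for whatever configuration mechanism an instrumentation provides):

```python
import json

def record_content_events(span, prompt_messages, completion_messages, capture_content: bool) -> None:
    # Emit the events only when the instrumentation's opt-in is enabled.
    if not capture_content:
        return
    span.add_event(
        "gen_ai.content.prompt",
        attributes={"gen_ai.prompt": json.dumps(prompt_messages)},
    )
    span.add_event(
        "gen_ai.content.completion",
        attributes={"gen_ai.completion": json.dumps(completion_messages)},
    )
```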

[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md
74 changes: 74 additions & 0 deletions model/registry/gen-ai.yaml
@@ -0,0 +1,74 @@
groups:
- id: registry.llm
prefix: gen_ai
type: attribute_group
brief: >
This document defines the attributes used to describe telemetry in the context of LLM (Large Language Models) requests and responses.
attributes:
- id: system
type:
allow_custom_values: true
members:
- id: openai
value: "openai"
brief: 'OpenAI'
brief: The name of the LLM foundation model vendor.
examples: 'openai'
tag: llm-generic-request
- id: request.model
type: string
brief: The name of the LLM a request is being made to.
examples: 'gpt-4'
tag: llm-generic-request
- id: request.max_tokens
type: int
brief: The maximum number of tokens the LLM generates for a request.
examples: [100]
tag: llm-generic-request
- id: request.temperature
type: double
brief: The temperature setting for the LLM request.
examples: [0.0]
tag: llm-generic-request
- id: request.top_p
type: double
brief: The top_p sampling setting for the LLM request.
examples: [1.0]
tag: llm-generic-request
- id: response.id
type: string
brief: The unique identifier for the completion.
examples: ['chatcmpl-123']
tag: llm-generic-response
- id: response.model
type: string
brief: The name of the LLM serving a response.
examples: ['gpt-4-0613']
tag: llm-generic-response
- id: response.finish_reasons
type: string[]
brief: Array of reasons the model stopped generating tokens, corresponding to each generation received.
examples: ['stop']
tag: llm-generic-response
- id: usage.prompt_tokens
type: int
brief: The number of tokens used in the LLM prompt.
examples: [100]
tag: llm-generic-response
- id: usage.completion_tokens
type: int
brief: The number of tokens used in the LLM response (completion).
examples: [180]
tag: llm-generic-response
- id: prompt
type: string
brief: The full prompt sent to an LLM.
note: It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
examples: ["[{'role': 'user', 'content': 'What is the capital of France?'}]"]
tag: llm-generic-events
- id: completion
type: string
brief: The full response received from the LLM.
note: It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
examples: ["[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]"]
tag: llm-generic-events
69 changes: 69 additions & 0 deletions model/trace/gen-ai.yaml
@@ -0,0 +1,69 @@
groups:
- id: gen_ai.request
type: span
brief: >
A request to an LLM is modeled as a span in a trace. The span name should be a low cardinality value representing the request made to an LLM, like the name of the API endpoint being called.
attributes:
- ref: gen_ai.system
requirement_level: required
note: >
If not using a vendor-supplied model, provide a custom friendly name, such as a name of the company or project.
If the instrumentation reports any attributes specific to a custom model, the value provided in the `gen_ai.system` SHOULD match the custom attribute namespace segment.
For example, if `gen_ai.system` is set to `the_best_llm`, custom attributes should be added in the `gen_ai.the_best_llm.*` namespace.
If none of the above options apply, the instrumentation should set `_OTHER`.
- ref: gen_ai.request.model
requirement_level: required
note: >
The name of the LLM a request is being made to. If the LLM is supplied by a vendor,
then the value must be the exact name of the model requested. If the LLM is a fine-tuned
custom model, the value should have a more specific name than the base model that's been fine-tuned.
- ref: gen_ai.request.max_tokens
requirement_level: recommended
- ref: gen_ai.request.temperature
requirement_level: recommended
- ref: gen_ai.request.top_p
requirement_level: recommended
- ref: gen_ai.response.id
requirement_level: recommended
- ref: gen_ai.response.model
requirement_level:
conditionally_required: if response was received
note: >
The name of the LLM serving a response. If the LLM is supplied by a vendor,
then the value must be the exact name of the model actually used. If the LLM is a
fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
- ref: gen_ai.response.finish_reasons
requirement_level: recommended
- ref: gen_ai.usage.prompt_tokens
requirement_level: recommended
- ref: gen_ai.usage.completion_tokens
requirement_level: recommended
events:
- gen_ai.content.prompt
- gen_ai.content.completion

- id: gen_ai.content.prompt
name: gen_ai.content.prompt
type: event
brief: >
In the lifetime of an LLM span, events for prompts sent and completions received
may be created, depending on the configuration of the instrumentation.
attributes:
- ref: gen_ai.prompt
requirement_level:
conditionally_required: if and only if corresponding event is enabled
note: >
It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)

- id: gen_ai.content.completion
name: gen_ai.content.completion
type: event
brief: >
In the lifetime of an LLM span, events for prompts sent and completions received
may be created, depending on the configuration of the instrumentation.
attributes:
- ref: gen_ai.completion
requirement_level:
conditionally_required: if and only if corresponding event is enabled
note: >
It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)