LLM Semantic Conventions: Initial PR (#825)
Co-authored-by: Drew Robbins <drobbins@microsoft.com>
Co-authored-by: Drew Robbins <drew@drewby.com>
Co-authored-by: Liudmila Molkova <limolkova@microsoft.com>
Co-authored-by: Phillip Carter <pcarter@fastmail.com>
Co-authored-by: Patrice Chalin <chalin@users.noreply.github.com>
6 people authored Apr 16, 2024
1 parent f12a4d3 commit 1c93c94
Showing 11 changed files with 356 additions and 0 deletions.
22 changes: 22 additions & 0 deletions .chloggen/first-gen-ai.yaml
@@ -0,0 +1,22 @@
# Use this changelog template to create an entry for release notes.
#
# If your change doesn't affect end users you should instead start
# your pull request title with [chore] or use the "Skip Changelog" label.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: new_component

# The name of the area of concern in the attributes-registry (e.g. http, cloud, db)
component: gen-ai

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Introducing semantic conventions for GenAI clients.

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
# The values here must be integers.
issues: [327]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:
7 changes: 7 additions & 0 deletions .github/CODEOWNERS
@@ -78,4 +78,11 @@
/model/metrics/dotnet/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-dotnet-approver @open-telemetry/semconv-http-approvers
/docs/dotnet/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-dotnet-approver @open-telemetry/semconv-http-approvers

# Gen-AI semantic conventions approvers
/model/registry/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/model/metrics/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/model/trace/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/docs/gen-ai/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
/docs/attributes-registry/llm.md @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers

# TODO - Add semconv area experts
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/bug_report.yaml
@@ -41,6 +41,7 @@ body:
- area:feature-flag
- area:file
- area:gcp
- area:gen-ai
- area:graphql
- area:heroku
- area:host
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/change_proposal.yaml
@@ -34,6 +34,7 @@ body:
- area:feature-flag
- area:file
- area:gcp
- area:gen-ai
- area:graphql
- area:heroku
- area:host
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/new-conventions.yaml
@@ -43,6 +43,7 @@ body:
- area:feature-flag
- area:file
- area:gcp
- area:gen-ai
- area:graphql
- area:heroku
- area:host
1 change: 1 addition & 0 deletions docs/README.md
@@ -27,6 +27,7 @@ Semantic Conventions are defined for the following areas:
* [Exceptions](exceptions/README.md): Semantic Conventions for exceptions.
* [FaaS](faas/README.md): Semantic Conventions for Function as a Service (FaaS) operations.
* [Feature Flags](feature-flags/README.md): Semantic Conventions for feature flag evaluations.
* [Generative AI](gen-ai/README.md): Semantic Conventions for generative AI (LLM, etc.) operations.
* [GraphQL](graphql/graphql-spans.md): Semantic Conventions for GraphQL implementations.
* [HTTP](http/README.md): Semantic Conventions for HTTP client and server operations.
* [Messaging](messaging/README.md): Semantic Conventions for messaging operations and systems.
59 changes: 59 additions & 0 deletions docs/attributes-registry/llm.md
@@ -0,0 +1,59 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: LLM
--->

# Large Language Model

<!-- toc -->

- [Generic LLM Attributes](#generic-llm-attributes)
- [Request Attributes](#request-attributes)
- [Response Attributes](#response-attributes)
- [Event Attributes](#event-attributes)

<!-- tocstop -->

## Generic LLM Attributes

### Request Attributes

<!-- semconv registry.gen_ai(omit_requirement_level,tag=llm-generic-request) -->
| Attribute | Type | Description | Examples | Stability |
|---|---|---|---|---|
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the LLM generates for a request. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.model` | string | The name of the LLM a request is being made to. | `gpt-4` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.temperature` | double | The temperature setting for the LLM request. | `0.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.system` | string | The name of the LLM foundation model vendor. | `openai` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `openai` | OpenAI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
<!-- endsemconv -->

### Response Attributes

<!-- semconv registry.gen_ai(omit_requirement_level,tag=llm-generic-response) -->
| Attribute | Type | Description | Examples | Stability |
|---|---|---|---|---|
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `[stop]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.model` | string | The name of the LLM a response was generated from. | `gpt-4-0613` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.completion_tokens` | int | The number of tokens used in the LLM response (completion). | `180` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.prompt_tokens` | int | The number of tokens used in the LLM prompt. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
<!-- endsemconv -->

### Event Attributes

<!-- semconv registry.gen_ai(omit_requirement_level,tag=llm-generic-events) -->
| Attribute | Type | Description | Examples | Stability |
|---|---|---|---|---|
| `gen_ai.completion` | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.prompt` | string | The full prompt sent to an LLM. [2] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).

**[2]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
<!-- endsemconv -->
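
For illustration, a minimal Python sketch (the language choice is an assumption; any language applies) of producing the RECOMMENDED JSON-formatted values for `gen_ai.prompt` and `gen_ai.completion`:

```python
import json

# Chat messages in the OpenAI messages format.
prompt_messages = [{"role": "user", "content": "What is the capital of France?"}]
completion_messages = [
    {"role": "assistant", "content": "The capital of France is Paris."}
]

# Serialize to JSON strings before recording them as event attributes.
gen_ai_prompt = json.dumps(prompt_messages)
gen_ai_completion = json.dumps(completion_messages)
```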
25 changes: 25 additions & 0 deletions docs/gen-ai/README.md
@@ -0,0 +1,25 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: Generative AI
path_base_for_github_subdir:
from: tmp/semconv/docs/gen-ai/_index.md
to: gen-ai/README.md
--->

# Semantic Conventions for Generative AI systems

**Status**: [Experimental][DocumentStatus]

**Warning**:
The semantic conventions for GenAI and LLM are currently in development.
We encourage instrumentation library and telemetry consumer developers to
use the conventions in limited, non-critical workloads and to share their feedback.

This document defines semantic conventions for the following kinds of Generative AI systems:

* LLMs

Semantic conventions for LLM operations are defined for the following signals:

* [LLM Spans](llm-spans.md): Semantic Conventions for LLM requests - *spans*.

[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.26.0/specification/document-status.md
84 changes: 84 additions & 0 deletions docs/gen-ai/llm-spans.md
@@ -0,0 +1,84 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: LLM requests
--->

# Semantic Conventions for LLM requests

**Status**: [Experimental][DocumentStatus]

<!-- Re-generate TOC with `markdown-toc --no-first-h1 -i` -->

<!-- toc -->

- [Configuration](#configuration)
- [LLM Request attributes](#llm-request-attributes)
- [Events](#events)

<!-- tocstop -->

A request to an LLM is modeled as a span in a trace.

**Span kind:** MUST always be `CLIENT`.

The **span name** SHOULD be set to a low cardinality value describing an operation made to an LLM.
For example, the API name such as [Create chat completion](https://platform.openai.com/docs/api-reference/chat/create) could be represented as `ChatCompletions gpt-4` to include the API and the LLM.
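
A minimal sketch of such a span using the OpenTelemetry Python API; the tracer name is an illustrative assumption:

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

# Low-cardinality span name: the API name plus the requested model.
with tracer.start_as_current_span(
    "ChatCompletions gpt-4", kind=trace.SpanKind.CLIENT
) as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    # ... perform the request to the LLM here ...
```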

## Configuration

Instrumentations for LLMs MAY capture prompts and completions.
Instrumentations that support it MUST offer the ability to turn off capture of prompts and completions, for three primary reasons (a sketch of such a control follows the list):

1. Data privacy concerns. End users of LLM applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some LLMs allow for extremely large context windows that end users may take full advantage of.
3. Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.
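
A sketch of an opt-in control; the environment variable name below is a hypothetical choice, since the conventions require such a switch but do not standardize its name:

```python
import os

def content_capture_enabled() -> bool:
    """Return True only when prompt/completion capture is explicitly opted in.

    The variable name is an illustrative assumption, not part of the conventions.
    """
    value = os.environ.get("OTEL_INSTRUMENTATION_GENAI_CAPTURE_CONTENT", "false")
    return value.strip().lower() == "true"
```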

## LLM Request attributes

These attributes track input data and metadata for a request to an LLM. Each attribute represents a concept that is common to most LLMs.

<!-- semconv gen_ai.request -->
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.request.model`](../attributes-registry/llm.md) | string | The name of the LLM a request is being made to. [1] | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.system`](../attributes-registry/llm.md) | string | The name of the LLM foundation model vendor. [2] | `openai` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.max_tokens`](../attributes-registry/llm.md) | int | The maximum number of tokens the LLM generates for a request. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.temperature`](../attributes-registry/llm.md) | double | The temperature setting for the LLM request. | `0.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.top_p`](../attributes-registry/llm.md) | double | The top_p sampling setting for the LLM request. | `1.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.finish_reasons`](../attributes-registry/llm.md) | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `[stop]` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.id`](../attributes-registry/llm.md) | string | The unique identifier for the completion. | `chatcmpl-123` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response was generated from. [3] | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.completion_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM response (completion). | `180` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.prompt_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM prompt. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

**[2]:** If not using a vendor-supplied model, provide a custom friendly name, such as the name of the company or project. If the instrumentation reports any attributes specific to a custom model, the value provided in `gen_ai.system` SHOULD match the custom attribute namespace segment. For example, if `gen_ai.system` is set to `the_best_llm`, custom attributes should be added in the `gen_ai.the_best_llm.*` namespace. If none of the above options apply, the instrumentation should set `_OTHER`.

**[3]:** If available. The name of the LLM serving a response. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
<!-- endsemconv -->
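
As an illustration, a sketch of recording the response attributes once a response arrives; the `response` parameter is assumed to follow an OpenAI-style chat completions payload:

```python
def record_response_attributes(span, response):
    # `response` shape is an assumption (OpenAI chat completions payload).
    span.set_attribute("gen_ai.response.id", response.id)
    span.set_attribute("gen_ai.response.model", response.model)
    span.set_attribute(
        "gen_ai.response.finish_reasons",
        [choice.finish_reason for choice in response.choices],
    )
    span.set_attribute("gen_ai.usage.prompt_tokens", response.usage.prompt_tokens)
    span.set_attribute(
        "gen_ai.usage.completion_tokens", response.usage.completion_tokens
    )
```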

## Events

In the lifetime of an LLM span, events for the prompt sent and the completion received MAY be created, depending on the configuration of the instrumentation.

<!-- semconv gen_ai.content.prompt -->
The event name MUST be `gen_ai.content.prompt`.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.prompt`](../attributes-registry/llm.md) | string | The full prompt sent to an LLM. [1] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
<!-- endsemconv -->

<!-- semconv gen_ai.content.completion -->
The event name MUST be `gen_ai.content.completion`.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.completion`](../attributes-registry/llm.md) | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
<!-- endsemconv -->
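
A sketch of emitting both events, reusing the opt-in check from the Configuration section; apart from the event names and attribute keys, the identifiers are assumptions:

```python
import json

def record_content_events(span, prompt_messages, completion_messages):
    if not content_capture_enabled():  # see the Configuration sketch above
        return
    span.add_event(
        "gen_ai.content.prompt",
        attributes={"gen_ai.prompt": json.dumps(prompt_messages)},
    )
    span.add_event(
        "gen_ai.content.completion",
        attributes={"gen_ai.completion": json.dumps(completion_messages)},
    )
```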

[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md
87 changes: 87 additions & 0 deletions model/registry/gen-ai.yaml
@@ -0,0 +1,87 @@
groups:
- id: registry.gen_ai
prefix: gen_ai
type: attribute_group
brief: >
This document defines the attributes used to describe telemetry in the context of LLM (Large Language Model) requests and responses.
attributes:
- id: system
stability: experimental
type:
allow_custom_values: true
members:
- id: openai
stability: experimental
value: "openai"
brief: 'OpenAI'
brief: The name of the LLM foundation model vendor.
examples: 'openai'
tag: llm-generic-request
- id: request.model
stability: experimental
type: string
brief: The name of the LLM a request is being made to.
examples: 'gpt-4'
tag: llm-generic-request
- id: request.max_tokens
stability: experimental
type: int
brief: The maximum number of tokens the LLM generates for a request.
examples: [100]
tag: llm-generic-request
- id: request.temperature
stability: experimental
type: double
brief: The temperature setting for the LLM request.
examples: [0.0]
tag: llm-generic-request
- id: request.top_p
stability: experimental
type: double
brief: The top_p sampling setting for the LLM request.
examples: [1.0]
tag: llm-generic-request
- id: response.id
stability: experimental
type: string
brief: The unique identifier for the completion.
examples: ['chatcmpl-123']
tag: llm-generic-response
- id: response.model
stability: experimental
type: string
brief: The name of the LLM a response was generated from.
examples: ['gpt-4-0613']
tag: llm-generic-response
- id: response.finish_reasons
stability: experimental
type: string[]
brief: Array of reasons the model stopped generating tokens, corresponding to each generation received.
examples: ['stop']
tag: llm-generic-response
- id: usage.prompt_tokens
stability: experimental
type: int
brief: The number of tokens used in the LLM prompt.
examples: [100]
tag: llm-generic-response
- id: usage.completion_tokens
stability: experimental
type: int
brief: The number of tokens used in the LLM response (completion).
examples: [180]
tag: llm-generic-response
- id: prompt
stability: experimental
type: string
brief: The full prompt sent to an LLM.
note: It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
examples: ["[{'role': 'user', 'content': 'What is the capital of France?'}]"]
tag: llm-generic-events
- id: completion
stability: experimental
type: string
brief: The full response received from the LLM.
note: It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
examples: ["[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]"]
tag: llm-generic-events
