LLM Semantic Conventions: Initial PR #825
Semantic Conventions for LLM requests

Status: Experimental

A request to an LLM is modeled as a span in a trace.

Span kind: MUST always be `CLIENT`.

The span name SHOULD be set to a low-cardinality value describing the operation made to the LLM. For example, an API name such as Create chat completion could be represented as `ChatCompletions gpt-4` to include both the API and the LLM.

Configuration

Instrumentations for LLMs MAY capture prompts and completions. Instrumentations that support it MUST offer the ability to turn off capture of prompts and completions. This is for three primary reasons:
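The rules above can be sketched with a minimal, hypothetical span record. This is only an illustration of the naming, span-kind, and opt-in capture requirements; a real instrumentation would use the OpenTelemetry SDK rather than plain dicts, and `llm_request_span` is an invented helper name.

```python
import json

# Hypothetical minimal span record, shown only to illustrate the rules above;
# a real instrumentation would use the OpenTelemetry SDK rather than dicts.
def llm_request_span(api_name, model, capture_content=False, prompt=None):
    span = {
        "kind": "CLIENT",               # span kind MUST always be CLIENT
        "name": f"{api_name} {model}",  # low-cardinality: API name plus model
        "attributes": {"gen_ai.request.model": model},
        "events": [],
    }
    # Capture of prompt content MUST be possible to turn off; here it is
    # off by default and recorded only when explicitly enabled.
    if capture_content and prompt is not None:
        span["events"].append({
            "name": "gen_ai.content.prompt",
            "attributes": {"gen_ai.prompt": json.dumps(prompt)},
        })
    return span

span = llm_request_span("ChatCompletions", "gpt-4")
```

With capture disabled (the default here), the span carries only the request metadata and no content events.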
LLM Request attributes

These attributes track input data and metadata for a request to an LLM. Each attribute represents a concept that is common to most LLMs.

| Attribute | Example | Requirement Level |
|---|---|---|
| `gen_ai.request.model` [1] | `gpt-4` | Required |
| `gen_ai.system` [2] | `openai` | Required |
| `gen_ai.request.max_tokens` | `100` | Recommended |
| `gen_ai.request.temperature` | `0.0` | Recommended |
| `gen_ai.request.top_p` | `1.0` | Recommended |
| `gen_ai.response.finish_reasons` | `[stop]` | Recommended |
| `gen_ai.response.id` | `chatcmpl-123` | Recommended |
| `gen_ai.response.model` [3] | `gpt-4-0613` | Recommended |
| `gen_ai.usage.completion_tokens` | `180` | Recommended |
| `gen_ai.usage.prompt_tokens` | `100` | Recommended |
[1]: The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

[2]: If not using a vendor-supplied model, provide a custom friendly name, such as the name of the company or project. If the instrumentation reports any attributes specific to a custom model, the value provided in `gen_ai.system` SHOULD match the custom attribute namespace segment. For example, if `gen_ai.system` is set to `the_best_llm`, custom attributes should be added in the `gen_ai.the_best_llm.*` namespace. If none of the above options apply, the instrumentation should set `_OTHER`.

[3]: If available. The name of the LLM serving a response. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
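A sketch of how an instrumentation might assemble these request attributes. `request_attributes` and `KNOWN_SYSTEMS` are invented names for illustration, and the `_OTHER` fallback here is deliberately simplified: per footnote [2], a real instrumentation would prefer a custom friendly name for a custom model before falling back to `_OTHER`.

```python
# Assumption: the set of systems this hypothetical instrumentation recognizes.
KNOWN_SYSTEMS = {"openai"}

def request_attributes(system, model,
                       max_tokens=None, temperature=None, top_p=None):
    attrs = {
        # Simplified fallback: unrecognized systems become "_OTHER" (a real
        # instrumentation would first try a custom friendly name).
        "gen_ai.system": system if system in KNOWN_SYSTEMS else "_OTHER",
        "gen_ai.request.model": model,  # exact name of the model requested
    }
    # Recommended attributes are set only when the request actually used them.
    if max_tokens is not None:
        attrs["gen_ai.request.max_tokens"] = max_tokens
    if temperature is not None:
        attrs["gen_ai.request.temperature"] = temperature
    if top_p is not None:
        attrs["gen_ai.request.top_p"] = top_p
    return attrs

attrs = request_attributes("openai", "gpt-4", max_tokens=100, temperature=0.0)
```

Omitting the optional parameters simply leaves the Recommended attributes off the span rather than recording placeholder values.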
Events

In the lifetime of an LLM span, an event for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation.

The event name MUST be `gen_ai.content.prompt`.

| Attribute | Example | Requirement Level |
|---|---|---|
| `gen_ai.prompt` [1] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | Conditionally Required: if and only if the corresponding event is enabled |

[1]: It's RECOMMENDED to format prompts as a JSON string matching the OpenAI messages format.

The event name MUST be `gen_ai.content.completion`.

| Attribute | Example | Requirement Level |
|---|---|---|
| `gen_ai.completion` [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | Conditionally Required: if and only if the corresponding event is enabled |

[1]: It's RECOMMENDED to format completions as a JSON string matching the OpenAI messages format.
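The two events above can be built the same way. Only the event names and attribute keys come from this document; `content_event` is an invented helper, and the dict shape stands in for whatever event type the instrumentation's SDK provides.

```python
import json

# Hypothetical helper: emits a content event only when capture is enabled,
# serializing the messages as a JSON string (the RECOMMENDED format).
def content_event(event_name, attr_key, messages, capture_enabled):
    if not capture_enabled:
        return None  # event is Conditionally Required: only when enabled
    return {
        "name": event_name,
        "attributes": {attr_key: json.dumps(messages)},
    }

prompt_ev = content_event(
    "gen_ai.content.prompt", "gen_ai.prompt",
    [{"role": "user", "content": "What is the capital of France?"}], True)
completion_ev = content_event(
    "gen_ai.content.completion", "gen_ai.completion",
    [{"role": "assistant", "content": "The capital of France is Paris."}], True)
```

When capture is disabled the helper returns nothing, so no prompt or completion content ever reaches the span.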