Logging LLM provider request ids as gen_ai attributes
#2174
Closed
dinmukhamedm started this conversation in Ideas
Replies: 3 comments 6 replies
-
Also, I saw this comment in the semantic conventions code from 6 months ago. It looks like this was planned but never executed?
-
Do you mean to log an ID returned by the provider, or our own random ID?
-
Closing in favor of #2236
-
It would be very useful for debugging purposes (e.g. when working with OpenAI support) to have a unique identifier on an LLM call span. Has OpenLLMetry considered adding something like a gen_ai.request.id attribute?

The biggest challenge I see is that these ids are not unified: they are formatted differently across providers, and even across endpoints of a single provider (e.g. completions vs. assistants). Generally there is a request-wide unique ID, though, and where there is none the attribute can simply remain optional.
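To make the idea concrete, here is a minimal sketch (not real OpenLLMetry code) of how a provider-returned request id could be copied onto a span under the proposed gen_ai.request.id attribute. FakeSpan is a stand-in for an OpenTelemetry span, and the attribute name is the one proposed in this discussion, not an adopted convention:

```python
# Sketch: record a provider-returned request id as a span attribute.
# FakeSpan stands in for an OpenTelemetry span; "gen_ai.request.id"
# is the attribute name proposed in this discussion (hypothetical).

class FakeSpan:
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value


def record_request_id(span, response: dict) -> None:
    # Providers format ids differently (e.g. OpenAI chat completions
    # return an "id" like "chatcmpl-..."), and some responses carry
    # none at all, so the attribute stays optional.
    request_id = response.get("id")
    if request_id:
        span.set_attribute("gen_ai.request.id", request_id)


span = FakeSpan()
record_request_id(span, {"id": "chatcmpl-abc123"})
print(span.attributes)  # {'gen_ai.request.id': 'chatcmpl-abc123'}
```

Because the attribute is only set when the response actually carries an id, the same helper degrades gracefully for providers or endpoints that return nothing.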