feat(js): llama-index-ts llm support #829
base: llama-index-ts
Conversation
const LLMMetadataAttr: Attributes = {};
LLMMetadataAttr[SemanticConventions.LLM_MODEL_NAME] = LLMObjectMetadata.model;
// TODO: How to prevent conflicts with input.additionalChatOptions?
referencing line 187: https://github.com/Arize-ai/openinference/pull/829/files#r1714284980
yeah, if you are accumulating attributes for a specific key (in this case invocation parameters), you probably need to change the structure a bit:

function getInvocationParams({ stuffThatHasParams1, stuffThatHasParams2 }) {
  const firstParams = someFunc(stuffThatHasParams1);
  const otherParams = someFunc2(stuffThatHasParams2);
  return safelyJSONStringify({ ...firstParams, ...otherParams });
}

This is just an example, but something like this. Unfortunately you probably have to switch things up, since you really only want to stringify at the end, once all the params are accumulated.
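To make that concrete, here is a minimal typed sketch of the accumulate-then-stringify pattern. The extractor names (getModelParams, getChatOptions) are hypothetical stand-ins for the PR's actual logic, and it assumes SemanticConventions.LLM_INVOCATION_PARAMETERS from @arizeai/openinference-semantic-conventions is the target attribute key:

import { Attributes } from "@opentelemetry/api";
import { SemanticConventions } from "@arizeai/openinference-semantic-conventions";

// Hypothetical extractors standing in for the real parameter sources
function getModelParams(metadata: { temperature?: number; topP?: number }) {
  return { temperature: metadata.temperature, top_p: metadata.topP };
}
function getChatOptions(input: { additionalChatOptions?: Record<string, unknown> }) {
  return input.additionalChatOptions ?? {};
}

// Merge every source of invocation parameters first, then stringify once;
// the PR's safelyJSONStringify could replace JSON.stringify here
function getInvocationParamsAttributes(
  metadata: { temperature?: number; topP?: number },
  input: { additionalChatOptions?: Record<string, unknown> },
): Attributes {
  const merged = { ...getModelParams(metadata), ...getChatOptions(input) };
  return { [SemanticConventions.LLM_INVOCATION_PARAMETERS]: JSON.stringify(merged) };
}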
if (isLLMPrototype(prototype)) {
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  this._wrap(prototype, "chat", (original): any => {
is the return type any here just because of the variations on the types of "chat" that can come from the different classes?
I'm assuming you're referring to the "any" return type. The return type would be a function of the form (this: unknown, params: LLMChatParamsNonStreaming<object, object>) => Promise<ChatResponse<object>>. Should I assign this to a type alias and specify a strict return type here?
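For illustration, a sketch of what that stricter typing could look like; it reuses the LLMChatParamsNonStreaming and ChatResponse generics quoted above, and the alias name ChatMethod is hypothetical:

import type { ChatResponse, LLMChatParamsNonStreaming } from "llamaindex";

// Hypothetical type alias capturing the patched method's signature
type ChatMethod = (
  this: unknown,
  params: LLMChatParamsNonStreaming<object, object>,
) => Promise<ChatResponse<object>>;

// The wrapper could then be typed as ChatMethod instead of any:
// this._wrap(prototype, "chat", (original: ChatMethod): ChatMethod => { ... });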
// @ts-expect-error the response type is not correct - this is just for testing
async (): Promise<unknown> => {
  return CHAT_RESPONSE;
},
just curious, is there a way to make it correct? Totally fine to fall back to this if it's too much to fully mock, but if it's a fairly simple response we should try to get it right.
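If the response really is this simple, a typed mock may be small; a sketch, assuming llamaindex's ChatResponse is roughly { raw: object | null; message: ChatMessage } (worth verifying against the installed version):

import type { ChatResponse } from "llamaindex";

// Minimal typed mock; the field shapes are an assumption about ChatResponse
const TYPED_CHAT_RESPONSE: ChatResponse<object> = {
  raw: null,
  message: { role: "assistant", content: "This is a test response" },
};

// With a matching type, the @ts-expect-error comment could be dropped:
// async (): Promise<ChatResponse<object>> => TYPED_CHAT_RESPONSE,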
"Context information is below.\n" + | ||
"---------------------\n" + | ||
"lorem ipsum\n" + | ||
"---------------------\n" + | ||
"Given the context information and not prior knowledge, answer the query.\n" + |
for this portion of the query, where does it come from? Is this just injected by llama index? Did you plug in a prompt template somewhere here? I see the document content and the query below, but I'm curious about the rest.
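For reference, the wording matches what looks like LlamaIndex's built-in text QA prompt, so it is most likely injected by the framework rather than by a custom template in this PR. A sketch of that default, reconstructed from the strings in this test (the exact export in llamaindex should be checked):

// Reconstructed from the quoted test strings; LlamaIndex's default text QA
// prompt appears to wrap the retrieved context and the user query like this:
const textQaPromptSketch = ({ context, query }: { context: string; query: string }) =>
  "Context information is below.\n" +
  "---------------------\n" +
  `${context}\n` +
  "---------------------\n" +
  "Given the context information and not prior knowledge, answer the query.\n" +
  `Query: ${query}\n` +
  "Answer:";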
setSpanAttributes({
  span: span,
  attributes: safelyGetDocumentAttributes(result),
});
This is a new way to set span attributes that checks whether the safely... method returned null. Is this a good way to go about it?
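A minimal sketch of such a guard, assuming the safely* helpers return Attributes | null on failure; the body here illustrates the null check, not the PR's actual implementation:

import { Span, Attributes } from "@opentelemetry/api";

// Illustrative guard: only set attributes when extraction succeeded
function setSpanAttributes({
  span,
  attributes,
}: {
  span: Span;
  attributes: Attributes | null;
}) {
  if (attributes != null) {
    span.setAttributes(attributes);
  }
}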
@@ -50,7 +54,7 @@ export class LlamaIndexInstrumentation extends InstrumentationBase<
   protected init(): InstrumentationModuleDefinition<typeof llamaindex> {
     const module = new InstrumentationNodeModuleDefinition<typeof llamaindex>(
       "llamaindex",
-      [">=0.1.0"],
+      [">=0.5.0"],
Is this version range correct?
closes #615