# Add documentation about LlamaIndex #695
The style guide flagged several spelling errors that seemed like false positives. We skipped posting inline suggestions for the following words:
- LLMs
```python
# Get response
response = query_engine.query('How can I use Pydantic models?')
print(str(response))
```
Should it show something below this? 😄 Not sure what the response is.
Generally this seems good, but should we be instrumenting the things LlamaIndex does that aren't just making calls to the LLM providers?

Like, should we be instrumenting/discussing how to instrument the web page reading to build the index? Is there any instrumentation for llama_index that goes beyond the instrumentation of the LLM calls?

It would be nice to explain/show what sorts of things will show up in Logfire if you enable the LLM instrumentation and use llama_index: what do you see in Logfire as a result of that `query_engine.query` call?

But I think this is a reasonable starting point; I don't see a reason to delay merging this, even if we can add more.
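For context, here is a minimal sketch of the kind of example under discussion, assuming OpenAI as the underlying LLM and `SimpleWebPageReader` from `llama-index-readers-web` for building the index (the reader, URL, and query are illustrative placeholders, not necessarily what the PR's docs use):

```python
import logfire
from llama_index.core import VectorStoreIndex
from llama_index.readers.web import SimpleWebPageReader

logfire.configure()
# Instrument the OpenAI client library; LLM calls that LlamaIndex makes
# through it are recorded as spans in Logfire.
logfire.instrument_openai()

# Build an index from a web page and create a query engine over it.
# Note: this reading/indexing step is *not* covered by the LLM
# instrumentation, which is the reviewer's point above.
documents = SimpleWebPageReader(html_to_text=True).load_data(
    ['https://docs.pydantic.dev/latest/concepts/models/']
)
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# The LLM calls made while answering this query show up in Logfire.
response = query_engine.query('How can I use Pydantic models?')
print(str(response))
```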
> **Logfire** supports instrumenting calls to different LLMs with one extra line of code.
> Since LlamaIndex supports multiple LLMs, you can use **Logfire** to instrument calls to those LLMs.
Suggested change:

```diff
+The way we recommend instrumenting LlamaIndex is to instrument the underlying LLM, and to rely on that instrumentation to ensure that calls made by LlamaIndex end up in Logfire.
 **Logfire** supports instrumenting calls to different LLMs with one extra line of code.
 Since LlamaIndex supports multiple LLMs, you can use **Logfire** to instrument calls to those LLMs.
```
I think we can be a bit more explicit here that we aren't specifically instrumenting LlamaIndex, we are just relying on its usage of something that is instrumented. That said, if there is an actual LlamaIndex instrumentation maybe we should add that. Not sure if there is or if the only out-of-the-box thing available is the LLM instrumentation.
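Concretely, the "instrument the underlying LLM" recommendation could look something like the following sketch (assuming OpenAI is the LLM configured in LlamaIndex; a different provider would need its own instrumentation call, if one exists):

```python
import logfire

logfire.configure()
# The one extra line: instrument the OpenAI client library itself.
# We aren't instrumenting LlamaIndex directly; its requests to OpenAI
# go through the instrumented client and therefore end up in Logfire.
logfire.instrument_openai()
```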