feat: add role-specific statistics to telemetry and UI (cont. #15234)#18824
Summary of Changes (Gemini Code Assist): This pull request enhances the observability of LLM interactions by introducing a mechanism to categorize and track model usage by role. This provides a more detailed breakdown of how different parts of the application use LLMs, enabling better analysis of performance, cost, and overall system behavior. The updated UI components present these new metrics, giving users and developers clearer insight into the underlying model operations.
Size Change: +6.85 kB (+0.03%) Total Size: 24.4 MB
Code Review
This pull request introduces the concept of LlmRole to categorize and track LLM calls by purpose (e.g., main agent, subagent, or utility tools such as summarizers and routers). The changes define the LlmRole enum, integrate it into the generateContent and generateJson calls across the codebase (including GeminiChat, BaseLlmClient, the ContentGenerator implementations, and UI components such as TriageDuplicates and TriageIssues), and update the telemetry events (ApiRequestEvent, ApiErrorEvent, ApiResponseEvent) to include the role. Crucially, the ModelStatsDisplay and StatsDisplay UI components are enhanced to visualize these role-based metrics, showing a per-role breakdown of requests, tokens, and latency for each model, with new test cases verifying this functionality and handling long role names.
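The telemetry change described in the review can be sketched roughly as follows. This is a hypothetical illustration (the event shapes, field names, and role values here are assumptions, not the PR's actual code): each API event carries the role that issued the call, so the stats display can group metrics per role.

```typescript
// Hypothetical sketch: threading a role through a telemetry event.
// Real event classes in the PR carry many more fields.
enum LlmRole {
  MainAgent = 'main_agent',
  Router = 'router',
}

class ApiResponseEvent {
  constructor(
    readonly model: string,
    readonly durationMs: number,
    readonly role: LlmRole, // new: which role issued the call
  ) {}
}

const events: ApiResponseEvent[] = [
  new ApiResponseEvent('gemini-2.5-pro', 420, LlmRole.MainAgent),
  new ApiResponseEvent('gemini-2.5-flash', 90, LlmRole.Router),
];

// Aggregate latency by role, as a stats display might:
const byRole: Record<string, number> = {};
for (const e of events) {
  byRole[e.role] = (byRole[e.role] ?? 0) + e.durationMs;
}
```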

This PR picks up the work from #15234.
Summary
Add role-specific statistics to `/stats model`.

`/stats model` demo:
Details
- Introduces an `LlmRole` enum to classify LLM interactions (e.g., main agent, subagents, and various utility functions such as summarization, routing, and autocomplete).
- Telemetry events now include the `LlmRole`.
- Updates `ModelStatsDisplay` to visually present these role-specific LLM usage breakdowns, offering a more insightful view of model activity.
- The `LlmRole` parameter has been consistently integrated across the core components, utility functions, and client calls that interact with LLMs, ensuring comprehensive data collection.

UI changes are minimal and just intended as a starting point for UX discussions. The core work is migrating all the code in `packages/core` to use the new tracking.
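The enum and the per-model, per-role aggregation behind the stats display could look roughly like this. A minimal sketch only: the role names, stats fields, and `recordCall` helper are illustrative assumptions, not the PR's actual API.

```typescript
// Hypothetical LlmRole enum; actual names/values in the PR may differ.
enum LlmRole {
  MainAgent = 'main_agent',
  Subagent = 'subagent',
  Summarizer = 'summarizer',
  Router = 'router',
  Autocomplete = 'autocomplete',
}

interface RoleStats {
  requests: number;
  tokens: number;
  totalLatencyMs: number;
}

// Stats keyed by model, then by role, mirroring the per-model
// role breakdown shown in ModelStatsDisplay.
const stats = new Map<string, Map<LlmRole, RoleStats>>();

function recordCall(
  model: string,
  role: LlmRole,
  tokens: number,
  latencyMs: number,
): void {
  const byRole = stats.get(model) ?? new Map<LlmRole, RoleStats>();
  const entry = byRole.get(role) ?? { requests: 0, tokens: 0, totalLatencyMs: 0 };
  entry.requests += 1;
  entry.tokens += tokens;
  entry.totalLatencyMs += latencyMs;
  byRole.set(role, entry);
  stats.set(model, byRole);
}

recordCall('gemini-2.5-pro', LlmRole.MainAgent, 1200, 800);
recordCall('gemini-2.5-pro', LlmRole.Summarizer, 300, 150);
```

The nested-map shape makes the UI rendering straightforward: iterate models, then iterate that model's roles to emit one row per (model, role) pair.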
Related Issues
Fixes #14538
How to Validate
Exercise various models and trigger subagents, autocomplete, and the other paths through which Gemini CLI uses models, and confirm that each call is attributed to the correct role.
When reviewing the code in `packages/core`, make sure we never set a default value for `LlmRole`. The plan was to require all code to pass in a valid `LlmRole` value so that static analysis could catch any call site where it wasn't provided. There was one case I had to catch manually: Gemini CLI added a default value to make tests pass, which introduced a bug where tokens that weren't from the main event loop were tagged as main event loop.
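The "no default value" rule can be sketched as follows. This is an illustrative example, not the PR's actual signatures: making the role a required (non-optional, non-defaulted) property means the TypeScript compiler rejects any call site that forgets to pass one, which is exactly the failure mode the defaulted parameter masked.

```typescript
// Hypothetical sketch: a required role with no default, so the
// compiler flags forgotten call sites. Names are illustrative.
enum LlmRole {
  MainAgent = 'main_agent',
  Subagent = 'subagent',
}

interface GenerateParams {
  prompt: string;
  role: LlmRole; // required: no `role?:` and no `= LlmRole.MainAgent`
}

function generateContent({ prompt, role }: GenerateParams): string {
  // Tag the (stubbed) result with the role for attribution.
  return `[${role}] ${prompt}`;
}

// OK: role passed explicitly.
const out = generateContent({
  prompt: 'Summarize this file',
  role: LlmRole.Subagent,
});

// Compile error if omitted, which is the point:
// generateContent({ prompt: 'oops' }); // Property 'role' is missing
```

By contrast, `role: LlmRole = LlmRole.MainAgent` would compile everywhere and silently attribute subagent traffic to the main event loop, the bug described above.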
Pre-Merge Checklist