fix(logs): improve cache tooltip display when 5m/1h breakdown unavailable #445
Conversation
- Add smart inference for cache creation display based on cacheTtlApplied
- When only total cache tokens are available, display under 5m or 1h based on TTL
- Add cacheCreationInputTokens prop to ErrorDetailsDialog for billing details

Fixes ding113#444

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
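For context, a rough sketch of the prop surface this adds to ErrorDetailsDialog is shown below; the interface name and the pre-existing fields are assumptions inferred from the diff, and only cacheCreationInputTokens (typed number | null) is the addition described here.

```ts
// Sketch only: the interface name and surrounding fields are assumptions
// inferred from the diff; cacheCreationInputTokens is the prop this PR adds.
interface ErrorDetailsDialogCacheProps {
  cacheCreationInputTokens: number | null; // total cache-write tokens (new in this PR)
  cacheCreation5mInputTokens: number | null; // 5m breakdown, when upstream provides it
  cacheCreation1hInputTokens: number | null; // 1h breakdown, when upstream provides it
  cacheTtlApplied: string | null; // TTL actually applied, e.g. "5m" or "1h"
}
```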
Summary of Changes

Hello @Hwwwww-dev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses an issue where the cache token breakdown (5m/1h) was not displayed correctly when only the total cache creation tokens were available. It introduces a more robust display mechanism that infers the appropriate 5m or 1h category based on the applied cache TTL, ensuring that users always see relevant cache write information in both log tooltips and detailed billing dialogs.

Highlights
Code Review
This pull request improves the display of cache token usage by inferring values when a detailed breakdown is unavailable. The logic correctly implements the intended behavior. However, the changes introduce some complex and duplicated code in the UI components. My review focuses on refactoring this new logic to improve readability and maintainability, making the code easier to manage in the future.
+ {((cacheCreation5mInputTokens ?? 0) > 0 ||
+   ((cacheCreationInputTokens ?? 0) > 0 && cacheTtlApplied !== "1h")) && (
    <div className="flex justify-between">
      <span className="text-muted-foreground">
        {t("logs.billingDetails.cacheWrite5m")}:
      </span>
      <span className="font-mono">
-       {formatTokenAmount(cacheCreation5mInputTokens)} tokens{" "}
-       <span className="text-orange-600">(1.25x)</span>
+       {formatTokenAmount(
+         (cacheCreation5mInputTokens ?? 0) > 0
+           ? cacheCreation5mInputTokens
+           : cacheCreationInputTokens
+       )}{" "}
+       tokens <span className="text-orange-600">(1.25x)</span>
      </span>
    </div>
  )}
- {(cacheCreation1hInputTokens ?? 0) > 0 && (
+ {((cacheCreation1hInputTokens ?? 0) > 0 ||
+   ((cacheCreationInputTokens ?? 0) > 0 && cacheTtlApplied === "1h")) && (
    <div className="flex justify-between">
      <span className="text-muted-foreground">
        {t("logs.billingDetails.cacheWrite1h")}:
      </span>
      <span className="font-mono">
-       {formatTokenAmount(cacheCreation1hInputTokens)} tokens{" "}
-       <span className="text-orange-600">(2x)</span>
+       {formatTokenAmount(
+         (cacheCreation1hInputTokens ?? 0) > 0
+           ? cacheCreation1hInputTokens
+           : cacheCreationInputTokens
+       )}{" "}
+       tokens <span className="text-orange-600">(2x)</span>
      </span>
    </div>
  )}
The logic for displaying the 5m and 1h cache write tokens is duplicated and makes the component harder to read. To improve readability and maintainability, you can use a self-invoking function for each block. This encapsulates the logic, removes repetition, and makes the rendering logic cleaner.
{(() => {
const tokens =
(cacheCreation5mInputTokens ?? 0) > 0
? cacheCreation5mInputTokens
: cacheTtlApplied !== "1h"
? cacheCreationInputTokens
: null;
if ((tokens ?? 0) <= 0) return null;
return (
<div className="flex justify-between">
<span className="text-muted-foreground">
{t("logs.billingDetails.cacheWrite5m")}:
</span>
<span className="font-mono">
{formatTokenAmount(tokens)} tokens{" "}
<span className="text-orange-600">(1.25x)</span>
</span>
</div>
);
})()}
{(() => {
const tokens =
(cacheCreation1hInputTokens ?? 0) > 0
? cacheCreation1hInputTokens
: cacheTtlApplied === "1h"
? cacheCreationInputTokens
: null;
if ((tokens ?? 0) <= 0) return null;
return (
<div className="flex justify-between">
<span className="text-muted-foreground">
{t("logs.billingDetails.cacheWrite1h")}:
</span>
<span className="font-mono">
{formatTokenAmount(tokens)} tokens{" "}
<span className="text-orange-600">(2x)</span>
</span>
</div>
);
})()}
+     {formatTokenAmount(
+       (log.cacheCreation5mInputTokens ?? 0) > 0
+         ? log.cacheCreation5mInputTokens
+         : log.cacheTtlApplied !== "1h"
+           ? log.cacheCreationInputTokens
+           : 0
+     )}
    </div>
    <div className="pl-2">
-     1h: {formatTokenAmount(log.cacheCreation1hInputTokens)}
+     1h:{" "}
+     {formatTokenAmount(
+       (log.cacheCreation1hInputTokens ?? 0) > 0
+         ? log.cacheCreation1hInputTokens
+         : log.cacheTtlApplied === "1h"
+           ? log.cacheCreationInputTokens
+           : 0
+     )}
The logic to determine the token amount for 5m and 1h cache writes uses nested ternary operators, which are difficult to read and maintain. This exact logic is also duplicated in src/app/[locale]/dashboard/logs/_components/virtualized-logs-table.tsx.
To improve code quality, I recommend extracting this logic into a shared helper function. For example, you could create a function in a utility file:
// in a utils file
export function getDisplayCacheTokens(log, type: '5m' | '1h') {
if (type === '5m') {
if ((log.cacheCreation5mInputTokens ?? 0) > 0) {
return log.cacheCreation5mInputTokens;
}
return log.cacheTtlApplied !== '1h' ? log.cacheCreationInputTokens : 0;
}
// type === '1h'
if ((log.cacheCreation1hInputTokens ?? 0) > 0) {
return log.cacheCreation1hInputTokens;
}
return log.cacheTtlApplied === '1h' ? log.cacheCreationInputTokens : 0;
}

Using this helper would make the component code much cleaner and prevent logic duplication.
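If the helper is adopted, the tooltip rows could shrink to something like the sketch below; the module path, component name, and CacheLogFields type are hypothetical, while the field names and getDisplayCacheTokens come from this PR and the suggestion above.

```tsx
// Sketch only: "@/lib/usage-format" and CacheLogFields are hypothetical;
// getDisplayCacheTokens is the shared helper suggested above.
import { formatTokenAmount, getDisplayCacheTokens } from "@/lib/usage-format";

type CacheLogFields = {
  cacheCreationInputTokens: number | null;
  cacheCreation5mInputTokens: number | null;
  cacheCreation1hInputTokens: number | null;
  cacheTtlApplied: string | null;
};

export function CacheWriteTooltipRows({ log }: { log: CacheLogFields }) {
  return (
    <>
      <div className="pl-2">5m: {formatTokenAmount(getDisplayCacheTokens(log, "5m"))}</div>
      <div className="pl-2">1h: {formatTokenAmount(getDisplayCacheTokens(log, "1h"))}</div>
    </>
  );
}
```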
+     {formatTokenAmount(
+       (log.cacheCreation5mInputTokens ?? 0) > 0
+         ? log.cacheCreation5mInputTokens
+         : log.cacheTtlApplied !== "1h"
+           ? log.cacheCreationInputTokens
+           : 0
+     )}
    </div>
    <div className="pl-2">
-     1h: {formatTokenAmount(log.cacheCreation1hInputTokens)}
+     1h:{" "}
+     {formatTokenAmount(
+       (log.cacheCreation1hInputTokens ?? 0) > 0
+         ? log.cacheCreation1hInputTokens
+         : log.cacheTtlApplied === "1h"
+           ? log.cacheCreationInputTokens
+           : 0
+     )}
This component duplicates the complex logic for calculating cache token amounts from usage-logs-table.tsx. Using nested ternary operators here makes the code difficult to read. This is a good opportunity for refactoring. As mentioned in the review for usage-logs-table.tsx, please consider creating a single, shared helper function in a utility file to handle this calculation. This will improve maintainability and readability by centralizing the logic.
Code Review Summary
No significant issues identified in this PR. The implementation correctly adds smart inference for cache creation display when the 5m/1h breakdown is unavailable, using the cacheTtlApplied field to determine which category to display the total cache tokens under.
PR Size: S
- Lines changed: 62
- Files changed: 3
Review Coverage
- Logic and correctness - Clean
  - The fallback logic correctly handles all scenarios:
    - When the breakdown (5m/1h) is available, uses it
    - When only the total (cacheCreationInputTokens) is available, infers based on cacheTtlApplied
- Security (OWASP Top 10) - Clean
- Error handling - Clean (display-only logic, no error handling required)
- Type safety - Clean (new prop properly typed as number | null)
- Documentation accuracy - Clean (Chinese comment accurately describes the prop)
- Test coverage - No tests exist for these UI components (pre-existing)
- Code clarity - Good (consistent pattern across all 3 files)
Notes
- The ternary logic for cache token display is repeated in 3 places but is consistent and follows the existing codebase patterns
- The change is purely presentational and does not affect billing calculations or data persistence
Automated review by Claude AI
Summary
Improves cache tooltip display in the logs page when the 5-minute/1-hour cache token breakdown is unavailable, using smart inference based on cacheTtlApplied to provide meaningful data.

Problem

When upstream providers return only cache_creation_input_tokens without the detailed breakdown (cache_creation.ephemeral_5m_input_tokens or ephemeral_1h_input_tokens), the tooltip displays "5m: 0" and "1h: 0" even though there are actual cache creation tokens recorded.

Related Issues:
Solution
Implement smart inference for cache creation display based on cacheTtlApplied: when only cacheCreationInputTokens is available (no 5m/1h breakdown), display the total under the appropriate category (5m or 1h) based on the applied TTL.

Changes
Core Changes
- usage-logs-table.tsx
- virtualized-logs-table.tsx
- error-details-dialog.tsx: add cacheCreationInputTokens prop and inference logic for the billing details modal

Logic Applied
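The PR applies this inline in the three components above; purely as an illustration, a minimal sketch of the inference is shown below, assuming the field names from the diff (the function name displayedCacheTokens and the inline type are hypothetical, not the PR's actual code).

```ts
// Illustration only: mirrors the display-time fallback described in this PR.
// Field names come from the diff; the function name and inline type are assumptions.
function displayedCacheTokens(
  log: {
    cacheCreationInputTokens: number | null;
    cacheCreation5mInputTokens: number | null;
    cacheCreation1hInputTokens: number | null;
    cacheTtlApplied: string | null;
  },
  bucket: "5m" | "1h"
): number {
  const detailed =
    bucket === "5m"
      ? log.cacheCreation5mInputTokens
      : log.cacheCreation1hInputTokens;
  // Prefer the explicit 5m/1h breakdown when the upstream provider returned it.
  if ((detailed ?? 0) > 0) return detailed ?? 0;
  // Otherwise attribute the total cache-creation tokens to the bucket that
  // matches the applied TTL; anything other than "1h" is treated as 5m.
  const matchesTtl =
    bucket === "1h"
      ? log.cacheTtlApplied === "1h"
      : log.cacheTtlApplied !== "1h";
  return matchesTtl ? (log.cacheCreationInputTokens ?? 0) : 0;
}
```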
Screenshots
Checklist
Description enhanced by Claude AI