
fix(logs): improve cache tooltip display when 5m/1h breakdown unavail…#445

Merged
ding113 merged 1 commit into ding113:dev from Hwwwww-dev:fix/issue-444-cache-tooltip-display
Dec 25, 2025

Conversation

@Hwwwww-dev (Contributor) commented Dec 24, 2025

Summary

Improves the cache tooltip display on the logs page when the 5-minute/1-hour cache token breakdown is unavailable, using smart inference based on cacheTtlApplied to show meaningful totals instead of zeros.

Problem

When upstream providers return only cache_creation_input_tokens without the detailed breakdown (cache_creation.ephemeral_5m_input_tokens or ephemeral_1h_input_tokens), the tooltip displays "5m: 0" and "1h: 0" even though there are actual cache creation tokens recorded.

Related Issues:

  • ding113#444

Solution

Implement smart inference for cache creation display based on cacheTtlApplied:

  • When only total cacheCreationInputTokens is available (no 5m/1h breakdown), display the total under the appropriate category (5m or 1h) based on the applied TTL
  • If TTL is "1h", show total under 1h cache; otherwise show under 5m cache
  • When specific 5m/1h values exist, use those values (existing behavior preserved)

Changes

Core Changes

| File | Purpose |
| --- | --- |
| usage-logs-table.tsx | Add smart cache token inference to tooltip display |
| virtualized-logs-table.tsx | Same tooltip logic for the virtualized table variant |
| error-details-dialog.tsx | Add cacheCreationInputTokens prop and inference logic for the billing details modal |

Logic Applied

```ts
// For 5m display:
(cacheCreation5mInputTokens ?? 0) > 0
  ? cacheCreation5mInputTokens
  : cacheTtlApplied !== "1h"
    ? cacheCreationInputTokens  // Use total when TTL is not 1h
    : 0

// For 1h display:
(cacheCreation1hInputTokens ?? 0) > 0
  ? cacheCreation1hInputTokens
  : cacheTtlApplied === "1h"
    ? cacheCreationInputTokens  // Use total when TTL is 1h
    : 0
```
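Read in isolation, the two ternaries above can be hard to trace. The sketch below is a hypothetical standalone mirror of that logic (the `displayed5m`/`displayed1h` names and the `CacheLog` type are invented for illustration and are not the component code); it exercises the exact case this PR fixes: only the total is present and the 1h TTL was applied.

```typescript
// Hypothetical standalone mirror of the tooltip fallback above.
// Field names follow the PR; the CacheLog type is assumed.
type CacheLog = {
  cacheCreationInputTokens: number | null;
  cacheCreation5mInputTokens: number | null;
  cacheCreation1hInputTokens: number | null;
  cacheTtlApplied: string | null;
};

function displayed5m(log: CacheLog): number {
  return (log.cacheCreation5mInputTokens ?? 0) > 0
    ? (log.cacheCreation5mInputTokens as number)
    : log.cacheTtlApplied !== "1h"
      ? (log.cacheCreationInputTokens ?? 0)
      : 0;
}

function displayed1h(log: CacheLog): number {
  return (log.cacheCreation1hInputTokens ?? 0) > 0
    ? (log.cacheCreation1hInputTokens as number)
    : log.cacheTtlApplied === "1h"
      ? (log.cacheCreationInputTokens ?? 0)
      : 0;
}

// Only the total is known and the 1h TTL applied:
// the whole total is routed to the 1h bucket, not shown as "1h: 0".
const onlyTotal1h: CacheLog = {
  cacheCreationInputTokens: 1200,
  cacheCreation5mInputTokens: null,
  cacheCreation1hInputTokens: null,
  cacheTtlApplied: "1h",
};
console.log(displayed5m(onlyTotal1h), displayed1h(onlyTotal1h)); // 0 1200
```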

Screenshots

(four screenshot images, not reproduced in this export)

Checklist

  • Code follows project conventions
  • Self-review completed
  • No breaking changes
  • Both virtualized and non-virtualized table variants updated
  • Billing details dialog updated for consistency

Description enhanced by Claude AI

fix(logs): improve cache tooltip display when 5m/1h breakdown unavailable

- Add smart inference for cache creation display based on cacheTtlApplied
- When only total cache tokens available, display under 5m or 1h based on TTL
- Add cacheCreationInputTokens prop to ErrorDetailsDialog for billing details
- Fixes ding113#444

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@gemini-code-assist (Contributor)

Summary of Changes

Hello @Hwwwww-dev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue where cache token breakdown (5m/1h) was not displayed correctly when only the total cache creation tokens were available. It introduces a more robust display mechanism that infers the appropriate 5m or 1h category based on the applied cache TTL, ensuring that users always see relevant cache write information in both log tooltips and detailed billing dialogs.

Highlights

  • Improved Cache Display Logic: Implemented smart inference for displaying cache creation tokens in tooltips and billing details. When specific 5m/1h breakdown data is unavailable, the total cache creation tokens are now intelligently allocated to either the 5m or 1h category based on the cacheTtlApplied value.
  • New Prop for Billing Details: Introduced cacheCreationInputTokens as a new prop to the ErrorDetailsDialog component, allowing for more comprehensive billing details when only total cache creation tokens are provided.

@github-actions github-actions bot added bug Something isn't working area:UI labels Dec 24, 2025
@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request improves the display of cache token usage by inferring values when a detailed breakdown is unavailable. The logic correctly implements the intended behavior. However, the changes introduce some complex and duplicated code in the UI components. My review focuses on refactoring this new logic to improve readability and maintainability, making the code easier to manage in the future.

Comment on lines +426 to 457
```diff
+{((cacheCreation5mInputTokens ?? 0) > 0 ||
+  ((cacheCreationInputTokens ?? 0) > 0 && cacheTtlApplied !== "1h")) && (
   <div className="flex justify-between">
     <span className="text-muted-foreground">
       {t("logs.billingDetails.cacheWrite5m")}:
     </span>
     <span className="font-mono">
-      {formatTokenAmount(cacheCreation5mInputTokens)} tokens{" "}
-      <span className="text-orange-600">(1.25x)</span>
+      {formatTokenAmount(
+        (cacheCreation5mInputTokens ?? 0) > 0
+          ? cacheCreation5mInputTokens
+          : cacheCreationInputTokens
+      )}{" "}
+      tokens <span className="text-orange-600">(1.25x)</span>
     </span>
   </div>
 )}
-{(cacheCreation1hInputTokens ?? 0) > 0 && (
+{((cacheCreation1hInputTokens ?? 0) > 0 ||
+  ((cacheCreationInputTokens ?? 0) > 0 && cacheTtlApplied === "1h")) && (
   <div className="flex justify-between">
     <span className="text-muted-foreground">
       {t("logs.billingDetails.cacheWrite1h")}:
     </span>
     <span className="font-mono">
-      {formatTokenAmount(cacheCreation1hInputTokens)} tokens{" "}
-      <span className="text-orange-600">(2x)</span>
+      {formatTokenAmount(
+        (cacheCreation1hInputTokens ?? 0) > 0
+          ? cacheCreation1hInputTokens
+          : cacheCreationInputTokens
+      )}{" "}
+      tokens <span className="text-orange-600">(2x)</span>
     </span>
   </div>
 )}
```
Severity: medium

The logic for displaying the 5m and 1h cache write tokens is duplicated and makes the component harder to read. To improve readability and maintainability, you can use a self-invoking function for each block. This encapsulates the logic, removes repetition, and makes the rendering logic cleaner.

```tsx
{(() => {
  const tokens =
    (cacheCreation5mInputTokens ?? 0) > 0
      ? cacheCreation5mInputTokens
      : cacheTtlApplied !== "1h"
        ? cacheCreationInputTokens
        : null;
  if ((tokens ?? 0) <= 0) return null;
  return (
    <div className="flex justify-between">
      <span className="text-muted-foreground">
        {t("logs.billingDetails.cacheWrite5m")}:
      </span>
      <span className="font-mono">
        {formatTokenAmount(tokens)} tokens{" "}
        <span className="text-orange-600">(1.25x)</span>
      </span>
    </div>
  );
})()}
{(() => {
  const tokens =
    (cacheCreation1hInputTokens ?? 0) > 0
      ? cacheCreation1hInputTokens
      : cacheTtlApplied === "1h"
        ? cacheCreationInputTokens
        : null;
  if ((tokens ?? 0) <= 0) return null;
  return (
    <div className="flex justify-between">
      <span className="text-muted-foreground">
        {t("logs.billingDetails.cacheWrite1h")}:
      </span>
      <span className="font-mono">
        {formatTokenAmount(tokens)} tokens{" "}
        <span className="text-orange-600">(2x)</span>
      </span>
    </div>
  );
})()}
```

Comment on lines +256 to +272
```diff
+{formatTokenAmount(
+  (log.cacheCreation5mInputTokens ?? 0) > 0
+    ? log.cacheCreation5mInputTokens
+    : log.cacheTtlApplied !== "1h"
+      ? log.cacheCreationInputTokens
+      : 0
+)}
 </div>
 <div className="pl-2">
-1h: {formatTokenAmount(log.cacheCreation1hInputTokens)}
+1h:{" "}
+{formatTokenAmount(
+  (log.cacheCreation1hInputTokens ?? 0) > 0
+    ? log.cacheCreation1hInputTokens
+    : log.cacheTtlApplied === "1h"
+      ? log.cacheCreationInputTokens
+      : 0
+)}
```
Severity: medium

The logic to determine the token amount for 5m and 1h cache writes uses nested ternary operators, which are difficult to read and maintain. This exact logic is also duplicated in src/app/[locale]/dashboard/logs/_components/virtualized-logs-table.tsx.

To improve code quality, I recommend extracting this logic into a shared helper function. For example, you could create a function in a utility file:

```ts
// in a utils file
export function getDisplayCacheTokens(log, type: '5m' | '1h') {
  if (type === '5m') {
    if ((log.cacheCreation5mInputTokens ?? 0) > 0) {
      return log.cacheCreation5mInputTokens;
    }
    return log.cacheTtlApplied !== '1h' ? log.cacheCreationInputTokens : 0;
  }
  // type === '1h'
  if ((log.cacheCreation1hInputTokens ?? 0) > 0) {
    return log.cacheCreation1hInputTokens;
  }
  return log.cacheTtlApplied === '1h' ? log.cacheCreationInputTokens : 0;
}
```

Using this helper would make the component code much cleaner and prevent logic duplication.
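The reviewer's sketch above is untyped. A self-contained, typed variant (the `CacheTokenFields` shape is assumed here; `getDisplayCacheTokens` is the reviewer's suggested name and does not exist in the repository) would also make the precedence rule explicit: an explicit 5m/1h value always wins over the inferred total.

```typescript
// Typed sketch of the reviewer's suggested helper; field names follow
// the PR, but the CacheTokenFields shape itself is assumed.
interface CacheTokenFields {
  cacheCreationInputTokens: number | null;
  cacheCreation5mInputTokens: number | null;
  cacheCreation1hInputTokens: number | null;
  cacheTtlApplied: string | null;
}

function getDisplayCacheTokens(
  log: CacheTokenFields,
  type: "5m" | "1h"
): number {
  if (type === "5m") {
    // An explicit 5m value takes precedence over the inferred total.
    if ((log.cacheCreation5mInputTokens ?? 0) > 0) {
      return log.cacheCreation5mInputTokens as number;
    }
    // No breakdown: attribute the total to 5m unless the 1h TTL applied.
    return log.cacheTtlApplied !== "1h" ? (log.cacheCreationInputTokens ?? 0) : 0;
  }
  if ((log.cacheCreation1hInputTokens ?? 0) > 0) {
    return log.cacheCreation1hInputTokens as number;
  }
  return log.cacheTtlApplied === "1h" ? (log.cacheCreationInputTokens ?? 0) : 0;
}

// Explicit breakdown present: it wins over the total, and the other
// bucket stays at zero rather than double-counting.
const withBreakdown: CacheTokenFields = {
  cacheCreationInputTokens: 500,
  cacheCreation5mInputTokens: 300,
  cacheCreation1hInputTokens: null,
  cacheTtlApplied: null,
};
console.log(getDisplayCacheTokens(withBreakdown, "5m")); // 300
console.log(getDisplayCacheTokens(withBreakdown, "1h")); // 0
```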

Comment on lines +432 to +448
```diff
+{formatTokenAmount(
+  (log.cacheCreation5mInputTokens ?? 0) > 0
+    ? log.cacheCreation5mInputTokens
+    : log.cacheTtlApplied !== "1h"
+      ? log.cacheCreationInputTokens
+      : 0
+)}
 </div>
 <div className="pl-2">
-1h: {formatTokenAmount(log.cacheCreation1hInputTokens)}
+1h:{" "}
+{formatTokenAmount(
+  (log.cacheCreation1hInputTokens ?? 0) > 0
+    ? log.cacheCreation1hInputTokens
+    : log.cacheTtlApplied === "1h"
+      ? log.cacheCreationInputTokens
+      : 0
+)}
```
Severity: medium

This component duplicates the complex logic for calculating cache token amounts from usage-logs-table.tsx. Using nested ternary operators here makes the code difficult to read. This is a good opportunity for refactoring. As mentioned in the review for usage-logs-table.tsx, please consider creating a single, shared helper function in a utility file to handle this calculation. This will improve maintainability and readability by centralizing the logic.

@github-actions github-actions bot added the size/S Small PR (< 200 lines) label Dec 24, 2025
@github-actions github-actions bot left a comment

Code Review Summary

No significant issues identified in this PR. The implementation correctly adds smart inference for cache creation display when the 5m/1h breakdown is unavailable, using the cacheTtlApplied field to determine which category to display the total cache tokens under.

PR Size: S

  • Lines changed: 62
  • Files changed: 3

Review Coverage

  • Logic and correctness - Clean
    • The fallback logic correctly handles all scenarios:
      • When breakdown (5m/1h) is available, uses it
      • When only total (cacheCreationInputTokens) is available, infers based on cacheTtlApplied
  • Security (OWASP Top 10) - Clean
  • Error handling - Clean (display-only logic, no error handling required)
  • Type safety - Clean (new prop properly typed as number | null)
  • Documentation accuracy - Clean (Chinese comment accurately describes the prop)
  • Test coverage - No tests exist for these UI components (pre-existing)
  • Code clarity - Good (consistent pattern across all 3 files)

Notes

  • The ternary logic for cache token display is repeated in 3 places but is consistent and follows the existing codebase patterns
  • The change is purely presentational and does not affect billing calculations or data persistence

Automated review by Claude AI

@ding113 ding113 merged commit b5a7226 into ding113:dev Dec 25, 2025
11 of 13 checks passed
@github-project-automation github-project-automation bot moved this from Backlog to Done in Claude Code Hub Roadmap Dec 25, 2025
@Hwwwww-dev Hwwwww-dev deleted the fix/issue-444-cache-tooltip-display branch December 25, 2025 08:27
@github-actions github-actions bot mentioned this pull request Dec 29, 2025