Conversation
Fixes #685

This commit addresses the GitHub API rate limit issue caused by excessive worktree PR status fetching.

## Changes

### Server-side PR caching (list.ts)
- Added `GitHubPRCacheEntry` interface and `githubPRCache` Map
- Implemented 2-minute TTL cache for GitHub PR data
- Modified `fetchGitHubPRs()` to check the cache before making API calls
- Added `forceRefresh` parameter to bypass the cache when explicitly requested
- Cache is properly cleared when a force refresh is triggered

### Frontend polling reduction (worktree-panel.tsx)
- Increased worktree polling interval from 5 seconds to 30 seconds
- Reduces polling frequency by 6x while keeping the UI reasonably fresh
- Updated comment to reflect the new polling strategy

### Type improvements (use-worktrees.ts)
- Fixed `fetchWorktrees` callback signature to accept a `silent` option
- Returns the proper type for removed-worktree detection

## Impact
- Combined ~12x reduction in GitHub API calls
- 2-minute cache prevents repeated API hits during normal operation
- 30-second polling balances responsiveness with API conservation
- Force-refresh option allows users to manually update when needed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Summary of Changes

Hello @Shironex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses and resolves the issue of GitHub API rate limiting by implementing a comprehensive strategy to reduce the frequency of API calls. It introduces server-side caching for GitHub PR data and significantly extends the frontend polling interval. These changes collectively lead to a substantial reduction in API requests, ensuring the application operates within GitHub's rate limits while maintaining a responsive user experience.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in its review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds a per-project GitHub PR cache with TTL and a force-refresh option on the server, updates the UI worktrees hook to return removed worktrees and accept a silent option, and increases the UI polling interval from 5s to 30s.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as "UI / Client"
    participant Server as "Server (worktree list)"
    participant Cache as "Server Cache (githubPRCache)"
    participant GH as "GitHub / gh CLI"
    UI->>Server: Request worktree list (includeDetails, forceRefreshGitHub?)
    alt includeDetails requested
        Server->>Cache: Check githubPRCache[projectPath]
        alt cache valid && !forceRefresh
            Cache-->>Server: Return cached PRs
        else
            Server->>GH: Fetch PRs (gh pr list / GraphQL)
            alt GH returns PRs
                GH-->>Server: PRs
                Server->>Cache: Store PRs + fetchedAt
            else GH fails
                alt Cache has stale entry
                    Cache-->>Server: Return stale PRs
                else
                    Server-->>UI: Return empty PRs (warn)
                end
            end
        end
    end
    Server-->>UI: Worktree list (with PR info)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Code Review
This pull request effectively addresses the GitHub API rate-limiting issue by introducing server-side caching for PR data and reducing the frontend polling frequency. The changes are well-structured and directly tackle the root causes of the problem. The type improvements in the use-worktrees hook are also a welcome addition for maintainability. I have one suggestion regarding the cache invalidation logic to make it more resilient to API flakiness, but overall this is a solid fix.
…d API calls

Address PR #688 review feedback: previously the cache was deleted before the fetch, causing repeated API calls if the fetch failed. Now the cache entry is preserved and stale data is returned on failure, preventing unnecessary API calls during GitHub API flakiness or temporary outages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/server/src/routes/worktree/routes/list.ts`:
- Around line 248-259: When a fetch fails and you return stale cached PRs,
update the cache's fetchedAt to postpone TTL and avoid immediate retries; in the
catch block where you check cached, set or update githubPRCache for projectPath
so fetchedAt becomes Date.now() (or apply a simple backoff window) while keeping
prs as cached.prs, then return cached.prs (use symbols githubPRCache,
projectPath, cached, fetchedAt, getErrorMessage to locate the change).
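A minimal sketch of what that suggested change amounts to, using the symbols named in the prompt (`githubPRCache`, `projectPath`, `cached`, `fetchedAt`); the cache shape and the injected fetcher are assumptions made so the sketch runs standalone, not the actual list.ts code.

```typescript
// Sketch: on fetch failure, serve the stale cached entry and re-stamp
// fetchedAt so the TTL restarts instead of retrying on every poll.
interface PRCacheEntry {
  prs: Map<string, unknown>;
  fetchedAt: number;
}

const githubPRCache = new Map<string, PRCacheEntry>();

async function fetchWithStaleFallback(
  projectPath: string,
  fetchPRsFromGitHub: (path: string) => Promise<Map<string, unknown>>,
): Promise<Map<string, unknown>> {
  try {
    const prs = await fetchPRsFromGitHub(projectPath);
    githubPRCache.set(projectPath, { prs, fetchedAt: Date.now() });
    return prs;
  } catch {
    const cached = githubPRCache.get(projectPath);
    if (cached) {
      // Postpone the TTL: keep the stale prs but refresh the timestamp
      githubPRCache.set(projectPath, { prs: cached.prs, fetchedAt: Date.now() });
      return cached.prs;
    }
    return new Map(); // no cache available: fall back to an empty map
  }
}
```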
🧹 Nitpick comments (1)
apps/server/src/routes/worktree/routes/list.ts (1)
190-204: Consider moving PR cache/fetch logic into a service layer.

This adds more business logic to a route handler; moving `fetchGitHubPRs` + cache to `services/` would align with the server architecture and keep routes thin. As per coding guidelines, consider delegating this logic from `routes/` to `services/`.
Address PR #688 review feedback from CodeRabbit: when a GitHub PR fetch fails and we return stale cached data, also update the fetchedAt timestamp. This prevents the original TTL from expiring and causing every subsequent poll to retry the failing request, which would still hammer GitHub during API outages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/server/src/routes/worktree/routes/list.ts`:
- Around line 254-263: When a fetch for GitHub PRs fails and there is no
existing cached entry, add a short negative-cache entry to githubPRCache for the
given projectPath (e.g., store prs: new Map() or empty map and fetchedAt:
Date.now()) so subsequent polls use the negative cache instead of hammering
GitHub; update the error logging that uses getErrorMessage(error) as before, and
ensure the negative TTL is short (e.g., seconds-to-minutes) by relying on the
same cache expiry mechanism so normal polling resumes after the window; make
this change in the same failure branch where you currently log "Failed to fetch
GitHub PRs" so callers of the function that reads githubPRCache will see the
empty map until expiry.
🧹 Nitpick comments (1)
apps/server/src/routes/worktree/routes/list.ts (1)
193-203: Consider normalizing `projectPath` for the PR cache key.

If callers pass path variants (trailing slashes, symlinks), you'll get duplicate cache entries and extra GitHub calls.

♻️ Suggested tweak

```diff
- const cached = githubPRCache.get(projectPath);
+ const cacheKey = normalizePath(projectPath);
+ const cached = githubPRCache.get(cacheKey);
@@
  if (!forceRefresh && cached && now - cached.fetchedAt < GITHUB_PR_CACHE_TTL_MS) {
    return cached.prs;
  }
@@
- githubPRCache.set(projectPath, {
+ githubPRCache.set(cacheKey, {
    prs: prMap,
    fetchedAt: Date.now(),
  });
@@
- githubPRCache.set(projectPath, { prs: cached.prs, fetchedAt: Date.now() });
+ githubPRCache.set(cacheKey, { prs: cached.prs, fetchedAt: Date.now() });
  return cached.prs;
}
```
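The tweak above references a `normalizePath` helper that isn't defined in the diff; one plausible minimal implementation using Node's `path` module might look like the following (resolving symlinks would additionally need `fs.realpathSync`, which is omitted here).

```typescript
import { resolve } from "node:path";

// Hypothetical cache-key normalizer: path.resolve() collapses "." / ".."
// segments and trailing slashes into one canonical absolute form, so
// "/proj/" and "/proj" map to the same cache entry.
function normalizePath(projectPath: string): string {
  return resolve(projectPath);
}
```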
```ts
// On fetch failure, return stale cached data if available to avoid
// repeated API calls during GitHub API flakiness or temporary outages
if (cached) {
  logger.warn(`Failed to fetch GitHub PRs, returning stale cache: ${getErrorMessage(error)}`);
  // Extend cache TTL to avoid repeated retries during outages
  githubPRCache.set(projectPath, { prs: cached.prs, fetchedAt: Date.now() });
  return cached.prs;
}
// No cache available, log warning and return empty map
logger.warn(`Failed to fetch GitHub PRs: ${getErrorMessage(error)}`);
```
Avoid retry storms when there’s no cached PR data.
If the first fetch fails (rate limit/outage) and there’s no cache, every poll will still hit GitHub. Consider negative-caching an empty map for a short window to suppress repeated retries.
🛠️ Suggested fix

```diff
 if (cached) {
   logger.warn(`Failed to fetch GitHub PRs, returning stale cache: ${getErrorMessage(error)}`);
   // Extend cache TTL to avoid repeated retries during outages
   githubPRCache.set(projectPath, { prs: cached.prs, fetchedAt: Date.now() });
   return cached.prs;
 }
 // No cache available, log warning and return empty map
 logger.warn(`Failed to fetch GitHub PRs: ${getErrorMessage(error)}`);
+const empty = new Map<string, WorktreePRInfo>();
+githubPRCache.set(projectPath, { prs: empty, fetchedAt: Date.now() });
+return empty;
```

🤖 Prompt for AI Agents
In `@apps/server/src/routes/worktree/routes/list.ts` around lines 254 - 263, When
a fetch for GitHub PRs fails and there is no existing cached entry, add a short
negative-cache entry to githubPRCache for the given projectPath (e.g., store
prs: new Map() or empty map and fetchedAt: Date.now()) so subsequent polls use
the negative cache instead of hammering GitHub; update the error logging that
uses getErrorMessage(error) as before, and ensure the negative TTL is short
(e.g., seconds-to-minutes) by relying on the same cache expiry mechanism so
normal polling resumes after the window; make this change in the same failure
branch where you currently log "Failed to fetch GitHub PRs" so callers of the
function that reads githubPRCache will see the empty map until expiry.
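Putting the negative-caching idea from that prompt into a self-contained sketch: the function and variable names here are illustrative, the fetcher is injected as an assumption, and `unknown` stands in for the real `WorktreePRInfo` value type.

```typescript
// Sketch of negative caching: when a fetch fails with no prior cache
// entry, store an empty map stamped with fetchedAt so polls within the
// TTL window return it instead of re-hitting GitHub.
const CACHE_TTL_MS = 2 * 60 * 1000;
const cache = new Map<string, { prs: Map<string, unknown>; fetchedAt: number }>();

async function fetchPRsNegativeCached(
  projectPath: string,
  fetchPRs: (path: string) => Promise<Map<string, unknown>>,
): Promise<Map<string, unknown>> {
  const cached = cache.get(projectPath);
  if (cached && Date.now() - cached.fetchedAt < CACHE_TTL_MS) {
    return cached.prs; // fresh hit, including negatively cached empty maps
  }
  try {
    const prs = await fetchPRs(projectPath);
    cache.set(projectPath, { prs, fetchedAt: Date.now() });
    return prs;
  } catch {
    // Negative cache: the empty result suppresses retries until the TTL lapses
    const empty = new Map<string, unknown>();
    cache.set(projectPath, { prs: empty, fetchedAt: Date.now() });
    return empty;
  }
}
```

As the bot notes, a real implementation would likely want a shorter TTL for negative entries than for successful fetches; this sketch reuses one TTL for simplicity.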
Summary
This PR fixes the GitHub API rate limiting issue caused by excessive worktree PR status fetching (#685).
Problem

The app was fetching GitHub PR status too frequently, causing rate limit errors:
- `gh pr list` command

Solution

Server-side PR Caching (list.ts)
- `githubPRCache` with 2-minute TTL

Frontend Polling Reduction (worktree-panel.tsx)

Type Improvements (use-worktrees.ts)
- `fetchWorktrees` signature to properly handle the `silent` option

Impact

Testing

Closes
Closes #685
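The frontend half of the fix (polling every 30 seconds instead of 5) can be sketched outside React as a plain interval; the names below are illustrative, while the real change lives in `worktree-panel.tsx` as a hook effect.

```typescript
// Illustrative polling loop: 30s interval (down from 5s), silent refetch,
// and a stop function mirroring a useEffect cleanup callback.
const WORKTREE_POLL_INTERVAL_MS = 30_000; // previously 5_000

function startWorktreePolling(
  fetchWorktrees: (opts: { silent: boolean }) => void,
): () => void {
  const id = setInterval(
    () => fetchWorktrees({ silent: true }),
    WORKTREE_POLL_INTERVAL_MS,
  );
  return () => clearInterval(id);
}
```

The `silent: true` option corresponds to the `fetchWorktrees` signature change described above: periodic refreshes should not flash loading UI the way a user-initiated refresh does.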
🤖 Generated with Claude Code
Summary by CodeRabbit
Performance Improvements
Developer-facing API