Work around bug in bigcache. #259
Conversation
Bigcache will use arbitrary amounts of memory if HardMaxCacheSize is less than the number of shards. Work around this by setting max size to the number of shards if it's too small. This means we can't have a cache smaller than 4MB.
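The workaround described above can be sketched as follows (a minimal illustration, not the PR's actual code; `clampMaxSize` is a hypothetical name, and the 4-shard count follows the PR description):

```go
package main

import "fmt"

// numShards mirrors the 4-shard setup described in the PR. Dividing a max
// size (in MB) smaller than the shard count truncates to zero, which
// bigcache treats as "unlimited", so clampMaxSize raises the max size to
// the shard count (4, i.e. a 4MB floor) instead.
const numShards = 4

func clampMaxSize(maxSizeMB int) int {
	if maxSizeMB < numShards {
		return numShards
	}
	return maxSizeMB
}

func main() {
	fmt.Println(2 / numShards)    // naive per-shard MB: truncates to 0
	fmt.Println(clampMaxSize(2))  // clamped to 4, so each shard gets 1MB
	fmt.Println(clampMaxSize(16)) // already large enough: unchanged
}
```

This is why the PR notes the cache can no longer be smaller than 4MB: the floor equals the shard count.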
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
pkg/uhttp/gocache.go (2)
210-215: LGTM! Consider adding more context in comments.

The fix correctly addresses the bigcache memory consumption issue by ensuring MaxSize is at least equal to the number of shards. This prevents the integer division from resulting in zero and causing unlimited shard sizes.
Consider expanding the comment to include:
- Link to the bigcache issue/PR if one exists
- Why 4MB is a safe minimum (relates to 4 shards)
- Impact of this workaround on cache behavior
```diff
- // BigCache's config.maximumShardSizeInBytes does integer division, which returns zero if there are more shards than megabytes.
- // Zero means unlimited cache size on each shard, so max size is effectively ignored.
- // Work around this bug by increasing the max size to the number of shards. (4, so 4MB)
+ // Work around a bigcache memory consumption bug:
+ // 1. BigCache's config.maximumShardSizeInBytes does integer division of MaxSize(MB) by number of shards
+ // 2. When MaxSize < Shards (4), the division returns zero
+ // 3. Zero means unlimited size for each shard, causing potential memory issues
+ // 4. Fix: Ensure MaxSize is at least 4MB (matching 4 shards) to maintain size limits
+ // Note: This ensures each shard gets 1MB minimum, preventing unlimited growth
```
210-215: Add tests to prevent regression.

Consider adding tests to verify the cache size behavior:
- Test with MaxSize < Shards
- Test with MaxSize = Shards
- Test with MaxSize > Shards
Would you like me to help generate these test cases?
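The three suggested cases can be sketched as a table-driven check (a hypothetical `clampMaxSize` stands in for the fix in pkg/uhttp/gocache.go; the 4-shard count follows the PR description):

```go
package main

import "fmt"

// shards follows the 4-shard setup described in the PR; clampMaxSize is an
// illustrative stand-in for the fix: it raises MaxSize to the shard count
// so the per-shard integer division can never truncate to zero.
const shards = 4

func clampMaxSize(maxSizeMB int) int {
	if maxSizeMB < shards {
		return shards
	}
	return maxSizeMB
}

func main() {
	cases := []struct {
		name string
		in   int
		want int
	}{
		{"MaxSize < Shards", 2, 4},
		{"MaxSize = Shards", 4, 4},
		{"MaxSize > Shards", 16, 16},
	}
	for _, c := range cases {
		if got := clampMaxSize(c.in); got != c.want {
			fmt.Printf("FAIL %s: got %d, want %d\n", c.name, got, c.want)
		} else {
			fmt.Printf("ok   %s: %dMB\n", c.name, got)
		}
	}
}
```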
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
pkg/uhttp/gocache.go (1 hunks)
🔇 Additional comments (1)
pkg/uhttp/gocache.go (1)
210-215: Verify the root cause in bigcache source.

Let's confirm the integer division behavior in bigcache that causes this issue.
✅ Verification successful
Integer division behavior in bigcache confirmed

The source code in vendor/github.com/allegro/bigcache/v3/config.go confirms the integer division issue:

maxShardSize = convertMBToBytes(c.HardMaxCacheSize) / c.Shards

When HardMaxCacheSize (in MB) is less than the number of shards, this division results in zero, making the workaround necessary.
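The truncation can be reproduced in isolation (this sketch divides at the MB level, as the PR comment describes it; `perShardMB` is an illustrative helper, and the exact expression in the vendored bigcache may differ):

```go
package main

import "fmt"

// perShardMB shows where MB-level integer division by the shard count
// truncates to zero: any configured size below the shard count yields a
// 0MB per-shard limit, which bigcache treats as unlimited.
func perShardMB(maxSizeMB, shards int) int {
	return maxSizeMB / shards // integer division: 0 when maxSizeMB < shards
}

func main() {
	const shards = 4
	for _, maxSizeMB := range []int{1, 3, 4, 8} {
		fmt.Printf("MaxSize=%dMB / %d shards -> %dMB per shard\n",
			maxSizeMB, shards, perShardMB(maxSizeMB, shards))
	}
}
```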
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify the integer division behavior in bigcache's maximumShardSizeInBytes calculation

# Search for the relevant code in bigcache
ast-grep --pattern 'func (c Config) maximumShardSizeInBytes() int {
  $$$
}'

# Also check for any related issues/PRs
gh api graphql -f query='
{
  search(query: "repo:allegro/bigcache memory unlimited size", type: ISSUE, first: 5) {
    nodes {
      ... on Issue {
        title
        url
        state
      }
      ... on PullRequest {
        title
        url
        state
      }
    }
  }
}'
```
Length of output: 1206