Add max-chunks-bytes-per-query limiter #4216
Changes from 1 commit
The commit changes the `QueryLimiter` struct:

```diff
@@ -25,7 +25,7 @@ type QueryLimiter struct {
 	uniqueSeriesMx sync.Mutex
 	uniqueSeries   map[model.Fingerprint]struct{}

-	chunkBytesCount *atomic.Int64
+	chunkBytesCount atomic.Int64

 	maxSeriesPerQuery     int
 	maxChunkBytesPerQuery int
```
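This switches `chunkBytesCount` from a `*atomic.Int64` to a plain `atomic.Int64` value. The `atomic.NewInt64(0)` call in the original constructor points at Uber's `go.uber.org/atomic` package, whose `Int64` zero value is documented as ready to use; that is what lets the second hunk below drop the explicit initialization. A minimal sketch of the difference, assuming that package:

```go
package main

import (
	"fmt"

	"go.uber.org/atomic"
)

type withPointer struct {
	count *atomic.Int64 // must be set (e.g. atomic.NewInt64(0)) before use, or Add panics on nil
}

type withValue struct {
	count atomic.Int64 // zero value is valid; no constructor step required
}

func main() {
	p := withPointer{count: atomic.NewInt64(0)}
	p.count.Add(5)

	var v withValue // no initialization needed
	v.count.Add(5)

	fmt.Println(p.count.Load(), v.count.Load()) // 5 5
}
```

Besides removing the nil-pointer hazard, the value form saves one small heap allocation per `QueryLimiter`.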
A review thread on the new `maxChunkBytesPerQuery int` field debated the integer width:

> This limits us to 2 GB (2^31 - 1 bytes) per query. Is it worth making this an unsigned int, which is about 4 GB (2^32 bytes) per query, or a 64-bit number?

> `int64` please. 4 GB is not that much. We may have use cases setting higher limits.

> On 64-bit systems, `int` is already a 64-bit value.

> I would be explicit like we do everywhere else.

> Should we also pass in an `int64` at the config/limit.go level? Or is leaving `NewQueryLimiter(int, int)` and casting the `maxChunkBytes` value to an `int64` ok?

> I don't think we're explicit "everywhere else". I think it would make sense to use `int` here.

> To your question Tyler: if you go with `int64`, it would make sense to use it at the config level too.

> Ok. Let's not block on this and keep `int`.
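The thread settled on keeping plain `int` in the signature; on 64-bit platforms Go's `int` is 64 bits wide, so the 2 GB ceiling raised in the first comment only bites on 32-bit builds. A hypothetical sketch of the cast Tyler describes (the `AddChunkBytes` method, the zero-disables convention, and the error text are illustrative, not taken from this PR):

```go
package limiter

import (
	"fmt"

	"go.uber.org/atomic"
)

type QueryLimiter struct {
	chunkBytesCount       atomic.Int64
	maxChunkBytesPerQuery int
}

// AddChunkBytes adds a chunk's size to the running per-query total and fails
// once the configured limit is exceeded. The limit travels as an int (as in
// NewQueryLimiter(int, int)) and is cast to int64 only at the comparison.
func (l *QueryLimiter) AddChunkBytes(chunkSizeInBytes int) error {
	if l.maxChunkBytesPerQuery == 0 {
		return nil // a zero limit means "disabled" (illustrative convention)
	}
	if l.chunkBytesCount.Add(int64(chunkSizeInBytes)) > int64(l.maxChunkBytesPerQuery) {
		return fmt.Errorf("query exceeded the max chunk bytes limit (%d bytes)", l.maxChunkBytesPerQuery)
	}
	return nil
}
```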
The constructor hunk drops the now-unnecessary initialization:

```diff
@@ -38,8 +38,6 @@ func NewQueryLimiter(maxSeriesPerQuery, maxChunkBytesPerQuery int) *QueryLimiter
 		uniqueSeriesMx: sync.Mutex{},
 		uniqueSeries:   map[model.Fingerprint]struct{}{},

-		chunkBytesCount: atomic.NewInt64(0),
-
 		maxSeriesPerQuery:     maxSeriesPerQuery,
 		maxChunkBytesPerQuery: maxChunkBytesPerQuery,
 	}
```
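With the pointer and its `atomic.NewInt64(0)` initialization gone, `NewQueryLimiter` only wires up the two limits. A hedged usage sketch, combining the constructor from the diff with the illustrative `AddChunkBytes` from above (the limit values and the `chunkSizes` input are made up):

```go
package limiter

// example shows one limiter guarding one query: every fetched chunk's size is
// added to the shared counter, and the query aborts once the budget is spent.
func example(chunkSizes []int) error {
	limiter := NewQueryLimiter(
		100_000,       // maxSeriesPerQuery (illustrative)
		1_000_000_000, // maxChunkBytesPerQuery: ~1 GB (illustrative)
	)
	for _, size := range chunkSizes {
		if err := limiter.AddChunkBytes(size); err != nil {
			return err
		}
	}
	return nil
}
```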
A final nit in the review:

> [nit] `userID`.