
Conversation

@emasab
Contributor

@emasab emasab commented Dec 19, 2025

Continuation of #418, with a different approach that:

  • multiplies 1 by the concurrency even when the round-up yields zero
  • keeps the logic for rounding up to the next multiple of (batch size * concurrency) separate
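The two points above can be sketched roughly as follows. This is an illustrative sketch only, not the code in lib/kafkajs/_consumer.js; the function and parameter names (computeCacheMaxSize, estimatedMessagesPerWorker, and so on) are hypothetical.

```javascript
// Hypothetical sketch of the cache-sizing logic described in this PR.
function roundUpToNearestMultiple(value, multiple) {
  return Math.ceil(value / multiple) * multiple;
}

function computeCacheMaxSize(estimatedMessagesPerWorker, concurrency, maxBatchSize) {
  // Guarantee at least one message per worker, even when the
  // consumption-rate estimate rounds down to zero.
  const perWorker = Math.max(1, estimatedMessagesPerWorker);
  // Separately, round up to the next multiple of (batch size * concurrency).
  return roundUpToNearestMultiple(perWorker * concurrency, maxBatchSize * concurrency);
}
```

With an estimate of 0 messages per worker, concurrency 4, and a max batch size of 32, this yields 128 rather than 0, so the fetch loop always has something to request.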

Copilot AI review requested due to automatic review settings December 19, 2025 10:50
@emasab emasab requested review from a team as code owners December 19, 2025 10:50

Copilot AI left a comment


Pull request overview

This PR addresses a potential edge case where the message cache size could be calculated as zero, leading to fetch failures. The fix ensures that each worker has at least one message in the cache while maintaining the existing round-up logic.

Key changes:

  • Introduces a Math.max() check to guarantee a minimum of 1 message per worker before applying the concurrency multiplier
  • Refactors the rounding logic into a separate #roundUpToNearestMultipleOfMaxBatchesSize method for better code organization
  • Updates CHANGELOG to document the fix
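The failure mode being fixed can be reproduced with a toy calculation (all numbers below are hypothetical, chosen only to illustrate the edge case): when the measured consumption rate is low enough, the rounded per-worker estimate collapses to zero, and multiplying zero by the concurrency leaves the cache empty.

```javascript
// Toy numbers, for illustration only.
const msPerMessage = 5000;            // one message every 5 seconds
const maxCacheSizePerWorkerMs = 1000; // cache 1 second of consumption per worker
const concurrency = 4;

// Rounded estimate of messages per worker: Math.round(0.2) === 0.
const estimate = Math.round(maxCacheSizePerWorkerMs / msPerMessage);

const buggySize = estimate * concurrency;              // 0: nothing is fetched
const fixedSize = Math.max(1, estimate) * concurrency; // 4: at least 1 per worker
```

The Math.max(1, …) guard is what turns the degenerate 0 into a usable minimum.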

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
lib/kafkajs/_consumer.js Adds minimum message guarantee and extracts rounding logic into helper method
CHANGELOG.md Documents the bug fix for low consumption rate scenarios


@airlock-confluentinc airlock-confluentinc bot force-pushed the dev/fix_zero_cache_max_size branch from 2389348 to aa9d606 Compare December 19, 2025 12:08
Code under review (lib/kafkajs/_consumer.js):

```js
1,
// Messages needed for `#maxCacheSizePerWorkerMs` milliseconds of consumption
// per worker
Math.round(
```
Member


We can use Math.ceil here instead.

Contributor Author


Yes, it's not a big change to always take the ceiling, given that the result is also rounded up to the next multiple of maxBatchSize later.
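The point above can be checked with a quick illustration (hypothetical numbers): the choice between Math.round and Math.ceil only changes the intermediate estimate by at most one, and the later round-up to a multiple of maxBatchSize usually absorbs that difference.

```javascript
// Hypothetical fractional per-worker estimate.
const estimate = 10.2;
const maxBatchSize = 32;

const rounded = Math.round(estimate); // 10
const ceiled = Math.ceil(estimate);   // 11

// Round up to the next multiple of maxBatchSize, as the PR's helper does.
const upFromRounded = Math.ceil(rounded / maxBatchSize) * maxBatchSize; // 32
const upFromCeiled = Math.ceil(ceiled / maxBatchSize) * maxBatchSize;   // 32
```

Both paths land on the same final size here, which is why switching to Math.ceil is a harmless simplification.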

Member

@pranavrth pranavrth left a comment


LGTM!

@emasab emasab merged commit fab45d2 into master Dec 19, 2025
2 checks passed
@emasab emasab deleted the dev/fix_zero_cache_max_size branch December 19, 2025 18:36