Fixed a bug resulting in a high number of PartitionKeyRange ReadFeed requests when using bulk execution without the Spark connector. #37920
Conversation
…out SparkConnector
…to users/fabianm/PKRangeFeedInvestigation
API change check: API changes are not detected in this pull request.
LGTM, thanks
…to users/fabianm/PKRangeFeedInvestigation
/azp run java - cosmos - tests
/azp run java - cosmos - spark
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 1 pipeline(s).
LGTM, thanks @FabianMeiswinkel
Description
The BulkExecutor, when used outside of Spark, was calling ContainerAsync.getFeedRanges to get physical partition information. This call would always be sent to the service instead of using the local cache. With this fix, bulk execution uses the cached PKRanges instead.
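The cache-first pattern the fix applies can be sketched as follows. This is an illustrative standalone example, not the SDK's actual implementation; the names `PkRangeCache` and `fetchRangesFromService` are hypothetical. The idea is that resolving partition key ranges consults a local cache and only falls back to a service request on a cache miss, instead of forcing a network round trip on every bulk operation.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of cache-first partition key range resolution.
public class PkRangeCache {
    private final Map<String, List<String>> cachedRanges = new ConcurrentHashMap<>();
    int serviceCalls = 0; // counts simulated network requests

    // Simulated service call; in the real SDK this would be a network request
    // to read the container's PartitionKeyRange feed.
    private List<String> fetchRangesFromService(String containerId) {
        serviceCalls++;
        return List.of(containerId + "-range-0", containerId + "-range-1");
    }

    public List<String> getRanges(String containerId) {
        // computeIfAbsent hits the service only once per container;
        // subsequent lookups are served from the local cache.
        return cachedRanges.computeIfAbsent(containerId, this::fetchRangesFromService);
    }

    public static void main(String[] args) {
        PkRangeCache cache = new PkRangeCache();
        cache.getRanges("orders");
        cache.getRanges("orders");
        cache.getRanges("orders");
        // Three lookups, but only one simulated service call.
        System.out.println(cache.serviceCalls);
    }
}
```

Under this pattern, repeated bulk operations against the same container no longer generate one PartitionKeyRange ReadFeed request each, which matches the behavior change described above.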
All SDK Contribution checklist:
General Guidelines and Best Practices
Testing Guidelines