Subpartitioning: Fixes handling of split physical partitions #3879
Merged: 10 commits merged into master from users/nalutripician/subpartitioningChangeFeedIteratorFix on Jun 12, 2023.
Commits:
af580e9 Initial Commit DO NOT REVIEW (NaluTripician)
ef246ba bug fix, needs Direct Package Changes (NaluTripician)
7c1dad1 Merge branch 'master' of https://github.com/Azure/azure-cosmos-dotnet… (NaluTripician)
afa3228 fix for change feed and query plus tests (NaluTripician)
4eb9c14 clean up (NaluTripician)
cc82322 query + clean up (NaluTripician)
b5a5a23 Merge branch 'master' into users/nalutripician/subpartitioningChangeF… (NaluTripician)
8ec68c7 Merge branch 'master' into users/nalutripician/subpartitioningChangeF… (NaluTripician)
a03832c Merge branch 'master' into users/nalutripician/subpartitioningChangeF… (kirankumarkolli)
1c4dfd5 Merge branch 'master' into users/nalutripician/subpartitioningChangeF… (NaluTripician)
Conversations:
Is this expected?
It's already answered for Matias; need to spend a little more time.
@kirankumarkolli - the question above is whether it is expected to use a range with both min and max inclusive. I think for a PartitionKeyFeedRange that has always been the case - but it is a good question. In Java we normalize to EPKs (always min inclusive / max exclusive) in query, change feed, etc., and wherever we need EPKs. We also created a helper function to create a NormalizedEPK (min inclusive / max exclusive), even publicly in the Spark Connector. So, agreed - it would be good to discuss whether we should just always use normalized EPKs.
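To make the normalization idea above concrete, here is a minimal Python sketch (names like `EpkRange` and `normalize` are illustrative, not the SDK's actual API) of converting an arbitrary EPK hash range into the min-inclusive / max-exclusive form the Java SDK uses:

```python
# Hypothetical sketch of EPK range normalization: not SDK code.
# EPKs are hex strings ordered lexicographically; a range is normalized
# so that min is inclusive and max is exclusive.
from dataclasses import dataclass

@dataclass(frozen=True)
class EpkRange:
    min: str            # hex-encoded effective partition key
    max: str
    min_inclusive: bool
    max_inclusive: bool

def successor(epk: str) -> str:
    """Smallest EPK strictly greater than `epk` (append the lowest digit)."""
    return epk + "0"

def normalize(r: EpkRange) -> EpkRange:
    """Return an equivalent range with min inclusive and max exclusive."""
    lo = r.min if r.min_inclusive else successor(r.min)
    hi = successor(r.max) if r.max_inclusive else r.max
    return EpkRange(lo, hi, min_inclusive=True, max_inclusive=False)

# A single-PK range [h, h] (both inclusive) normalizes to [h, h0),
# which still contains exactly the one EPK h.
r = normalize(EpkRange("3A", "3A", True, True))
```

The point of always normalizing is that every consumer (query, change feed, split handling) can compare ranges with one fixed convention instead of branching on inclusivity flags.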
A PK value (a single value), however, is min/max inclusive, with both min and max being the PK value's hash.
@ealsur that's my take-away from Nalu's last update.
Whereas EPK-to-physical-partition ownership is [Min, Max).
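To illustrate why the [Min, Max) convention matters after a split, here is a small Python sketch (illustrative only, not SDK code) of ownership checks when a parent range [A, C) splits into children [A, B) and [B, C):

```python
# Illustrative sketch: EPK ownership with min inclusive, max exclusive.
# A parent partition owning [A, C) splits into children [A, B) and [B, C).

def owns(pmin: str, pmax: str, epk: str) -> bool:
    """Ownership check with min inclusive, max exclusive: [pmin, pmax)."""
    return pmin <= epk < pmax

children = [("A", "B"), ("B", "C")]

# With [min, max) semantics, the boundary EPK "B" maps to exactly one child.
owners = [c for c in children if owns(c[0], c[1], "B")]
assert owners == [("B", "C")]

# If the check were max-INclusive, "B" would match both children,
# the kind of ambiguity around split physical partitions discussed here.
owners_incl = [c for c in children if c[0] <= "B" <= c[1]]
assert len(owners_incl) == 2
```

Exclusive upper bounds guarantee that the children's ranges tile the parent's range with no overlap, so routing a given EPK is unambiguous.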
@kirankumarkolli @ealsur Hi, was the conclusion that this is OK, or did it somehow slip through the review?
We are experiencing an issue with the same symptoms discussed in this PR in the latest version of the SDK (3.38.1) and in #3934 (SDK stuck in an infinite loop of updating pkranges after a partition split), and I can see in the current code of the SDK that both bounds are still inclusive. I can also see the same thing in CosmosQueryClientCore.GetCachedContainerQueryPropertiesAsync.
We are getting 410 Gone from the pkranges requests after a partition split happened on our DB, which puts the SDK into an infinite loop, so I am wondering if it could be caused by this.
We will create a new issue for it; just collecting any possibly related facts... please let me know if this is a bad clue or a good one.