
Internal PartitionKeyRangeHandler : Adds code to expose bug #1937

Closed

Conversation

@bchong95 (Contributor)

Description

This code reverses the priority of EPK ranges: it now favors the parent EPK range over the child. If PartitionKeyRangeHandler were working properly, it would return a split exception and the FeedRangeIteratorCore class would handle it. Currently it just sends the request to the first of the children and pretends nothing went wrong. This is a bug that will be caught in integration tests.

Related to: #1935
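For illustration, here is a minimal sketch of the handling the description expects, using hypothetical stand-in types (`EpkRange`, `SplitException`, the `drainPage` callback) rather than the SDK's real ones: when a split is signaled, the iterator replaces the parent range with its children and retries, instead of silently continuing against a single child.

```csharp
using System;
using System.Collections.Generic;

// Stand-in types for illustration only; names are hypothetical, not the SDK's.
public record EpkRange(string Min, string Max);

public sealed class SplitException : Exception
{
    public IReadOnlyList<EpkRange> ChildRanges { get; }

    public SplitException(IReadOnlyList<EpkRange> childRanges)
        : base("The requested EPK range was split into child ranges.")
    {
        this.ChildRanges = childRanges;
    }
}

public sealed class FeedRangeIteratorSketch
{
    private readonly Queue<EpkRange> pendingRanges = new();

    public FeedRangeIteratorSketch(EpkRange initialRange)
    {
        this.pendingRanges.Enqueue(initialRange);
    }

    // drainPage stands in for the per-range network call.
    public string ReadNext(Func<EpkRange, string> drainPage)
    {
        EpkRange current = this.pendingRanges.Peek();
        try
        {
            return drainPage(current);
        }
        catch (SplitException split)
        {
            // Handle the split: swap the parent range for its children and
            // retry, rather than silently draining only the first child.
            this.pendingRanges.Dequeue();
            foreach (EpkRange child in split.ChildRanges)
            {
                this.pendingRanges.Enqueue(child);
            }

            return this.ReadNext(drainPage);
        }
    }
}
```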

@ealsur (Member) left a comment

I don't see the point of merging a code change that forces an integration bug or breaks the behavior of components just to point at a design issue in one of them. Rather, propose a refactoring that you think is better and improve the current contracts instead?

@@ -182,16 +182,14 @@ public override async Task<ResponseMessage> ReadNextAsync(CancellationToken canc
streamPayload: stream,
requestEnricher: request =>
{
// FeedRangeContinuationRequestMessagePopulatorVisitor needs to run before FeedRangeRequestMessagePopulatorVisitor,
// since they both set EPK range headers and in the case of split we need to run on the child range and not the parent range.
FeedRangeRequestMessagePopulatorVisitor feedRangeVisitor = new FeedRangeRequestMessagePopulatorVisitor(request);
@ealsur (Member) commented on this diff:

I don't think this is a bug in PartitionKeyRangeHandler, but rather in the way FeedRange and FeedRangeContinuation work.

FeedRange describes the current range, but due to the backend's current design, when a split happens, FeedRangeContinuation will have independent continuations for sub-ranges (each representing one partition). So the order of precedence should be FeedRangeContinuation first (in case it has a sub-range selection after a split) and then FeedRange after.

So the EPK range sent through the wire is the one defined by FeedRangeContinuation.

When the backend supports EPK filtering, the FeedRangeContinuation can then be simply the Etag continuation, without any subrange list, so it won't set the EPK range. The range will then be set by the FeedRange visitor.
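A minimal sketch of that precedence, assuming first-writer-wins header semantics; the visitor shapes and header names here are illustrative, not the SDK's actual API:

```csharp
using System.Collections.Generic;

// Stand-in request type; the real visitors mutate RequestMessage headers.
public sealed class RequestSketch
{
    public Dictionary<string, string> Headers { get; } = new();
}

public static class EpkHeaderPrecedenceSketch
{
    // FeedRangeContinuation runs first: after a split it may carry a
    // narrowed sub-range (one child partition) that must win.
    public static void ApplyContinuation(RequestSketch request, string? childMin, string? childMax)
    {
        if (childMin is not null && childMax is not null)
        {
            request.Headers["x-ms-start-epk"] = childMin;
            request.Headers["x-ms-end-epk"] = childMax;
        }
    }

    // FeedRange runs second and only fills in headers the continuation
    // left unset, so the parent range never overrides a child selection.
    public static void ApplyFeedRange(RequestSketch request, string min, string max)
    {
        request.Headers.TryAdd("x-ms-start-epk", min);
        request.Headers.TryAdd("x-ms-end-epk", max);
    }
}
```

With first-writer-wins semantics, running the continuation visitor before the feed-range visitor means a post-split child selection is never widened back to the parent range; once the backend supports EPK filtering and the continuation carries no sub-range, the feed-range visitor's headers take effect instead.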

@bchong95 (Contributor, Author):

If you read the source code, it is clear that the code is reading the EPK range to try to route to the correct partition. This code existed well before FeedRange and FeedRangeContinuation, so I am not sure why you are mentioning these things.

@bchong95 (Contributor, Author):

> I don't see the point of merging a code change that forces an integration bug or breaks the behavior of components just to point at a design issue in one of them. Rather, propose a refactoring that you think is better and improve the current contracts instead?

There is no refactoring needed. From the linked issue it is clear that the code just needs to check the EPK range set in the request and honor it. It's a bug at the end of the day and nothing more.
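A minimal sketch of that check, assuming half-open EPK ranges compared ordinally and stand-in tuple types (the real handler deals with RequestMessage headers and PartitionKeyRange objects): if the request's EPK range overlaps more than one physical range, the handler surfaces the split rather than defaulting to the first child.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only; not the SDK's actual handler signature.
public static class PartitionKeyRangeHandlerSketch
{
    // Resolve the physical range that serves the EPK range set on the request.
    public static (string Min, string Max) Resolve(
        (string Min, string Max) requested,
        IReadOnlyList<(string Min, string Max)> physicalRanges)
    {
        var overlapping = new List<(string Min, string Max)>();
        foreach (var physical in physicalRanges)
        {
            bool overlaps = string.CompareOrdinal(physical.Min, requested.Max) < 0
                && string.CompareOrdinal(requested.Min, physical.Max) < 0;
            if (overlaps)
            {
                overlapping.Add(physical);
            }
        }

        if (overlapping.Count != 1)
        {
            // Honoring the request's EPK range means surfacing the split here
            // instead of quietly routing to the first overlapping child.
            throw new InvalidOperationException(
                "Requested EPK range does not map to exactly one physical range; a split likely occurred.");
        }

        return overlapping[0];
    }
}
```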

@bchong95 (Contributor, Author):

This code path no longer exists due to this PR:

#1947

@bchong95 closed this Oct 30, 2020
@ghost commented Dec 15, 2021

Closing due to inactivity, please feel free to re-open.
