LBP: Return Option<Shard> instead of Shard #969
Conversation
It looks like tests are not too happy with my changes...
Ok, I see what has changed - previously deduplication worked by accident, because tests set the shard count to 1 (so deduplication worked because shards were always equal). I'm also thinking about moving shard randomizing to another layer so that Plan is easier to use in tests.
Using

use std::hash::{Hash, Hasher};
use uuid::Uuid;

// `Shard` is the driver's shard index type.
struct DefaultPolicyTargetComparator {
    host_id: Uuid,
    shard: Option<Shard>,
}

impl PartialEq for DefaultPolicyTargetComparator {
    fn eq(&self, other: &Self) -> bool {
        match (self.shard, other.shard) {
            // A target without a shard is equal to any target on the same node.
            (_, None) | (None, _) => self.host_id.eq(&other.host_id),
            // Two sharded targets are equal only if both node and shard match.
            (Some(shard_left), Some(shard_right)) => {
                self.host_id.eq(&other.host_id) && shard_left.eq(&shard_right)
            }
        }
    }
}

impl Eq for DefaultPolicyTargetComparator {}

impl Hash for DefaultPolicyTargetComparator {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Hash by node only, so that same-node targets land in the same bucket
        // and the `eq` above decides whether they are duplicates.
        self.host_id.hash(state);
    }
}

fixes the test. This implements the previous semantics: we deduplicate sharded targets if they are fully equal (node and shard), and we deduplicate non-sharded targets if their node is equal to the node of one of the previous targets. My question is, just to make sure: is this the semantics we want? I guess it makes sense for a larger cluster, but for, e.g., a 3-node cluster with RF=3 we may want to query other shards on the replica nodes; right now the plan will just have 3 elements.
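To make that deduplication behavior concrete, here is a minimal, self-contained illustration (not driver code: it repeats the comparator above with `Shard` stood in by `u32`, and assumes the `uuid` crate with the `v4` feature) showing how targets would be deduplicated through a `HashSet`:

use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use uuid::Uuid;

// Stand-in for the driver's shard index type.
type Shard = u32;

struct DefaultPolicyTargetComparator {
    host_id: Uuid,
    shard: Option<Shard>,
}

impl PartialEq for DefaultPolicyTargetComparator {
    fn eq(&self, other: &Self) -> bool {
        match (self.shard, other.shard) {
            (_, None) | (None, _) => self.host_id == other.host_id,
            (Some(l), Some(r)) => self.host_id == other.host_id && l == r,
        }
    }
}

impl Eq for DefaultPolicyTargetComparator {}

impl Hash for DefaultPolicyTargetComparator {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.host_id.hash(state);
    }
}

fn main() {
    let node = Uuid::new_v4();
    let mut seen = HashSet::new();
    // A sharded target is inserted normally.
    assert!(seen.insert(DefaultPolicyTargetComparator { host_id: node, shard: Some(0) }));
    // Same node, different shard: kept as a distinct target.
    assert!(seen.insert(DefaultPolicyTargetComparator { host_id: node, shard: Some(1) }));
    // Same node, no shard: treated as a duplicate of an already-seen target and dropped.
    assert!(!seen.insert(DefaultPolicyTargetComparator { host_id: node, shard: None }));
}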
For now I added the aforementioned comparator; this should bring back the previous semantics if I'm not mistaken. I'd really like some feedback on that.
Force-pushed from 80d648f to 2589fa7.
Now that I think of it,
I think we should consider extending
You meant other targets, didn't you? If so, I think I agree. The fallback plan should give plenty of choices (targets) to try, provided the user has set a relatively large number of allowed retries. Another point of this discussion is then: how many targets?
OTOH, nobody ever complained about our plans containing too few targets.
Right, we should rename those things and adjust the documentation. I'll open an issue about it.
I'm not sure how you would want this to work. Plan is a list of targets. If
No, I meant nodes. If you have a cluster with 10 nodes and RF=3, then after trying the 3 initial targets you probably want to try one of the other nodes first, before going to other shards on the 3 already queried nodes.
Is the size a problem? Plan is an iterator anyway; is there any scenario where we need to materialize the whole plan?
Makes sense.
Well, nobody will complain if the driver is too resilient to harsh conditions.
Don't forget about
This was the idea I had.
Ah, right. I fully agree.
Yes, please.
     cluster: &'a ClusterData,
-) -> Option<(NodeRef<'a>, Shard)> {
+) -> Option<(NodeRef<'a>, Option<Shard>)> {
     self.report_node(Report::LoadBalancing);
-    cluster.get_nodes_info().iter().next().map(|node| (node, 0))
+    cluster
+        .get_nodes_info()
+        .iter()
+        .next()
+        .map(|node| (node, None))
The test semantics are changed here: from deterministic to random. This, however, does not break the tests that use this struct, because they only check which node was queried (the exact shard is irrelevant for them).
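To sketch what "random" amounts to here (a hypothetical helper for illustration only, not the driver's actual API): when the policy returns `None` for the shard, some later layer still has to choose a concrete shard, e.g. uniformly at random among the node's shards.

use rand::Rng;

// Hypothetical helper: resolve an optional shard returned by a policy into a
// concrete shard index, picking one at random when the policy returned `None`.
fn resolve_shard(picked: Option<u32>, nr_shards: u32) -> u32 {
    picked.unwrap_or_else(|| rand::thread_rng().gen_range(0..nr_shards))
}

fn main() {
    assert_eq!(resolve_shard(Some(3), 8), 3);
    assert!(resolve_shard(None, 8) < 8);
}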
Force-pushed from c877af8 to 2b565ab.
Force-pushed from 2b565ab to 5a8adce.
This was already documented as such, but due to an oversight the code was in disagreement with the documentation. The approach from the documentation is better, because the previously implemented approach prevented deduplication in Plan from working correctly.
Co-authored-by: Wojciech Przytuła <wojciech.przytula@scylladb.com>
As there is a brilliant `std::iter::from_fn()` function that creates a new iterator based on a closure, it can be used instead of the verbose boilerplate incurred by introducing IteratorWithSkippedNodes.
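As a rough illustration of the idea (a standalone sketch, not the driver's actual code, with the job of `IteratorWithSkippedNodes` reduced to skipping a list of node names), `std::iter::from_fn` lets a closure replace a hand-written iterator struct:

// Yield nodes in order, skipping the ones listed in `skipped`.
fn skip_nodes<'a>(
    nodes: &'a [&'a str],
    skipped: &'a [&'a str],
) -> impl Iterator<Item = &'a str> + 'a {
    let mut idx = 0;
    std::iter::from_fn(move || {
        while idx < nodes.len() {
            let node = nodes[idx];
            idx += 1;
            if !skipped.contains(&node) {
                return Some(node);
            }
        }
        None
    })
}

fn main() {
    let nodes = ["a", "b", "c", "d"];
    let skipped = ["b", "c"];
    let remaining: Vec<_> = skip_nodes(&nodes, &skipped).collect();
    assert_eq!(remaining, ["a", "d"]);
}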
`with_pseudorandom_shard` and `FixedOrderLoadBalancer` are no longer used. No need to keep them around.
Force-pushed from 5a8adce to e90f102.
This was already documented as such, but due to an oversight the code was in disagreement with the documentation. The approach from the documentation is better, because the previously implemented approach prevented deduplication in Plan from working correctly.
I didn't change deduplication in DefaultPolicy, so currently for unprepared statements everything is deduplicated, and for prepared statements:
This makes sense to me: if none of the replicas responded, then something is very wrong. We can try different nodes, but we might as well try different shards on replica nodes. I don't see any big benefits from either approach, but I might be wrong.
In DefaultPolicy code there are a lot of places where an `impl Fn...` is passed as a way to filter nodes. I changed some of the types so that:
- If a filter operates on targets, it accepts a `Shard` instead of an `Option<Shard>`. On the contrary, if a function operated only on nodes, its filter now accepts just `NodeRef`.
- `NodeRef` instead of `&NodeRef` - NodeRef is already just a reference and is `Copy`, so this gets rid of unnecessary indirection.

When reviewing I'd suggest focusing on `Shard` vs `Option<Shard>` vs absence of shard, `&NodeRef` vs `NodeRef`, `&(NodeRef, Shard)` vs `(NodeRef, Shard)`, etc.
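For illustration, a simplified sketch of the two filter shapes (stand-in `NodeRef` and `Shard` types, not the driver's actual definitions): a node-only filter takes `NodeRef` by value, while a target filter gets a definite `Shard`.

type Shard = u32;

// Simplified stand-in: the real NodeRef is already a cheap, Copy reference.
#[derive(Clone, Copy, Debug, PartialEq)]
struct NodeRef<'a> {
    name: &'a str,
}

// Filter that only cares about the node.
fn first_matching_node<'a>(
    nodes: &[NodeRef<'a>],
    predicate: impl Fn(NodeRef<'a>) -> bool,
) -> Option<NodeRef<'a>> {
    nodes.iter().copied().find(|&node| predicate(node))
}

// Filter that cares about the whole target (node + shard).
fn first_matching_target<'a>(
    targets: &[(NodeRef<'a>, Shard)],
    predicate: impl Fn(NodeRef<'a>, Shard) -> bool,
) -> Option<(NodeRef<'a>, Shard)> {
    targets
        .iter()
        .copied()
        .find(|&(node, shard)| predicate(node, shard))
}

fn main() {
    let a = NodeRef { name: "a" };
    let b = NodeRef { name: "b" };
    assert_eq!(first_matching_node(&[a, b], |n| n.name == "b"), Some(b));
    assert_eq!(
        first_matching_target(&[(a, 0), (a, 1)], |_, shard| shard == 1),
        Some((a, 1))
    );
}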
Fixes: #967
Pre-review checklist
- I have adjusted the documentation in ./docs/source/.
- I added appropriate Fixes: annotations to PR description.