#2995 followups #3193
Conversation
Force-pushed from 76a8e05 to 5ba12be.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #3193      +/-   ##
==========================================
+ Coverage   89.74%   91.31%   +1.57%
==========================================
  Files         122      129       +7
  Lines      101921   117718   +15797
  Branches   101921   117718   +15797
==========================================
+ Hits        91467   107495   +16028
+ Misses       7768     7591     -177
+ Partials     2686     2632      -54
```

☔ View full report in Codecov by Sentry.
```rust
@@ -1479,11 +1479,21 @@ where
			pending_peer_connected_events.shrink_to(10); // Limit total heap usage
		}

		let res = intercepted_msgs.into_iter().map(|ev| handler.handle_event(ev)).collect::<Vec<_>>();
		drop_handled_events_and_abort!(self, res, 0, self.pending_intercepted_msgs_events);
		if intercepted_msgs.len() == 1 {
```
`drop_handled_events_and_abort` only actually cares about having an iterator as the second argument, so we should be able to very marginally tweak it to just take an iterator rather than a vec. Ideally we could even have `MultiResultFuturePoller` return an iterator rather than a vec, but we'll probably need to wait for another decade of rustc features....
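A minimal sketch of that tweak, assuming a simplified stand-in for the real macro and a fake handler in place of the LDK types:

```rust
use std::collections::VecDeque;

// Hypothetical, simplified stand-in for the real macro: it only ever
// iterates its second argument, so anything iterable over `Result`s
// works -- no need to collect() into a Vec first.
macro_rules! drop_handled_events_and_abort {
	($results:expr, $queue:expr) => {
		for res in $results {
			match res {
				// Handled successfully: drop the event from the queue.
				Ok(()) => { $queue.pop_front(); },
				// Leave the failed event (and everything after it)
				// queued for replay, and abort early.
				Err(e) => return Err(e),
			}
		}
	};
}

fn handle_all(queue: &mut VecDeque<u32>, events: Vec<u32>) -> Result<(), ()> {
	// Lazily map each event through a (fake) handler that fails on 3.
	let results = events.into_iter().map(|ev| if ev < 3 { Ok(()) } else { Err(()) });
	drop_handled_events_and_abort!(results, queue);
	Ok(())
}

fn main() {
	let mut queue: VecDeque<u32> = (0..5).collect();
	let events: Vec<u32> = queue.iter().copied().collect();
	assert!(handle_all(&mut queue, events).is_err());
	// Events 0, 1, 2 were handled and dropped; 3 and 4 remain for replay.
	assert_eq!(queue.len(), 2);
}
```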
Hm, we discussed the use of an iterator over in #2995: we'd still need to take two iterators, one for the full collection and one 'skipped' iterator, which would mean preparing them outside of the macro, or moving the `any` check out of the macro.

I'm not the biggest fan of either option; at that point the macro would only hold 1-5 lines of code and we might as well break it up and just handle the special cases inline?
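For illustration (hypothetical shapes, not the actual macro body): a collected `Vec` lets the macro make both passes over the same buffer, while a single lazy iterator is consumed by the first pass:

```rust
// With a Vec, the abort check and the prefix-dropping pass can both
// borrow the same buffer.
fn with_vec(results: Vec<Result<(), ()>>) -> (bool, usize) {
	let any_failed = results.iter().any(|r| r.is_err()); // pass 1: `any` check
	let handled = results.iter().take_while(|r| r.is_ok()).count(); // pass 2
	(any_failed, handled)
}

// With a lazy iterator, the `any` check consumes elements, so a second
// pass needs a second iterator prepared by the caller -- hence "two
// iterators", or moving the `any` check out of the macro.
fn with_iter<I: Iterator<Item = Result<(), ()>>>(mut results: I) -> bool {
	let any_failed = results.any(|r| r.is_err());
	// `results` has now advanced past everything `any` inspected; a
	// take_while() here would miss those elements.
	any_failed
}
```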
Actually not sure why we want two iterators? If a `ConnectionNeeded` fails, we won't replay it, so why bother returning early or doing the error path?
Mhh, I found it a bit weird to completely ignore the user-returned error and proceed, even if we didn't replay the events. But yeah, it allows us to take a single iterator, i.e., avoid the allocation and also the need to iterate twice over the results. Now added a fixup.
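Roughly the shape the fixup can then take (illustrative names, not the exact LDK code): a single pass that drops each event whether or not its handler succeeded, since a failed event wouldn't be replayed anyway:

```rust
use std::collections::VecDeque;

// One pass over lazily-produced handler results: no up-front collect(),
// no separate abort check, no second iterator.
fn drain_handled<I>(results: I, queue: &mut VecDeque<u64>)
where
	I: Iterator<Item = Result<(), ()>>,
{
	for res in results {
		// Drop the event either way; on Err the user-returned error is
		// deliberately ignored, since the event won't be replayed.
		let _ = res;
		queue.pop_front();
	}
}
```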
Force-pushed from d8aa982 to 0a47516.
Force-pushed from 0a47516 to 3fe7c65.
Excuse the delay here, finally addressed the outstanding comment. Let me know if I can squash.
Please squash, yea.
Force-pushed from 3fe7c65 to c5c2cb1.
Squashed without further changes.
LGTM, just needs notifies.
Force-pushed from c5c2cb1 to 53a616b.
Force-pushed with added notifies, although had to rebase on …
Btw, I still think we might need to add a sleep (preferably with back-off?) in the BP, as always immediately notifying (e.g., on persistence failure) might result in ~busy-waiting if the failure reason persists, no?
It does, though it should usually be fine - writes shouldn't immediately fail unless we're, like, out of disk space, at which point we should really panic and crash, not keep trying. Mostly the errors, I assume, will be used by remote persistence, which will naturally sleep while we try to connect to the host that isn't responding. Those that use it because of things like being out of space will suffer, but I'm not sure how much we can do to save them - we could sleep 1ms to avoid the busy-wait, but their app is now malfunctioning and not working despite appearing kinda normal :/.
Yeah, I agree that for starters the current approach should be fine, although I think there are several improvements we could consider as follow-ups / when we discover users really require them. Another one would be that we now kind of require event handling to be idempotent, which might not always be trivial to ensure without keeping some state tracking to what extent you already handled an event previously. I could imagine that somewhere down the line users might benefit from the introduction of a unique event id, for example. In any case, going ahead and landing this for now.
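For reference, the back-off floated above could look something like this (a minimal sketch, not LDK's background processor; `process_events` and all names are illustrative):

```rust
use std::time::Duration;

// Hypothetical event-processing step: returns Err on e.g. a persistence
// failure, after which the processor is currently re-notified immediately.
fn process_events() -> Result<(), ()> {
	Err(())
}

fn main() {
	// Bounded exponential back-off so a persistent failure (e.g. a dead
	// remote store) doesn't turn immediate re-notification into a busy loop.
	let mut backoff = Duration::from_millis(1);
	const MAX_BACKOFF: Duration = Duration::from_secs(10);
	for _ in 0..5 {
		match process_events() {
			// On success, reset the back-off and wait for the next notify.
			Ok(()) => backoff = Duration::from_millis(1),
			// On failure, sleep before retrying, doubling up to a cap.
			Err(()) => {
				std::thread::sleep(backoff);
				backoff = (backoff * 2).min(MAX_BACKOFF);
			},
		}
	}
}
```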
Closes #3191.
We address two minor follow-ups for #2995.