#2995 followups #3193

Merged: 3 commits into lightningdevkit:main on Aug 22, 2024
Conversation

@tnull (Contributor) commented Jul 19, 2024:

Closes #3191.

We address two minor follow-ups for #2995.

@tnull mentioned this pull request on Jul 19, 2024
@tnull force-pushed the 2024-07-2995-followups branch 2 times, most recently from 76a8e05 to 5ba12be, on July 19, 2024 at 09:08
@tnull changed the title from "2024 07 2995 followups" to "#2995 followups" on Jul 19, 2024
codecov bot commented Jul 19, 2024:

Codecov Report

Attention: Patch coverage is 61.90476% with 16 lines in your changes missing coverage. Please review.

Project coverage is 91.31%. Comparing base (e4017c4) to head (53a616b).
Report is 141 commits behind head on main.

Files                                       Patch %   Lines
lightning/src/onion_message/messenger.rs    60.00%    16 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3193      +/-   ##
==========================================
+ Coverage   89.74%   91.31%   +1.57%     
==========================================
  Files         122      129       +7     
  Lines      101921   117718   +15797     
  Branches   101921   117718   +15797     
==========================================
+ Hits        91467   107495   +16028     
+ Misses       7768     7591     -177     
+ Partials     2686     2632      -54     


@@ -1479,11 +1479,21 @@ where
			pending_peer_connected_events.shrink_to(10); // Limit total heap usage
		}

		let res = intercepted_msgs.into_iter().map(|ev| handler.handle_event(ev)).collect::<Vec<_>>();
		drop_handled_events_and_abort!(self, res, 0, self.pending_intercepted_msgs_events);
		if intercepted_msgs.len() == 1 {
Collaborator commented on the diff above:
drop_handled_events_and_abort only actually cares about having an iterator as its second argument, so we should be able to very marginally tweak it to just take an iterator rather than a Vec. Ideally we could even have MultiResultFuturePoller return an iterator rather than a Vec, but we'll probably need to wait for another decade of rustc features...
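
For illustration, a minimal self-contained sketch of that tweak. The types, the simplified macro signature, and `process_events` are all hypothetical stand-ins (the real `drop_handled_events_and_abort` in messenger.rs takes different arguments); the point is just that a macro which only iterates its results once can accept any `IntoIterator`, so the intermediate `Vec<Result<...>>` disappears:

```rust
use std::sync::Mutex;

// Hypothetical stand-ins for the real event and error types.
type Event = u32;
struct HandlerError;

// Much-simplified sketch of the macro: it only iterates the handler results
// once, so it can accept any `IntoIterator` instead of requiring a `Vec`.
macro_rules! drop_handled_events_and_abort {
	($results: expr, $event_queue: expr) => {
		for (idx, res) in $results.into_iter().enumerate() {
			if res.is_err() {
				// Drop the events that were handled successfully and return
				// early; the remaining ones get replayed on the next call.
				$event_queue.lock().unwrap().drain(..idx);
				return;
			}
		}
		// Every event was handled successfully; empty the queue.
		$event_queue.lock().unwrap().clear();
	};
}

fn process_events(
	queue: &Mutex<Vec<Event>>, handler: impl Fn(Event) -> Result<(), HandlerError>,
) {
	let pending = queue.lock().unwrap().clone();
	// Lazily-evaluated iterator; no intermediate `Vec<Result<...>>`.
	let results = pending.iter().map(|ev| handler(*ev));
	drop_handled_events_and_abort!(results, queue);
}

fn main() {
	let queue = Mutex::new(vec![1, 2, 3]);
	// Fail on event `2`: event `1` is dropped as handled, `2` and `3` remain.
	process_events(&queue, |ev| if ev == 2 { Err(HandlerError) } else { Ok(()) });
	assert_eq!(*queue.lock().unwrap(), vec![2, 3]);
}
```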

@tnull (Contributor, author) replied Jul 19, 2024:

Hm, we discussed the use of an iterator over in #2995: we'd still need to take two iterators, one for the full collection and one 'skipped' iterator, which would mean preparing them outside of the macro, or moving the `any` check out of the macro.

I'm not the biggest fan of either option; at that point the macro would only hold 1-5 lines of code, and we might as well break it up and just handle the special cases inline?

Collaborator replied:

Actually not sure why we want two iterators? If a ConnectionNeeded fails, we won't replay it, so why bother returning early or doing the error path?

@tnull (Contributor, author) replied:

Hmm, I found it a bit weird to completely ignore the user-returned error and proceed, even if we don't replay the events. But yeah, it allows us to take a single iterator, i.e., avoid the allocation and also the need to iterate twice over the results. Now added a fixup.
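
As a rough sketch of what such a fixup could look like, with made-up types and function names (not the actual messenger.rs code): one lazy pass over the events, with the handler's result for never-replayed events deliberately dropped:

```rust
struct HandlerError;

// Hypothetical: `ConnectionNeeded`-style events are never replayed after a
// failure, so the handler's error carries no actionable information here.
fn process_connection_needed_events(
	events: &[String], handler: impl Fn(&str) -> Result<(), HandlerError>,
) {
	// A single pass over the events: no intermediate `Vec<Result<...>>`
	// allocation, no second iteration, and no early abort. A failed handler
	// call is deliberately ignored and processing simply continues.
	for ev in events {
		let _ = handler(ev);
	}
}

fn main() {
	let events = vec!["connect to node A".to_string(), "connect to node B".to_string()];
	process_connection_needed_events(&events, |ev| {
		println!("handling: {ev}");
		Err(HandlerError) // Ignored; the next event is still processed.
	});
}
```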

@jkczyz self-requested a review on July 19, 2024 at 13:49
@tnull force-pushed the 2024-07-2995-followups branch 4 times, most recently from d8aa982 to 0a47516, on July 22, 2024 at 07:30
@tnull (Contributor, author) commented Aug 21, 2024:

Excuse the delay here; I finally addressed the outstanding comment. Let me know if I can squash.

@TheBlueMatt (Collaborator) replied:

Please squash, yea.

@tnull force-pushed the 2024-07-2995-followups branch from 3fe7c65 to c5c2cb1 on August 21, 2024 at 18:19
@tnull (Contributor, author) commented Aug 21, 2024:

> Please squash, yea.

Squashed without further changes.

@TheBlueMatt (Collaborator) left a review:

LGTM, just needs notifies.

(Two review threads on lightning/src/onion_message/messenger.rs, both resolved.)
@tnull force-pushed the 2024-07-2995-followups branch from c5c2cb1 to 53a616b on August 21, 2024 at 19:58
@tnull (Contributor, author) commented Aug 21, 2024:

Force-pushed with the added notifies, although I had to rebase on main to make the notifier available.

@tnull (Contributor, author) commented Aug 21, 2024:

Btw, I still think we might need to add a sleep (preferably with back-off?) in the BP (background processor), as always immediately notifying (e.g., on persistence failure) might result in ~busy-waiting if the failure reason persists, no?

@TheBlueMatt (Collaborator) commented Aug 21, 2024:

It does, though it should usually be fine - writes shouldn't immediately fail unless we're, like, out of disk space, at which point we should really panic and crash, not keep trying. Mostly the errors, I assume, will be used by remote persistence, which will naturally sleep while we try to connect to the host that isn't responding. Those that use it because of things like out of space will suffer, but I'm not sure how much we can do to save them - we could sleep 1ms to avoid the busy-wait, but their app is now malfunctioning and not working despite appearing kinda normal :/.
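
For reference, a minimal sketch of the sleep-with-back-off idea discussed here. It is purely hypothetical and not what the background processor does; `retry_with_backoff`, the 1ms initial delay, and the 10s cap are all invented for illustration:

```rust
use std::time::Duration;

// Hypothetical helper: retry `attempt` until it succeeds, sleeping with
// exponential back-off between failures so a persistent error (e.g. an
// unreachable remote store) doesn't degrade into busy-waiting.
fn retry_with_backoff<E>(mut attempt: impl FnMut() -> Result<(), E>) {
	const MAX_DELAY: Duration = Duration::from_secs(10);
	let mut delay = Duration::from_millis(1);
	while attempt().is_err() {
		std::thread::sleep(delay);
		// Double the sleep on every consecutive failure, up to the cap.
		delay = (delay * 2).min(MAX_DELAY);
	}
}

fn main() {
	let mut failures_left = 3;
	// Simulated persistence call that fails three times, then succeeds:
	// we sleep ~1ms, 2ms, and 4ms instead of spinning on immediate retries.
	retry_with_backoff(|| {
		if failures_left > 0 {
			failures_left -= 1;
			Err("store unreachable")
		} else {
			Ok(())
		}
	});
	println!("persisted after back-off");
}
```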

@tnull (Contributor, author) commented Aug 22, 2024:

> It does, though it should usually be fine - writes shouldn't immediately fail unless we're, like, out of disk space, at which point we should really panic and crash, not keep trying. [...]

Yeah, I agree that for starters the current approach should be fine, although I think there are several improvements we could consider as follow-ups, or when we discover users really require them.

Another one would be that we now kind of require event-handling idempotence, which might not always be trivial to ensure without keeping some state on to what extent you already handled an event previously. I could imagine that somewhere down the line users might benefit from the introduction of a unique event id, for example (a rough sketch follows below).

In any case, going ahead and landing this for now.
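
To make the event-id idea concrete, a purely hypothetical sketch (LDK events carry no such id today; `Event`, `IdempotentHandler`, and the `id` field are invented for illustration): with a unique id, a handler can turn replays of already-handled events into cheap no-ops:

```rust
use std::collections::HashSet;

// Hypothetical event type carrying a unique id.
struct Event {
	id: u64,
	payload: String,
}

// A handler that tracks already-handled ids, so events replayed after a
// partially failed handling pass become cheap no-ops.
struct IdempotentHandler {
	handled: HashSet<u64>,
}

impl IdempotentHandler {
	fn handle_event(&mut self, ev: &Event) {
		// `insert` returns false if the id was already present, i.e. this
		// event was handled in an earlier pass and is now being replayed.
		if !self.handled.insert(ev.id) {
			return;
		}
		println!("handling event {}: {}", ev.id, ev.payload);
	}
}

fn main() {
	let mut handler = IdempotentHandler { handled: HashSet::new() };
	let ev = Event { id: 1, payload: "payment claimed".into() };
	handler.handle_event(&ev);
	handler.handle_event(&ev); // Replayed: detected by id, handled only once.
}
```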

@tnull merged commit 0d7ae86 into lightningdevkit:main on Aug 22, 2024 (19 of 21 checks passed)
Successfully merging this pull request may close: #2995 Followups

4 participants