
Onion Message Mailbox #2950

Open
1 of 2 tasks
valentinewallace opened this issue Mar 19, 2024 · 3 comments

valentinewallace commented Mar 19, 2024

Per #2298, we want to support the "onion message mailbox" feature. I.e., we want to be able to store onion message forwards on behalf of an offline next-hop peer and forward them later when the peer comes back online.

We'll need a way for the OnionMessenger to know which OM forwards to offline peers need to be stored for later, and a way to store and retrieve them when the offline peer comes back online. We need this feature because, per lightning/bolts#989, when the payer sends a held_htlc_available OM to the recipient, the recipient’s LSP should hold onto that OM for them until they come back online.

  • Basic onion message interception support
  • Fix the race condition where the peer disconnects right before we forward the re-injected onion message, causing it to be dropped. The solution is for the PeerManager to indicate that it has received a ping/pong after the OM was sent, at which point the OnionMessenger can generate another event indicating that the intercepted OM is safe to delete. This race should be relatively rare, but may be more common on mobile phones, where sockets get disconnected as soon as an app goes into the background.
valentinewallace self-assigned this Mar 19, 2024
valentinewallace commented

Discussed offline with Matt and want to get some conceptual feedback on this design to resolve the issue:

  1. User provides a whitelist of nodes for which to store OM forwards
  2. When the messenger receives an OM for a whitelisted offline peer, it will generate an Event containing the onion message. The user is then responsible for storing this onion message until the peer comes back online.
  3. When the peer comes back online, the user will provide the stored OM(s) back to the messenger for forwarding.
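
To make this concrete, the user-side flow could look roughly like the sketch below. All of the names here (the event variants and the re-injection hook) are placeholders for illustration, not settled API:

```rust
use std::collections::{HashMap, HashSet};

use bitcoin::secp256k1::PublicKey;

// Hypothetical event shapes for steps (1)-(3); names are placeholders.
enum MailboxEvent {
    /// An OM arrived for a whitelisted peer that is currently offline.
    OnionMessageIntercepted { peer_node_id: PublicKey, message: Vec<u8> },
    /// A previously-offline peer reconnected; stored OMs can be re-injected.
    OnionMessagePeerConnected { peer_node_id: PublicKey },
}

struct Mailbox {
    /// (1) Nodes whose OM forwards we're willing to store.
    whitelist: HashSet<PublicKey>,
    /// (2) Stored forwards, keyed by the offline next-hop peer.
    pending: HashMap<PublicKey, Vec<Vec<u8>>>,
}

impl Mailbox {
    fn handle_event(&mut self, event: MailboxEvent) {
        match event {
            MailboxEvent::OnionMessageIntercepted { peer_node_id, message } => {
                if self.whitelist.contains(&peer_node_id) {
                    self.pending.entry(peer_node_id).or_default().push(message);
                    // This is also where a mobile LSP could fire an Apple/Google
                    // push notification telling the recipient to come online.
                }
            }
            MailboxEvent::OnionMessagePeerConnected { peer_node_id } => {
                // (3) Hand the stored OMs back to the OnionMessenger for forwarding.
                for message in self.pending.remove(&peer_node_id).unwrap_or_default() {
                    reinject_into_messenger(&peer_node_id, message);
                }
            }
        }
    }
}

// Placeholder for whatever re-injection hook the messenger ends up exposing.
fn reinject_into_messenger(_peer: &PublicKey, _message: Vec<u8>) {}
```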

Rationale:

  • Using a whitelist in (1), as opposed to storing all forwards, helps prevent malicious nodes from DoS’ing us with bogus forwards. Another idea was to store forwards only for channel peers, but this wouldn’t work for users whose LSP intends to open a JIT channel to them but hasn’t yet done so.
  • For (2) and (3), it’s definitely a little awkward that the user has to store OM forwards themselves and later re-inject them into LDK. However, we want to do it this way so users can more tightly integrate their OM mailbox with sending Apple/Google notifications prompting the recipient to come back online, possibly including the OM in the notification itself. Users also get more control over their persistence and over which OMs get persisted. (@TheBlueMatt please check this explanation.)

Drawbacks:

  • Having to re-inject OMs into LDK
  • We don’t have any backpressure/rate limiting of event generation at the moment, so users will have to be smart about how they handle them unless/until we add backpressure. This is an “advanced” feature.

@arik-so @TheBlueMatt looking for conceptual feedback or ACKs on this design before putting up any code. @jkczyz feel free to weigh in too since this touches code you've worked on recently, but no pressure.

valentinewallace mentioned this issue Mar 20, 2024

jkczyz commented Mar 21, 2024

It would be possible to work this into OnionMessenger's existing OnionMessageRecipient buffering by adding another variant for forwarding to whitelisted peers. Currently, we use OnionMessageRecipient::PendingConnection to generate Event::ConnectionNeeded. So the new variant could be used to generate the new event and the existing buffering would mitigate DoS. Then the user could just call handle_onion_message with the contents of the event when the peer is back online.
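
Roughly along these lines, just to illustrate; this is simplified relative to the actual enum (which carries more state than shown here), and the new variant name is made up:

```rust
use std::collections::VecDeque;

use lightning::ln::msgs::OnionMessage;

// Simplified sketch of the suggestion; not the exact current definition.
enum OnionMessageRecipient {
    /// Peer is connected; buffered messages are drained to the peer.
    ConnectedPeer(VecDeque<OnionMessage>),
    /// Peer is disconnected; buffering here drives Event::ConnectionNeeded.
    PendingConnection(VecDeque<OnionMessage>),
    /// New: peer is disconnected but eligible for interception; buffering
    /// here would drive the new interception event instead.
    PendingIntercepted(VecDeque<OnionMessage>),
}
```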

Alternatively, you could have a larger buffer for these messages and generate an Event::ConnectionNeeded so that the messages are automatically delivered when the peer is back online. I guess the drawback is that the messages would be dropped if the node restarts or the buffer fills up? Though it puts less burden on the user as they just need to process the Event::ConnectionNeeded as normal or know to send an OS notification.

Regarding including the message in the event so the user can directly send the message with the OS notification: that does save the time spent waiting for the peer to connect. But in most cases, won’t the peer need to reconnect to respond to the message anyway? Are there any other benefits? It seems the main benefit of including the message in the event is persistence flexibility. So in a sense, we are pushing the mailbox implementation onto the user, no?

valentinewallace commented

Discussed offline, going with the event-based approach for now rather than storing OMs internally in LDK. We won't be using a whitelist internally in LDK either, but users should keep one and discard any OM interception events that aren't for relevant peers.

As a follow-up, we should indicate to users when an intercepted OM has been successfully forwarded and is safe to delete. Updated the issue description for this.
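
For illustration, user-side handling with that safe-to-delete follow-up could look roughly like this; the message-id scheme and both callbacks are placeholders, not actual LDK API:

```rust
use std::collections::HashMap;

use bitcoin::secp256k1::PublicKey;

// Intercepted OMs stay persisted across re-injection and are only removed
// once LDK signals that the forward actually succeeded (the follow-up above).
// The u64 message id is a made-up keying scheme for illustration.
struct PersistedMailbox {
    stored: HashMap<PublicKey, HashMap<u64, Vec<u8>>>,
}

impl PersistedMailbox {
    /// Called when an intercepted peer reconnects: re-inject, but don't delete yet.
    fn on_peer_connected(&self, peer: &PublicKey) {
        if let Some(oms) = self.stored.get(peer) {
            for (id, om) in oms {
                // Re-inject `om` into the OnionMessenger here; keep it stored
                // until the forward for `id` is confirmed.
                let _ = (id, om);
            }
        }
    }

    /// Called on the (future) "intercepted OM forwarded" signal.
    fn on_forward_succeeded(&mut self, peer: &PublicKey, id: u64) {
        if let Some(oms) = self.stored.get_mut(peer) {
            oms.remove(&id);
        }
    }
}
```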
