[feature]: Availability of UpdateAddHTLC ExtraData Across Restarts #7297
Comments
Mostly opening this to surface the information - feel free to close immediately! tl;dr:
(I think, could easily have missed something glaringly obvious)
Rather than just store the extra data, would think that we'd actually just parse out what we need in a structured manner to store elsewhere. IMO, any items in the extra data (TLV) should be considered required given they're attached to messages that drive the main channel state machine.
Would have thought this were just part of the onion, or the EC point at the start would have been overloaded for this purpose 🤔. Otherwise the HMAC (w/ the Associated Data) wouldn't cover this data, like it does the amt+cltv+payment hash, which could potentially open up a new free replay attack vector.
Yeah def, as we start to need specific fields pull them out and save them rather than try to store all the tlvs in all the places.
Meaning create a totally new bucket and just drop what we need in there?
The first blinding point is in the onion payload (for the "introductory" node), then relaying nodes receive it in an UpdateAddHTLC tlv.
Probably a dumb question, but can we ask the peer to send again? I think we do similar things for commitments, where based on the indexes we may resend the last commitment, so we might as well filter out a list of htlcs and resend them again (or do we already do this now? need to double check). If we want to survive a restart, I think we could just save the logs to disk. With the extra data, agree with laolu here that we need a new bucket for it. I think this would be an extra layer on top of lnwallet.
I think it's generally a good idea to use tlv.
I think this would be nice to avoid duplicate storage of the same info, but aren't they removed once the forwarding is finished? Nice docs btw (why I'm not surprised 🤓). I also drafted several docs here a while back and plan to change the confusing variable names.
This would need a protocol change for something that's working fine (we're just not storing it), which I don't think we want (+ it would be non-trivial).
iirc, both of these buckets are cleared out once we've locked in the new commitments, so once we've locked in the incoming + outgoing commitment we will no longer have access to the extra data? For things like reputation/upfront fees, we'll definitely need that info after that point.
If this were considered an acceptable approach, I'd def say that this is the path of least resistance. It seems a little taped together to not store it with the rest of the HTLC data.
Yeah, wouldn't be my recommended approach but it would work if desperate.
❤️ same to you! Looks like a study group in the making if y'all are still into that!
I think either the new bucket, or just the migration itself (leaning towards the migration). The migration may also be useful in the future for things like PTLCs, that'll require us to store more than just the payment hash per HTLC. The migration here may also be a bit tricky: we use the struct in a few different locations, at times storing something else directly after it within a stream. Ideally we can do the migration in an isolated change, so we can minimize the review surface area to make sure we get things right the first time around.
Same, feels like it's somewhat inevitable? Better bite the bullet early rather than have a bunch of ancillary buckets that would need to be migrated/accounted for as legacy.
def, will probably be quite a heavy migration since we'll need to overhaul every commitment for every channel :(
If we decide to go with the migration path, I think we could generalize the methods created here. FYI, I will take a deeper look into the blinded path to provide more meaningful feedback here.
Any further thoughts on this @yyforyongyu @Roasbeef? #7376 can work without this (with some workarounds), but I think it's far cleaner to bite the bullet and do the migration. If there are no objections, I'll start working on some code for it?
@carlaKC sorry it's been a while, will get back to you next week!
How would the migration look in terms of code complexity? That being said, my feeling is that a new bucket for the extra data is not bad at all. I think that has been done before, and we can just add a subbucket nested under the existing one that hosts the TLV data (and existing HTLCs will just have no data in this bucket).
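To make the sub-bucket idea concrete, here is a rough sketch using bbolt directly rather than lnd's kvdb wrappers; the bucket and key names are made up for illustration and don't correspond to lnd's real schema:

```go
package example

import (
	bolt "go.etcd.io/bbolt"
)

// Illustrative names only; these are not lnd's actual buckets.
var (
	openChannelBucket   = []byte("open-chan-bucket")
	htlcExtraDataBucket = []byte("htlc-extra-data")
)

// putHTLCExtraData stores the raw TLV blob for an HTLC under a nested
// sub-bucket, keyed by the HTLC's identifier. Existing HTLCs simply have
// no entry here, so no migration of old data is required.
func putHTLCExtraData(tx *bolt.Tx, htlcKey, tlvBlob []byte) error {
	chanBucket := tx.Bucket(openChannelBucket)
	if chanBucket == nil {
		return bolt.ErrBucketNotFound
	}

	extraBucket, err := chanBucket.CreateBucketIfNotExists(htlcExtraDataBucket)
	if err != nil {
		return err
	}

	return extraBucket.Put(htlcKey, tlvBlob)
}
```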
Code complexity pretty straightforward IMO, just inline encoding -> TLVs. The concern is more about the volume of data to be migrated - every HTLC, in every commitment, in every channel 😅
My original feeling was that we might as well bite the bullet since we'll want more flexible structure for PTLCs (eventually). But if we're reticent to do a big migration and @Roasbeef is okay with the extra bucket then I'm happy to go that way (not something we haven't done before).
Opened up #7710 to demonstrate a nifty workaround for stashing extra data inline with the HTLCs (h/t @Roasbeef for the suggestion). @yyforyongyu WDYT? I think this hits the nice in-between of not being too hacky and saving us the trauma of a big migration. Also looked into storing HTLCs in an extra bucket, but we store a lot of data inline (and in a few places) so it would need to be quite far away from HTLC serialization, so I don't think that's a great direction.
This is now closed by #7710?
Indeed!
Background
The `UpdateAddHTLC` message in LND is extended with `ExtraData`, which allows spec upgrades to transmit extra fields via a TLV stream. LND currently stores the extra data associated with a HTLC in our `LogUpdates` (where we store the full wire message), but not in `channeldb.HTLC` in our state machine. Understandably so, because there are no TLVs in use that need these fields and storing the full blob in the state machine would be unnecessary bloat.
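For illustration, here is a minimal sketch of what parsing a single field out of `ExtraData` could look like with lnd's `tlv` package; the type number and helper name are made up for this sketch, not taken from any spec or lnd code:

```go
package example

import (
	"bytes"

	"github.com/lightningnetwork/lnd/tlv"
)

// blindingPointType is a placeholder TLV type number; the real value would
// be assigned by the spec, not by this sketch.
const blindingPointType tlv.Type = 0

// extractBlindingPoint is a hypothetical helper that pulls a single record
// out of an UpdateAddHTLC's ExtraData blob so it can be stored in a
// structured way, rather than persisting the whole opaque byte slice.
func extractBlindingPoint(extraData []byte) ([]byte, error) {
	var blindingPoint []byte

	stream, err := tlv.NewStream(
		tlv.MakePrimitiveRecord(blindingPointType, &blindingPoint),
	)
	if err != nil {
		return nil, err
	}

	// Records with types we don't register are skipped during decoding,
	// so unrelated fields in the stream are left untouched.
	if err := stream.Decode(bytes.NewReader(extraData)); err != nil {
		return nil, err
	}

	return blindingPoint, nil
}
```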
Use Cases
There are a few upcoming specification proposals that start to use these fields:
- Route blinding: `blinding_point` is transmitted to peers as an `UpdateAddHTLC` tlv.
- Upfront fees are proposed to be transmitted as an `UpdateAddHTLC` tlv.
- Reputation/endorsement signals are proposed to be transmitted as an `UpdateAddHTLC` tlv.
Availability of Extra Data
While working on route blinding, we started to take a look at the availability of this data across restarts (since it's not stored in our state machine). Using the terminology outlined in the spec, this section outlines the availability of data at various stages of the HTLC lifecycle and how we can recover it after restart (if at all).
This example walks through the full lifecycle of a HTLC forwarded from A -> B -> C, from the perspective of B.
Incoming HTLC (A -> B)
Pending on the receiver
We have received an `UpdateAddHTLC` from our remote peer.
On Restart: We will forget the HTLC, because we have not ACKed it, and expect our remote peer to replay it.
In the receiver commitment:
We have received a `CommitmentSignature` from the remote peer that includes the HTLC.
On Restart: We will forget the HTLC, because we have not ACKed it, and expect our remote peer to replay it.
Irrevocably committed by the receiver:
We have sent a `RevokeAndAck` to the remote peer that revokes the state without the HTLC. The HTLC is now persisted in our commitment as a `channeldb.HTLC`.
On Restart: We restore the HTLC from the persisted `channeldb.HTLC`, which strips those fields.
🔧 Workaround: We can restore this data from our `unsignedAckedUpdates` (see route blinding example).
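A minimal sketch of that workaround, assuming we already have the channel's unsigned acked updates in hand as a `[]channeldb.LogUpdate` slice (the helper name is hypothetical):

```go
package example

import (
	"github.com/lightningnetwork/lnd/channeldb"
	"github.com/lightningnetwork/lnd/lnwire"
)

// extraDataFromUnsignedAcked scans a set of unsigned acked log updates for
// the UpdateAddHTLC with the given ID and returns its ExtraData, which is
// otherwise stripped when the HTLC is persisted as a channeldb.HTLC.
func extraDataFromUnsignedAcked(updates []channeldb.LogUpdate,
	htlcID uint64) (lnwire.ExtraOpaqueData, bool) {

	for _, update := range updates {
		// Only add messages carry the extra data we care about here.
		add, ok := update.UpdateMsg.(*lnwire.UpdateAddHTLC)
		if !ok {
			continue
		}

		if add.ID == htlcID {
			return add.ExtraData, true
		}
	}

	return nil, false
}
```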
In sender commitment
We have sent the remote peer a `CommitSignature` that includes the incoming HTLC. The `CommitDiff` for the signed remote commitment is persisted in the `openChannelBucket` under the `commitDiffKey`. This diff includes the incoming HTLC stored as a `channeldb.HTLC`.
On Restart: As above, we restore the HTLC to our `remoteUpdateLog` from our `localCommitment`. We can also use the workaround described above to restore any additional data attached to the HTLC.
Irrevocably committed by sender
We have received a `RevokeAndAck` from the remote peer that revokes state without the HTLC. The HTLC is stored as a `channeldb.HTLC`, and a `LogUpdate` for the incoming HTLC has been stored in our forwarding package under `fwdPackagesKey`.
On Restart: The `LogUpdate` is available in our forwarding package, but no longer in our `unsignedAckedUpdates`. This is not a problem for forwarding the HTLC, but will become an issue if we want to use the data on HTLC settle or fail.
Outgoing HTLC (B -> C)
Pending on the receiver
We have sent an `UpdateAddHTLC` to our remote peer.
On Restart:
In the receiver's commitment
We have sent a `CommitmentSignature` to the remote peer that includes the HTLC.
On Restart:
Irrevocably committed by the receiver
We have received a `RevokeAndAck` from the remote peer that revokes the state without the HTLC.
On restart: We restore the HTLC to our `localUpdateLog` from the remote commitment, so no extra data will be present. The HTLC is stored as a `channeldb.HTLC`, which strips those fields. This is not a problem for forwarding the HTLC, since we have already sent the peer an `UpdateAddHTLC`, but will be an issue if we want to interact with the data on settle/fail.

From this point onwards, we lose the extra data associated with the HTLC after restart. If you ~~like complex switches and you cannot lie~~ want to see the details of how I walked through this all, my full notes on the life and times of a HTLC in lnd are here!
Options to Explore
- Store the extra data in a new bucket in channeldb.
- Migrate `channeldb.HTLC` to a TLV stream, then store specific fields as they come into use in the protocol.
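As a sketch of what the second option could look like, here is a hedged example of TLV-based HTLC serialization using lnd's `tlv` package; the type numbers and the subset of fields shown are illustrative only, not the actual channeldb format:

```go
package example

import (
	"io"

	"github.com/lightningnetwork/lnd/tlv"
)

// Illustrative TLV type numbers; a real migration would pin these down as
// part of the channeldb serialization format.
const (
	typeAmt         tlv.Type = 0
	typeExpiry      tlv.Type = 1
	typePaymentHash tlv.Type = 2
	typeExtraData   tlv.Type = 3
)

// htlcRecord is a stand-in for the fields of channeldb.HTLC that would move
// into a TLV stream.
type htlcRecord struct {
	Amt         uint64
	Expiry      uint32
	PaymentHash [32]byte
	ExtraData   []byte
}

// encode writes the HTLC as a TLV stream, so new fields (blinding points,
// endorsement, upfront fees, PTLC data) can be added later without another
// breaking change to the on-disk format.
func (h *htlcRecord) encode(w io.Writer) error {
	stream, err := tlv.NewStream(
		tlv.MakePrimitiveRecord(typeAmt, &h.Amt),
		tlv.MakePrimitiveRecord(typeExpiry, &h.Expiry),
		tlv.MakePrimitiveRecord(typePaymentHash, &h.PaymentHash),
		tlv.MakePrimitiveRecord(typeExtraData, &h.ExtraData),
	)
	if err != nil {
		return err
	}

	return stream.Encode(w)
}
```

New fields would then be added by defining another record type, with older serializations simply omitting it.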