option_simple_close
(features 60/61)
#1205
base: master
Conversation
As described in #1096 (comment), the main question is whether we want to have stricter requirements on exchanging `shutdown`. @Roasbeef let me know what you think: I'm currently leaning towards your initial implementation, where you must receive `shutdown` before sending `closing_complete`.
I added the requirement to strictly exchange `shutdown`. This is implemented in ACINQ/eclair#2747; waiting for lnd for cross-compatibility tests (we may need to update the feature bit, either on the lnd side to use 60 or on the eclair side to use 160)!
I'm starting to look into implementing `option_simple_close` for LDK. For now I just have one question and a few nits after an initial round of review.
As pointed out by @TheBlueMatt during yesterday's spec meeting, we can greatly simplify the protocol by removing the `shutdown` exchange entirely. By doing that, the protocol becomes a trivial request/response protocol where nodes send `closing_complete` and expect `closing_sig` back. I've made those changes in aabde33 and implemented them in ACINQ/eclair#2967, and it is indeed much simpler. It should be quite simple to update an existing implementation of the previous version of the protocol to this version. I hope this will get rid of the unclear parts of the previous versions and be easier for reviewers and implementers! This protocol is compatible with taproot channels, with the following additions:

- when sending `shutdown`, nodes will include two random nonces: `closer_nonce` (used in their `closing_complete`) and `closee_nonce` (used in their `closing_sig`)
- when sending `closing_complete`, nodes will include a new random nonce for their next `closing_complete` (`next_closer_nonce`)
- when sending `closing_sig`, nodes will include a new random nonce for their next `closing_sig` (`next_closee_nonce`)

This ensures that nodes always have a pair of random nonces for their next signing round.
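A rough sketch of this request/response shape, with hypothetical class and method names (this is not the wire handling of any real implementation; the only state kept is the peer's latest script):

```python
# Hypothetical sketch of the simplified option_simple_close flow: each
# node only remembers the last script its peer sent, taken from the last
# received closing_complete (or from shutdown if none was received yet).

class SimpleClosePeer:
    def __init__(self, local_script: bytes, peer_script: bytes):
        self.local_script = local_script  # script for our own output
        self.peer_script = peer_script    # last script received from the peer

    def on_shutdown(self, script: bytes) -> None:
        # on (re)connection, shutdown refreshes the peer's chosen script
        self.peer_script = script

    def on_closing_complete(self, closer_script: bytes,
                            closee_script: bytes, fee_sat: int):
        # remember the latest script the closer wants to use
        self.peer_script = closer_script
        if closee_script != self.local_script:
            # both sides changed scripts concurrently: a rare race,
            # resolved by simply reconnecting
            return None
        # the closer pays the fee from its own output, so we just reply
        return ("closing_sig", fee_sat)
```

Each `closing_complete` is answered with a `closing_sig`; there is no loop back into a shutdown phase.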
When was `shutdown` ever persisted in the first place? Not sure about yours, but my implementation is stateless between `shutdown` iterations. What data were you remembering between iterations?
Can you recount what the supposed issue was with the existing flow? I looked at the diff, and it looks to add more fields to the closing messages vs keeping them single-purpose as they were before. Looking at the new ascii diagram, it looks like the
Good point, we were storing the `shutdown` script.
It's not stateless, because you had to remember which state of your state machine you were in (e.g. whether you've already sent and received `shutdown` or not, and whether you've sent `closing_complete`). The point of the last commit is that you don't need a state machine at all: everything happens in a single state, where you simply remember the last script your peer wants to use (and update it whenever you receive a new `shutdown` or `closing_complete`).
I added more fields to the closing messages, but they don't require state and are trivial to include; this isn't adding any complexity.
Not at all: you still of course need
Gotcha, this makes sense. Whether it's still a state machine or not is somewhat subjective and an implementation detail, but I think what's concrete here is that it no longer needs to loop back upon itself to re-enter the shutdown phase. As the responder you can just send the reply, but as the initiator you still shift between sending the request and waiting for it to complete. Will work on updating my implementation so we can finally get this thing merged!
Thanks for clearing up the semantics related to `OP_RETURN`! Before, it was a bit buried in the rationale section with only a passing reference in the requirements section. I propose we make things a bit more explicit in the requirements section; adding some suggestions as in-line comments.
02-peer-protocol.md (Outdated)
- If it does, the output value MUST be set to zero so that all funds go to fees, as specified in [BOLT #3](03-transactions.md#closing-transaction).
- If the closee's output amount is dust:
  - MUST set `closer_output_only`.
  - SHOULD NOT set `closer_and_closee_outputs`.
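The quoted requirement could be sketched as follows; the flat 546-satoshi threshold and the helper name are illustrative assumptions (BOLT 3 defines per-script-type dust thresholds):

```python
# Sketch: which closing_complete signature fields to set, based on
# whether each output clears the on-chain dust threshold.
DUST_LIMIT_SAT = 546  # illustrative; BOLT 3 defines real per-script limits

def sig_fields(closer_amount_sat: int, closee_amount_sat: int) -> list:
    fields = []
    closer_dust = closer_amount_sat < DUST_LIMIT_SAT
    closee_dust = closee_amount_sat < DUST_LIMIT_SAT
    if not closer_dust and not closee_dust:
        fields.append("closer_and_closee_outputs")
    if not closer_dust:
        # closee's dust output is simply omitted from the transaction
        fields.append("closer_output_only")
    if not closee_dust:
        fields.append("closee_output_only")
    return fields
```

When the closee's output is dust, only `closer_output_only` remains, matching the requirement above.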
Why is this a SHOULD NOT but not a MUST NOT given that the closee's output amount is dust?
Indeed, not sure why it was a `SHOULD NOT` in rusty's initial draft. Fixed in 798bbc3.
- SHOULD ignore `closing_complete`.
- SHOULD send a `warning`.
- SHOULD close the connection.
- If `closer_scriptpubkey` is a valid `OP_RETURN` script:
It doesn't specify what to do when it's not a valid `OP_RETURN`. I guess sending an error and closing the conn?
Added a requirement in 798bbc3.
The sender of `closing_complete` (aka. "the closer"):
- MUST set `fee_satoshis` to a fee less than or equal to its outstanding balance, rounded down to whole satoshis.
- MUST set `fee_satoshis` so that at least one output is not dust.
I still find this requirement a bit weird... since I can only influence my local output amount, following this strictly means I need to choose a fee such that my local output is not dust. Meanwhile we allow it to be dust, as long as `OP_RETURN` is used. And when using `OP_RETURN`, the fee I set becomes irrelevant.
> since I can only influence my local output amount

Yes, but you can do it based on the remote output amount:

- if the remote output amount isn't dust, there is no issue
- if the remote output amount is dust:
  - if the local output amount isn't uneconomical, there is no issue
  - if the local output amount is uneconomical, then you set your script to `OP_RETURN` (or you don't send `closing_complete` at all and wait for lower on-chain fees)

That sounds simple enough? Otherwise please suggest some changes you would find clearer.
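The decision tree above could be sketched like this; the flat dust threshold and the bare `OP_RETURN` script are simplifications for illustration:

```python
# Sketch of the closer's script choice: fall back to OP_RETURN only
# when both the remote output and our own post-fee output are dust.
DUST_LIMIT_SAT = 546  # illustrative on-chain dust threshold

def choose_closer_script(local_amount_sat: int, remote_amount_sat: int,
                         fee_sat: int, local_script: bytes) -> bytes:
    local_after_fee = local_amount_sat - fee_sat
    if remote_amount_sat >= DUST_LIMIT_SAT:
        return local_script  # remote output isn't dust: no issue
    if local_after_fee >= DUST_LIMIT_SAT:
        return local_script  # our output is economical: no issue
    # Both outputs would be dust: use OP_RETURN (alternatively, don't
    # send closing_complete at all and wait for lower on-chain fees).
    return b"\x6a"  # bare OP_RETURN script
```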
Pay your own fees, so the peer will sign without caring. Even if it doesn't relay. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The shutdown section says:
```
- MUST NOT send multiple `shutdown` messages.
```
But the reconnection section says:
```
- upon reconnection:
  - if it has sent a previous `shutdown`:
    - MUST retransmit `shutdown`.
```
So clearly, remove the former. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
You have to give them something which will propagate. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
…output. If both are dust, you should lowball fees. The next patch adds OP_RETURN as a valid shutdown scriptpubkey though if you really want to do this. This also addresses the case where people send a new `shutdown` with a *different* scriptpubkey. This could previously cause a race where you receive a bad signature (because it hasn't received the updated shutdown), so we ignore these cases. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This gets around "but both our outputs are dust!" problems, as recommended by Anthony Towns. I hope I interpreted the standardness rules correctly (technically, you can have multiple pushes in an OP_RETURN as long as the total is under 83 bytes, but let's keep it simple). Add an explicit note that "OP_RETURN" is never considered "uneconomic". Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- Make it clear why the OP_RETURN restrictions have two forms. - Cross-reference existing dust threshold - Lots of typo fixes - Don't set closer_and_closee if we're larger/equal, and closee is dust. - Remove Rationale on delete zero-output tx hack.
We don't care, as long as it's RBF-able. This will be nicer for Taproot when mutual closes are otherwise indistinguishable from normal spends. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Bitcoin Core version 25+ will not broadcast transactions containing `OP_RETURN` outputs if their amount is greater than 0, because this amount would then be unspendable. We thus require that the output amount is set to 0 when using `OP_RETURN`.
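The `OP_RETURN` constraints described above (zero output amount, total script under roughly 83 bytes per Bitcoin Core's datacarrier standardness rule) could be checked with a sketch like this; the helper name and exact byte bound are illustrative assumptions, not spec text:

```python
# Sketch: validate an OP_RETURN closing output per the notes above.
OP_RETURN = 0x6A

def valid_op_return_output(script: bytes, amount_sat: int) -> bool:
    if amount_sat != 0:
        return False  # non-zero value would be unspendable; Core 25+ rejects
    if not script or script[0] != OP_RETURN:
        return False
    # datacarrier standardness: keep the whole script small (assumed <= 83)
    return len(script) <= 83
```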
We always set `nSequence` to `0xFFFFFFFD`, but each node can choose the `nLockTime` they want to use for the transactions for which they are paying the fees.
- add more detailed protocol flow diagram
- rename sigs TLVs as suggested by @morehouse
- mention `upfront_shutdown_script` as suggested by @Crypt-iQ
- fix typos
- reformat
It was previously unclear whether a node could send `shutdown` and `closing_complete` immediately after that whenever RBF-ing their previous closing transaction. While this worked for non-taproot channels, it doesn't allow a clean exchange of fresh musig2 nonces for taproot channels. We now require that whenever a node wants to start a new signing round, `shutdown` must be sent *and* received before sending `closing_complete`.
Clarify strict `shutdown` exchange requirements and fix typos.
As pointed out by @TheBlueMatt, we can greatly simplify the protocol by removing the `shutdown` exchange entirely. The only piece of data nodes must remember is the last script their peer sent. This can be found in the last received `closing_complete`, or in `shutdown` if `closing_complete` was never received. This doesn't even need to be persisted, because on reconnection nodes will exchange `shutdown` again with the last script they want to use for their output.

By doing that, the protocol becomes a trivial request/response protocol where nodes send `closing_complete` and expect `closing_sig` back. This creates a race condition when both nodes update their script at the same time, but this will be extremely rare, so we can simply resolve it by reconnecting.

This protocol is compatible with taproot channels, with the following additions:

- when sending `shutdown`, nodes will include two random nonces:
  - `closer_nonce` that will be used in their `closing_complete`
  - `closee_nonce` that will be used in their `closing_sig`
- when sending `closing_complete`, nodes will include a new random nonce for their next `closing_complete` (`next_closer_nonce`)
- when sending `closing_sig`, nodes will include a new random nonce for their next `closing_sig` (`next_closee_nonce`)

This ensures that nodes always have a pair of random nonces for their next signing round.
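The nonce rotation described in these additions might look like the following sketch, using random bytes as stand-ins for real musig2 public nonces (class and method names are hypothetical):

```python
import os

def fresh_nonce() -> bytes:
    return os.urandom(32)  # stand-in for a real musig2 public nonce

# Sketch: each side always holds a pair of fresh nonces for the next
# signing round, replacing the one it just used in each message.
class NonceState:
    def __init__(self):
        # the pair initially sent in shutdown
        self.closer_nonce = fresh_nonce()  # for our next closing_complete
        self.closee_nonce = fresh_nonce()  # for our next closing_sig

    def send_closing_complete(self):
        used, self.closer_nonce = self.closer_nonce, fresh_nonce()
        return used, self.closer_nonce  # (nonce used now, next_closer_nonce)

    def send_closing_sig(self):
        used, self.closee_nonce = self.closee_nonce, fresh_nonce()
        return used, self.closee_nonce  # (nonce used now, next_closee_nonce)
```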
As suggested by @Roasbeef.
As suggested by @yyforyongyu.
Force-pushed from 922b61f to 798bbc3.
Rebased to fix a trivial formatting conflict in Bolt 9. We have successfully done cross-compat tests between eclair and lnd. @tnull can you take a look at the PR before we merge? @yyforyongyu can you take a look at the responses to your last comments?
Excuse the delay here. Basically LGTM.
Note: the details and requirements for the transaction being signed are in [BOLT 3](03-transactions.md#closing-transaction).
An output is *dust* if the amount is less than the [Bitcoin Core Dust Thresholds](03-transactions.md#dust-limits). |
Hmm, should this mention the exchanged `dust_limit_satoshis` values for this channel in particular? And, would it mean that we'd check the counterparty's values against their `dust_limit_satoshis`, and ours against ours?
We don't need to consider the `dust_limit_satoshis` from `open_channel` and `accept_channel` during a simple close; we only need to consider the standard on-chain dust limits, which is simpler. If one side considers that its output is uneconomical even though it is greater than the standard on-chain dust limit, it will omit it when choosing which version of the closing transaction to sign. This way it is even more dynamic than using the channel's local/remote `dust_limit_satoshis`, while ensuring that we always produce a transaction that can be relayed.
> We don't need to consider the `dust_limit_satoshis` from `open_channel` and `accept_channel` during a simple close, we only need to consider the standard on-chain dust limits, which is simpler. If one side considers that its output is uneconomical even though it is greater than the standard on-chain dust limit, it will omit it when choosing which version of the closing transaction to sign.

Ah, makes sense.

> This way it is even more dynamic than using the channel's local/remote `dust_limit_satoshis` while ensuring that we always produce a transaction that can be relayed.

Well, we still might not meet the mempool-min fee... but yeah, I see your point.
> Well, we still might not meet the mempool-min fee

But that's the responsibility of the closer: to pay enough fees to meet the mempool min fee, if it has enough balance in the channel to do so? If it doesn't have enough balance in the channel, it doesn't care about closing since it won't get any funds back...

Also, this protocol makes it easy to do RBF, so I don't think this should be an issue anymore?
> But that's the responsibility of the closer to pay enough fees to meet the mempool min fee, if it has enough balance in the channel to do so? If it doesn't have enough balance in the channel, it doesn't care about closing since it won't get any funds back...
Right, indeed the nice thing about this flow: it's simply the closer's own fault if it picks too low of a fee.
This PR is a continuation of #1096, which @rustyrussell asked me to take over. The original description was:

I believe it is still useful to review as separate commits; however, we initially allowed setting `nSequence`, which we removed in favor of setting `nLockTime`. That part can probably be skipped. I squashed the fixup commits from the previous PR, but kept the rest.