[r2r] Lightning batch transactions confirmations, spv (wip), refactors #1339
Conversation
…ightning-persister v0.0.106 and lightning-background-processor v0.0.106 wip
…from our codebase and use v0.0.106 crates instead
…onfirmations in batches
… db struct and filesystem struct
Thanks for the PR! It's the first review iteration, mostly questions.
Great work! First review iteration.
@artemii235 @ozkanonur this is ready for a second review iteration. Only this comment is unresolved #1339 (comment), since it will be left as is until I can fix it on the LDK side, if it can be fixed; planning to investigate it next week :)
Next review iteration 🙂
```rust
async fn get_tx_height(&self, tx: &UtxoTx) -> Result<u64, MmError<GetTxHeightError>> {
    for output in tx.outputs.clone() {
        let script_pubkey_str = hex::encode(electrum_script_hash(&output.script_pubkey));
```
Instead of requesting the history of some address (which can be very large), it should be possible to get the verbose tx and find the corresponding block height in the DB by block_hash. This way, you can also pass the transaction hash as an argument instead of the entire tx.
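The suggested lookup can be sketched as follows; all type and method names here (`VerboseTx`, `HeadersDb`, `height_by_hash`) are hypothetical stand-ins for illustration, not the actual atomicDEX-API types:

```rust
use std::collections::HashMap;

// Hypothetical shape of a verbose-tx RPC response: it carries the
// hash of the block that confirmed the transaction.
struct VerboseTx {
    block_hash: String,
}

// Hypothetical view of the block-header storage, indexed by block hash.
struct HeadersDb {
    heights: HashMap<String, u64>,
}

impl HeadersDb {
    fn height_by_hash(&self, hash: &str) -> Option<u64> {
        self.heights.get(hash).copied()
    }
}

// Resolve the tx height from the verbose tx plus local header storage,
// instead of scanning an address history. Only the txid would be needed
// to fetch `tx`, not the entire transaction.
fn get_tx_height(tx: &VerboseTx, db: &HeadersDb) -> Option<u64> {
    db.height_by_hash(&tx.block_hash)
}
```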
I will be able to do this only if block_headers_storage https://github.com/KomodoPlatform/atomicDEX-API/blob/510578bc5e1d09f7def5d087b28386c9f787156f/mm2src/coins/utxo/rpc_clients.rs#L1568 is enabled through block_header_params in the electrum request https://github.com/KomodoPlatform/atomicDEX-API/blob/510578bc5e1d09f7def5d087b28386c9f787156f/mm2src/coins/utxo.rs#L1296. This is how it was designed in the previous implementation of SPV proof, except that block_header_params was part of the coin config instead of the electrum request; I moved it to the electrum request because that makes much more sense, as block header storage is only needed for light clients.
Getting the tx height through the address history should be kept as a backup for the cases where block_headers_storage is not enabled, or where the corresponding block header is not saved to storage yet.
Done as I suggested above.
I assumed that this fn would be removed entirely 🙂 I think that should be possible if this change is implemented: #1339 (comment)
Thanks for the fixes! 2nd review iteration.
@artemii235 @ozkanonur This is ready for the next review iteration.
A couple more notes.
```diff
@@ -432,12 +458,23 @@ impl LightningEventHandler {
    },
};

let claiming_txid = claiming_tx.txid();
let tx_hex = serialize_hex(&claiming_tx);
if let Err(e) = tokio::task::block_in_place(move || self.platform.coin.send_raw_tx(&tx_hex).wait()) {
```
handle_spendable_outputs is not async, so why is block_in_place required here? Is the event handling task spawned on our async executor?
The BackgroundProcessor loop is spawned inside the start function https://github.com/lightningdevkit/rust-lightning/blob/d26640446f54342184fdfee9cc27efe9ce8f290e/lightning-background-processor/src/lib.rs#L208, which we use here https://github.com/KomodoPlatform/atomicDEX-API/blob/3fd5e14088ff9523873eb7ddeb4ca3b7c56ed6b4/mm2src/coins/lightning.rs#L709. Event handling is then done inside this function https://github.com/lightningdevkit/rust-lightning/blob/d26640446f54342184fdfee9cc27efe9ce8f290e/lightning-background-processor/src/lib.rs#L221 by calling handle_event https://github.com/lightningdevkit/rust-lightning/blob/d26640446f54342184fdfee9cc27efe9ce8f290e/lightning/src/ln/channelmanager.rs#L5535-L5537, which is the function that calls handle_spendable_outputs https://github.com/KomodoPlatform/atomicDEX-API/blob/3fd5e14088ff9523873eb7ddeb4ca3b7c56ed6b4/mm2src/coins/lightning/ln_events.rs#L65
I actually discovered this very early. Before I used block_in_place inside get_est_sat_per_1000_weight https://github.com/KomodoPlatform/atomicDEX-API/blob/3fd5e14088ff9523873eb7ddeb4ca3b7c56ed6b4/mm2src/coins/lightning/ln_platform.rs#L422, peers that I wanted to open a channel with kept disconnecting while fees for channel opening were being estimated, because ping/pong messages weren't sent/received in time due to blocking, which made channel opening always fail. This was solved when I used block_in_place.
> The BackgroundProcessor loop is spawned inside the start function https://github.com/lightningdevkit/rust-lightning/blob/d26640446f54342184fdfee9cc27efe9ce8f290e/lightning-background-processor/src/lib.rs#L208 which we use here
As I can see, this is started in a separate thread. How come that

> channel opening because ping/pong messages weren't sent/received in time due to blocking

Is there a chance that these somehow reach the tokio async runtime, blocking it? block_in_place simply blocks the current thread if not running on the tokio runtime:
https://github.com/tokio-rs/tokio/blob/master/tokio/src/runtime/thread_pool/worker.rs#L316
https://github.com/tokio-rs/tokio/blob/master/tokio/src/runtime/thread_pool/worker.rs#L355
So for blocking fns running in a separate thread, block_in_place should have no effect, but it does.
Apparently block_in_place was allowed to be used outside the tokio runtime in this PR tokio-rs/tokio#2410; I am still trying to figure out how this works :)
Also, to give more context on why I used block_in_place here https://github.com/KomodoPlatform/atomicDEX-API/blob/3fd5e14088ff9523873eb7ddeb4ca3b7c56ed6b4/mm2src/coins/lightning/ln_platform.rs#L422: as far as I remember, it was because of the advice in this issue tokio-rs/tokio#2289. The issue was closed after the above PR got merged.
> Apparently block_in_place was allowed to be used outside the tokio runtime in this PR tokio-rs/tokio#2410, I am still trying to figure out how this works :)
Yes, it simply blocks the current thread if not executed on the async runtime. But if it is required here, then I assume this code somehow reaches the tokio runtime, which shouldn't happen. I'm more curious about why it happens than worried that it's a real problem, though 🙂
Thanks for the comprehensive research! 🙂

> In conclusion, having a variable that stores the fees every few seconds and using it inside get_est_sat_per_1000_weight as stated above can solve a lot of problems, and only one task will be spawned for this.

Yeah, seems a solution 🙂 A small drawback is that you will likely have to call rpc_client().estimate_fee_sat multiple times per iteration, once for each possible ConfirmationTarget variant. Does it have to be called every few seconds? How frequently and significantly does the fee change on the BTC network?
> How frequently and significantly does the fee change on the BTC network?
By quickly checking the Bitcoin code, the fee seems to be the max of min_mempool_feerate and min_relay_feerate https://github.com/bitcoin/bitcoin/blob/6dc3084eec912cf2abfe18d1c05655defaa45e20/src/rpc/fees.cpp#L92, and min_mempool_feerate gets recalculated every 10 seconds or more https://github.com/bitcoin/bitcoin/blob/948f5ba6363fcc64f95fed3f04dbda3d50d61827/src/txmempool.cpp#L1110. It depends on the size of the mempool, so if a surge of transactions arrives at once, it can change significantly on the next recalculation.
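In other words, per the linked fees.cpp logic, the effective minimum is just the larger of the two rates. A minimal sketch, with a descriptive name rather than Bitcoin Core's:

```rust
// Sketch of the rule in bitcoin/src/rpc/fees.cpp: the minimum fee rate
// reported to estimators is the max of the mempool minimum and the
// relay minimum (both in sat/kvB here for illustration).
fn effective_min_feerate(min_mempool_feerate: u64, min_relay_feerate: u64) -> u64 {
    min_mempool_feerate.max(min_relay_feerate)
}
```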
> Does it have to be called every few seconds?
Of course not, but I am not sure what the best interval is; 10 seconds is the safest, but I think it can be increased. Also, this might be complicated, but maybe there is a calculation for the ConfirmationTarget that can be done in our code while requesting the fee only once instead of 3 times (once per ConfirmationTarget). I will look into it :)
PS you can do it on the next iteration 🙂
> Also this might be complicated but maybe there is a calculation for the ConfirmationTarget that can be done on our code while requesting the fee only one time instead of 3 times (one per every ConfirmationTarget)
Unfortunately, this can't be done, since a sample of the txs in the mempool is used when calculating the fee for a given ConfirmationTarget:
https://github.com/bitcoin/bitcoin/blob/6dc3084eec912cf2abfe18d1c05655defaa45e20/src/rpc/fees.cpp#L88
https://github.com/bitcoin/bitcoin/blob/6dc3084eec912cf2abfe18d1c05655defaa45e20/src/policy/fees.cpp#L865
https://github.com/bitcoin/bitcoin/blob/6dc3084eec912cf2abfe18d1c05655defaa45e20/src/policy/fees.cpp#L769
https://github.com/bitcoin/bitcoin/blob/6dc3084eec912cf2abfe18d1c05655defaa45e20/src/policy/fees.cpp#L269-L270
This also means the fee for a given target can change in less than 10 seconds, depending on the txs in the mempool. There is no alternative to an estimate-fee request for every ConfirmationTarget :(
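The cached-fees idea discussed above could look roughly like this. The ConfirmationTarget variants and every other name here are illustrative stand-ins, not LDK's or atomicDEX-API's actual APIs; a background task would call `refresh` periodically (one estimate-fee request per target), while `get_est_sat_per_1000_weight` would only read the cache:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical stand-in for LDK's confirmation targets.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum ConfirmationTarget {
    Background,
    Normal,
    HighPriority,
}

const ALL_TARGETS: [ConfirmationTarget; 3] = [
    ConfirmationTarget::Background,
    ConfirmationTarget::Normal,
    ConfirmationTarget::HighPriority,
];

// Cache holding one fee estimate per target.
#[derive(Default)]
struct FeeCache {
    fees: Mutex<HashMap<ConfirmationTarget, u64>>,
}

impl FeeCache {
    // Refresh every target; `estimate` stands in for one
    // estimate-fee RPC call per ConfirmationTarget.
    fn refresh(&self, estimate: impl Fn(ConfirmationTarget) -> u64) {
        let mut fees = self.fees.lock().unwrap();
        for target in ALL_TARGETS.iter().copied() {
            fees.insert(target, estimate(target));
        }
    }

    // Cheap read that performs no I/O, so the event handler can
    // call it without blocking on a network request.
    fn get(&self, target: ConfirmationTarget) -> Option<u64> {
        self.fees.lock().unwrap().get(&target).copied()
    }
}
```

The trade-off mirrors the discussion: three RPC calls per refresh interval, in exchange for never blocking the event-handling path on fee estimation.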
> PS you can do it on the next iteration 🙂
Added this to the issue's checklist #1045 (comment)
```diff
@@ -1678,6 +1707,9 @@ impl ElectrumClientImpl {

    /// Get available protocol versions.
    pub fn protocol_version(&self) -> &OrdRange<f32> { &self.protocol_version }

    /// Get block headers storage.
    pub fn block_headers_storage(&self) -> &Option<BlockHeaderStorage> { &self.block_headers_storage }
```
It seems worth making BlockHeaderStorage non-optional. I guess we would like to enable SPV proof validation for as many coins as possible, so initializing the storage by default makes sense. It should also ease the handling in various places, avoiding match and if let Some(_) = ....
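One possible shape for a non-optional storage: move the "SPV disabled" state into the type itself, so callers never unwrap an Option. This is a sketch under that assumption; all names are hypothetical, not the actual ElectrumClientImpl API:

```rust
// Always constructed; "disabled" is a state of the storage, not an
// Option around it. (Illustrative types only.)
enum BlockHeaderStorage {
    // SPV enabled: headers are synced and queryable.
    Enabled { last_height: u64 },
    // SPV disabled: constructed, but stores nothing and starts no sync.
    Disabled,
}

impl BlockHeaderStorage {
    fn last_height(&self) -> Option<u64> {
        match self {
            BlockHeaderStorage::Enabled { last_height } => Some(*last_height),
            BlockHeaderStorage::Disabled => None,
        }
    }
}

struct ElectrumClient {
    // No Option: callers never need `if let Some(storage) = ...`,
    // and the getter can return &BlockHeaderStorage directly.
    block_headers_storage: BlockHeaderStorage,
}
```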
There are multiple implications to this.
First is the storage requirement for mobile and web DEXes: although Bitcoin block headers consume space in the range of 50MB, that's not the case for other UTXO coins with much shorter block times. To solve this, we can keep only the last few validated headers and remove the rest, since we wouldn't need them to validate swap or channel transactions after some time has passed.
The second problem is that all block headers need to be downloaded before validating a transaction the first time the app is used. As a solution, we can provide a recent validated block header in the coins config to start the download from, as we discussed before, but we would need to update it in the coins config periodically. Also, when validating a transaction we have to wait for block header sync first; it would be ideal to sync block headers when enabling a coin, but this can only be done for coins that use the task manager for activation.
In conclusion, making BlockHeaderStorage non-optional might require a lot of code changes and still needs more thought about its implications, so it might be best to move it to the next sprint and not make it part of this PR, since SPV validation is not finished yet; I still need to use the difficulty calculations when validating headers.
> Also, when validating a transaction we have to wait for block headers sync first, it would be ideal to sync block headers when enabling a coin but this can be done only in the coins that use task manager in enabling them.
If SPV proof validation is disabled, we can still initialize the storage without starting the sync and using the storage space. It might be confusing, but it will still allow avoiding if let Some(storage) in many places; the Option is just overhead, because the storage must be initialized whenever SPV proof validation is turned on. There should be a way to handle it without an Option. But yeah, you can do it in the next sprint 🙂

> coins config that we can start the download from as we discussed before, but we would need to update this in the coins config periodically.

I might be wrong, but I think I didn't mention that we need to download block headers partially. I meant having a specific block difficulty as an anchor for PoW validation, while still keeping all headers in the DB.
> But yeah, you can do it on the next sprint 🙂
Added this to the issue's checklist #1045 (comment)
There are some TODOs still, but these are not blockers for this PR.
@ozkanonur There is a pending changes request from you, could you please check whether it was resolved?
Thanks for the fixes. LGTM 🔥
Remaining items added to the checklist in #1045:
- Persister and Persist traits to enable backup instead.
- join_all
- spv_validation crate: using next block difficulty to validate headers should be done in the next sprint.