Make SpendableOutput claims more robust #103
Conversation
53dd65e to f643837
I did manage to get one claim through this pipeline on testnet, but not a batch one. Would appreciate some review but will hold it until I get a batch claim through to actually merge.
src/main.rs
Outdated
let key = hex_utils::hex_str(&keys_manager.get_secure_random_bytes());
// Note that if the type here changes our read code needs to change as well.
let output: SpendableOutputDescriptor = output;
persister.persist(&format!("pending_spendable_outputs/{}", key), &output).unwrap();
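For context, the "read code" referenced in the comment above would deserialize each pending file back into a SpendableOutputDescriptor. A minimal sketch, assuming LDK's Readable impl for SpendableOutputDescriptor (module path per the LDK version in use at the time) and the same pending_spendable_outputs directory; the helper name is illustrative:

```rust
use std::io::Cursor;

use lightning::chain::keysinterface::SpendableOutputDescriptor;
use lightning::util::ser::Readable;

// Read every pending descriptor back from disk. If the type written above ever
// changes, this read side must change with it.
fn read_pending_outputs(pending_spendables_dir: &str) -> Vec<SpendableOutputDescriptor> {
	let mut descriptors = Vec::new();
	if let Ok(dir_iter) = std::fs::read_dir(pending_spendables_dir) {
		for file_res in dir_iter {
			let bytes = std::fs::read(file_res.unwrap().path()).unwrap();
			descriptors.push(Readable::read(&mut Cursor::new(bytes)).unwrap());
		}
	}
	descriptors
}
```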
Consider letting the persister implementation decide what kind of encoding/formatting it wants.
The persister trait already allows arbitrary keys; this is us specifying our own for this particular implementation.
Did you mean for the keys or for the object itself? Indeed, currently the persister trait allows arbitrary keys; we need to clean that up, but for now this is an easy way to write a file durably (i.e. doing the stupid fsync-file-then-fsync-folder thing you have to do).
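For reference, a minimal sketch of that durable-write pattern on a Unix filesystem (the function name, temp-file naming, and paths are illustrative, not the persister's actual implementation):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Durably write `data` to `dir/filename`: write a temp file, fsync it, rename it
// into place, then fsync the containing directory so the rename itself survives a crash.
fn write_durably(dir: &Path, filename: &str, data: &[u8]) -> std::io::Result<()> {
	fs::create_dir_all(dir)?;
	let tmp_path = dir.join(format!("{}.tmp", filename));
	let mut tmp_file = File::create(&tmp_path)?;
	tmp_file.write_all(data)?;
	tmp_file.sync_all()?; // fsync the file contents
	fs::rename(&tmp_path, dir.join(filename))?;
	File::open(dir)?.sync_all()?; // fsync the folder (works on Unix)
	Ok(())
}
```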
src/main.rs
Outdated
let pending_spendables_dir = format!("{}/pending_spendable_outputs", ldk_data_dir);
let processing_spendables_dir = format!("{}/processing_spendable_outputs", ldk_data_dir);
let spendables_dir = format!("{}/spendable_outputs", ldk_data_dir);
let spending_keys_manager = Arc::clone(&keys_manager);
let spending_logger = Arc::clone(&logger);
tokio::spawn(async move {
	let mut interval = tokio::time::interval(Duration::from_secs(3600));
	loop {
		interval.tick().await;
		if let Ok(dir_iter) = fs::read_dir(&pending_spendables_dir) {
			// Move any spendable descriptors from the pending folder so that we don't have any
			// races with new files being added.
			for file_res in dir_iter {
				let file = file_res.unwrap();
				// Only move a file if it's a 32-byte-hex'd filename, otherwise it might be a
				// temporary file.
				if file.file_name().len() == 64 {
					fs::create_dir_all(&processing_spendables_dir).unwrap();
					let mut holding_path = PathBuf::new();
					holding_path.push(&processing_spendables_dir);
					holding_path.push(&file.file_name());
					fs::rename(file.path(), holding_path).unwrap();
				}
			}
			// Now concatenate all the pending files we moved into one file in the
			// `spendable_outputs` directory and drop the processing directory.
			let mut outputs = Vec::new();
			if let Ok(processing_iter) = fs::read_dir(&processing_spendables_dir) {
				for file_res in processing_iter {
					outputs.append(&mut fs::read(file_res.unwrap().path()).unwrap());
				}
			}
			if !outputs.is_empty() {
				let key = hex_utils::hex_str(&spending_keys_manager.get_secure_random_bytes());
				persister
					.persist(&format!("spendable_outputs/{}", key), &WithoutLength(&outputs))
					.unwrap();
				fs::remove_dir_all(&processing_spendables_dir).unwrap();
			}
		}
This would also be persister-implementation-specific.
Not necessarily, we're already using a filesystem implementation of the persister, and this specific way of batching only applies to us. We could definitely provide a crate/module to do this type of batching in a more agnostic way, but that would be as part of rust-lightning.
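As a purely hypothetical illustration of what a storage-agnostic batching layer could look like (none of these types exist in rust-lightning, and real durability would still require persisting each queued item, which is exactly what the pending/processing directories above provide):

```rust
use std::sync::Mutex;

// Hypothetical storage-agnostic batcher: buffer serialized descriptors in memory
// and flush them as one keyed blob. Not a real rust-lightning interface.
trait KeyValueStore {
	fn write(&self, key: &str, value: &[u8]) -> std::io::Result<()>;
}

struct BatchingPersister<S: KeyValueStore> {
	store: S,
	pending: Mutex<Vec<u8>>,
}

impl<S: KeyValueStore> BatchingPersister<S> {
	// Queue one serialized descriptor; cheap enough to call from the event handler.
	fn queue(&self, serialized_descriptor: &[u8]) {
		self.pending.lock().unwrap().extend_from_slice(serialized_descriptor);
	}

	// Flush everything queued so far under a single key, e.g. once an hour.
	fn flush(&self, key: &str) -> std::io::Result<()> {
		let mut pending = self.pending.lock().unwrap();
		if pending.is_empty() {
			return Ok(());
		}
		self.store.write(key, pending.as_slice())?;
		pending.clear();
		Ok(())
	}
}
```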
);
// Don't bother trying to announce if we don't have any public channels, though our
// peers should drop such an announcement anyway.
if chan_man.list_channels().iter().any(|chan| chan.is_public) {
Should also check it has 6 confs, no?
Eh? I mean, yes, the announcement won't propagate until then, but I'm lazy. I commented it instead.
But it should be at least list_usable_channels, as otherwise we may end up broadcasting when we have no connected peers? And, if we miss it the first time around, we'd wait for an hour to try again? So maybe only try and tick if we have usable channels or !peer_man.get_node_ids().is_empty(), and sleep a bit otherwise?
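A rough sketch of that suggestion, reusing the chan_man, peer_man, and interval handles from the surrounding code; the 10-second retry is illustrative and the broadcast itself is elided:

```rust
loop {
	interval.tick().await;
	// Skip this tick if nobody would hear the announcement: no usable (i.e.
	// connected and funded) channels and no connected peers at all. Retry after
	// a short sleep rather than waiting for the next hourly tick.
	if chan_man.list_usable_channels().is_empty() && peer_man.get_node_ids().is_empty() {
		tokio::time::sleep(Duration::from_secs(10)).await;
		continue;
	}
	// ...broadcast the node_announcement here, as in the PR...
}
```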
I don't see an issue with that – this is more about making sure the broadcast is valid. Also, list_usable_channels isn't what we want here since we could be connected to non-channel peers and they should still receive our updates regardless of whether we're connected to our channel counterparties.
Right, I think we want to check whether we have any online connection; it doesn't have to be with the public channel's counterparty. The issue is that we may "waste" the broadcast tick when we're not connected and then only try again after an hour.
There's no issue with "wasting" the broadcast tick as we don't try again until the next tick anyway. We could add a bunch of complexity and retry the claim after 10 seconds instead of an hour and check whether we have peers, but I'm really not convinced it's worth it? This is only for public routing nodes anyway, which need to be online reliably with reasonable uptime, and broadcasts are only valid after an hour (6 blocks), so it's not like we're really in a rush to get an announcement out super quick. We can just try again in an hour.
Squashed and pushed with some comment fixes:
Should I go ahead and squash this?
Excuse the delay! Yes, please go ahead!
Even if we don't have any listen addresses, it's still useful to broadcast a node_announcement to get our node_features out there. Here we do this and also improve the timing of our node_announcement updates to be less spammy.
Rather than trying to sweep `SpendableOutputs` every time we see one, try to sweep them in bulk once an hour. Then, try to sweep them repeatedly every hour forever, even after they're claimed. Fixes lightningdevkit#102.
Rather than only trying once to claim a SpendableOutputDescriptor, try in a loop forever. This is important as feerates rise, since a sweep may fail and fall out of the mempool, leaving the funds lost forever.
This is currently entirely untested; we'll need to test it before we land it.
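A minimal sketch of that retry-forever behavior, reusing the spendables_dir and spending_keys_manager from the code above. read_spendable_descriptors and build_sweep_tx are hypothetical placeholders for the PR's file-reading and KeysManager-based spending logic, and broadcaster stands for the node's BroadcasterInterface:

```rust
tokio::spawn(async move {
	let mut interval = tokio::time::interval(Duration::from_secs(3600));
	loop {
		interval.tick().await;
		// Re-read every batched descriptor file on each pass. Files are kept even
		// after a sweep is broadcast, so if the transaction is evicted from the
		// mempool (e.g. because feerates rose), we rebuild it at a current feerate
		// and broadcast it again an hour later.
		for descriptors in read_spendable_descriptors(&spendables_dir) {
			if let Ok(sweep_tx) = build_sweep_tx(&spending_keys_manager, &descriptors) {
				broadcaster.broadcast_transaction(&sweep_tx);
			}
		}
	}
});
```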