
Conversation

@tnull (Collaborator) commented Jul 31, 2025

Previously, we might have overlooked some cases that would exit the syncing methods and bubble up an error via `?` instead of propagating it to all subscribers.

Here, we split the code out into inner methods to ensure we always propagate.

Should be easy to review via `git diff --color-moved --ignore-all-space`.
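
The resulting shape, as a minimal self-contained sketch (the `Node` struct, the `PollingStatus` registry, and the method names are illustrative stand-ins rather than ldk-node's actual types; the real methods are also async, which is elided here):

```rust
use std::sync::Mutex;

#[derive(Copy, Clone, Debug)]
enum Error {
	TxSyncFailed,
}

// Illustrative stand-in for the subscriber registry behind
// `wallet_polling_status`.
#[derive(Default)]
struct PollingStatus;

impl PollingStatus {
	fn propagate_result_to_subscribers(&mut self, _res: Result<(), Error>) {
		// Wake every registered subscriber with the final result.
	}
}

struct Node {
	wallet_polling_status: Mutex<PollingStatus>,
}

impl Node {
	// Public entry point: now the single place that notifies subscribers,
	// so no early return can skip propagation.
	fn sync_wallets(&self) -> Result<(), Error> {
		let res = self.sync_wallets_inner();
		self.wallet_polling_status.lock().unwrap().propagate_result_to_subscribers(res);
		res
	}

	// Inner method: free to exit early via `?` anywhere.
	fn sync_wallets_inner(&self) -> Result<(), Error> {
		self.poll_chain_data()?;
		// ... further syncing steps ...
		Ok(())
	}

	fn poll_chain_data(&self) -> Result<(), Error> {
		// Placeholder for the actual chain polling.
		Ok(())
	}
}
```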

tnull added 2 commits July 31, 2025 15:04, both with the message:

Previously, we might have overlooked some cases that would exit the method and bubble up an error via `?` instead of propagating to all subscribers. Here, we split out the code to an inner method to ensure we always propagate.
@tnull tnull requested a review from joostjager July 31, 2025 13:06
@tnull tnull self-assigned this Jul 31, 2025
@ldk-reviews-bot commented Jul 31, 2025

👋 Thanks for assigning @joostjager as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@tnull tnull moved this to Goal: Merge in Weekly Goals Jul 31, 2025
```rust
Err(e) => {
	log_error!(self.logger, "Failed to poll for chain data: {:?}", e);
	let res = Err(Error::TxSyncFailed);
	self.wallet_polling_status.lock().unwrap().propagate_result_to_subscribers(res);
```
joostjager (Contributor):

Regarding the cleanup here: does that mean the previous commit would send redundant updates?

tnull (Collaborator, Author):

I'm not quite sure what you mean? We previously propagated manually whenever we errored out, but now we can just bubble up the error, as we'll propagate to subscribers in the wrapping method.
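
Schematically (a hedged before/after sketch; `poll_chain_data` is a hypothetical helper, while the field and method names follow the diff above):

```rust
// Before: every error site inside the syncing method had to remember to
// propagate manually before returning, which was easy to miss.
if let Err(e) = self.poll_chain_data() {
	log_error!(self.logger, "Failed to poll for chain data: {:?}", e);
	let res = Err(Error::TxSyncFailed);
	self.wallet_polling_status.lock().unwrap().propagate_result_to_subscribers(res);
	// Assumes `Error: Copy`, so `res` can be both passed on and returned.
	return res;
}

// After: the inner method just bubbles the error up via `?`; the wrapping
// method propagates the final result to subscribers exactly once.
self.poll_chain_data().map_err(|e| {
	log_error!(self.logger, "Failed to poll for chain data: {:?}", e);
	Error::TxSyncFailed
})?;
```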

joostjager (Contributor):

Yes, but before the change in this commit (this specific commit), was the propagate method being called twice? I'm wondering whether all commits in the stack leave the code in a correct state.

tnull (Collaborator, Author):

Ah, yes, that might be true. Happy to squash the cleanup if you prefer.

tnull (Collaborator, Author):

Now went ahead and did so.

```rust
);
*self.latest_chain_tip.write().unwrap() = Some(tip);

periodically_archive_fully_resolved_monitors(
```
joostjager (Contributor):

Context question: the channel manager already knows the best block, and I assume it is updated somehow, but we still need an external trigger here to archive? It seems the responsibility now lies with two units instead of one.

tnull (Collaborator, Author):

So far, in LDK we've left when/how to archive up to the users. I guess we could consider making it part of the background processor eventually, but I don't think ChannelManager could do it (in the absence of KVStore access, etc.).
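
For reference, rust-lightning's `ChainMonitor` exposes `archive_fully_resolved_channel_monitors()`, and it is up to the caller to decide when to invoke it. A height-gated trigger might look roughly like this (the interval constant and the `latest_archival_height` bookkeeping are assumptions for illustration; the `ChainMonitor` type parameters are elided):

```rust
// Hypothetical archival interval in blocks (roughly once a day).
const ARCHIVAL_INTERVAL_BLOCKS: u32 = 144;

// `chain_monitor` stands for a fully parameterized
// `lightning::chain::chainmonitor::ChainMonitor<..>`.
fn periodically_archive_fully_resolved_monitors(
	chain_monitor: &ChainMonitor, latest_archival_height: &mut Option<u32>, cur_height: u32,
) {
	let archival_due = latest_archival_height
		.map_or(true, |last| cur_height >= last + ARCHIVAL_INTERVAL_BLOCKS);
	if archival_due {
		// Prunes `ChannelMonitor`s whose channels are fully resolved on-chain
		// (subject to rust-lightning's internal confirmation-depth threshold).
		chain_monitor.archive_fully_resolved_channel_monitors();
		*latest_archival_height = Some(cur_height);
	}
}
```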

joostjager (Contributor) commented Aug 1, 2025:

Ok. I was a bit confused about the two sources of block information being used (here the connection event, and then the height in the ChannelManager). But if this is how it is done for now, it's ok.

@tnull tnull force-pushed the 2025-07-fix-res-propagation-sync branch from cf990f8 to aae6e5d Compare August 1, 2025 14:03
@tnull tnull requested a review from joostjager August 1, 2025 14:05
@tnull (Collaborator, Author) commented Aug 1, 2025

Addressed pending comments.

joostjager (Contributor) left a comment:

LGTM. One outstanding comment about the stability of an intermediate commit, but non-blocking.

tnull added 4 commits August 1, 2025 16:39, sharing the message:

Previously, we might have overlooked some cases that would exit the method and bubble up an error via `?` instead of propagating to all subscribers. Here, we split out the code to an inner method to ensure we always propagate.
@tnull tnull force-pushed the 2025-07-fix-res-propagation-sync branch from aae6e5d to a6349a4 Compare August 1, 2025 14:39
@tnull tnull merged commit d2cadd0 into lightningdevkit:main Aug 1, 2025
19 of 24 checks passed
@github-project-automation github-project-automation bot moved this from Goal: Merge to Done in Weekly Goals Aug 1, 2025