Initial event handling implementation #10
Conversation
Force-pushed from fb52b66 to 562ae2a.
Force-pushed from 4815eba to 3c4db1b.
Force-pushed from 3c4db1b to 77940d9.
Rebased on main.
Didn't review the event handling itself, will need to look at some of the dependent PRs for more context. The PR description should be updated to note what this currently depends on.
Hmm, this doesn't really depend on anything now that #8 has been merged. However, essentially everything after it (as in "issue number is higher") is dependent on this PR.
Some of the components used within event handling don't exist in main yet.
Ah, right, missed that one. Will update the description.
Force-pushed from 8312428 to fdef833.
src/event.rs
Outdated

```rust
impl Writeable for PaymentSuccessfulEvent {
    fn write<W: Writer>(&self, writer: &mut W) -> Result<(), lightning::io::Error> {
        self.payment_hash.write(writer)?;
```
We need some kind of upgradability story for all the events. IMO just do TLV-encoding for everything from day one; it's simpler (but a bit slower... I really wish TLV encoding had a fixed-length type/length).
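To make the TLV suggestion concrete, here is a minimal, self-contained sketch of type-length-value records. The helper names and the one-byte length are hypothetical simplifications; rust-lightning's real encoding uses variable-size BigSize lengths and macros such as `impl_writeable_tlv_based!`.

```rust
// Hypothetical TLV helpers, NOT the actual ldk-lite/rust-lightning API.
fn write_tlv(out: &mut Vec<u8>, typ: u8, value: &[u8]) {
    out.push(typ); // 1-byte type tag
    out.push(value.len() as u8); // 1-byte length (real TLV uses BigSize)
    out.extend_from_slice(value);
}

fn read_tlv(input: &[u8]) -> Option<(u8, &[u8], &[u8])> {
    let (&typ, rest) = input.split_first()?;
    let (&len, rest) = rest.split_first()?;
    if rest.len() < len as usize {
        return None;
    }
    let (value, remainder) = rest.split_at(len as usize);
    // Because every record carries its own length, a reader can skip
    // records with unknown type tags, which is what enables upgrades.
    Some((typ, value, remainder))
}
```

The upgradability win is in the length prefix: an old reader can step over fields written by a newer version instead of failing to parse.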
So, IIUC, we'd follow the same approach as currently (as we can't yet make use of the macros), but just also write out the length prefix for each event?
Right, we could also expose the macro upstream? But, I mean, exact compat semantics is kinda up to you. #10 (comment)
```rust
        locked_queue.0.front().unwrap().clone()
    }

    pub(crate) fn event_handled(&self) -> Result<(), Error> {
```
What if events are handled async? Don't we need some kind of EventId here?
Hmm, I thought we had discussed that through this (blocking) interface we want to force users to handle events sequentially, i.e., allow them to only make progress once they've handled an event? IIUC, async event handling would mean we'd keep events in a persisted hash map and allow querying for a list of all unhandled events?
Hmm, fair enough. Shouldn't that mean that next_event shouldn't return a new event unless all pending events have been handled?
Yes. It would currently always return the front of the queue until it is popped in event_handled.
Ah, hmm, that's a confusing API IMO. I guess it depends on how this is exposed publicly, but needs good docs.
Yes, we definitely need to document that behavior nicely. However, if the app crashes before the event is handled and we don't return the event again, we'd need yet another method to allow the user to look it up. I therefore think just returning the same event until handled is probably the most straightforward approach towards ensuring the user has to handle all events and can recover/resume after a failure.
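The semantics discussed here, block until an event exists and keep returning the front entry until it is explicitly marked handled, can be sketched with a Condvar-backed queue. Names are hypothetical and persistence is omitted; this is not the actual ldk-lite implementation.

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

// Simplified sketch of the re-delivery semantics described above.
struct EventQueue<E: Clone> {
    queue: Mutex<VecDeque<E>>,
    notifier: Condvar,
}

impl<E: Clone> EventQueue<E> {
    fn new() -> Self {
        Self { queue: Mutex::new(VecDeque::new()), notifier: Condvar::new() }
    }

    fn add_event(&self, event: E) {
        self.queue.lock().unwrap().push_back(event);
        self.notifier.notify_one();
    }

    // Blocks until an event is available, then returns the FRONT entry
    // without removing it: a crash before `event_handled` means the same
    // event is delivered again on restart.
    fn next_event(&self) -> E {
        let locked_queue = self
            .notifier
            .wait_while(self.queue.lock().unwrap(), |q| q.is_empty())
            .unwrap();
        locked_queue.front().unwrap().clone()
    }

    // Only now is the event actually removed from the queue.
    fn event_handled(&self) {
        self.queue.lock().unwrap().pop_front();
    }
}
```

Calling next_event twice without event_handled in between returns the same event, which is exactly the at-least-once delivery guarantee being argued for.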
Tests are a nice way to document. 🙂 Could you add some either here or possibly as doc tests, if that makes sense.
Now added a unit test here for the EventQueue with 1566dcb. (Also rebased #11 on top of this, so you can have a look at https://github.com/lightningdevkit/ldk-lite/actions/runs/3413412794/jobs/5680088292 to check the CI run.)
As the documentation of the top-level API will be in lib.rs, I took a TODO to possibly add a doctest/example there.
src/event.rs
Outdated
```rust
let forwarding_channel_manager = self.channel_manager.clone();
let min = time_forwardable.as_millis() as u64;

// TODO: any way we still can use tokio here?
```
Yes, please. Tokio has a nice, simple timer interface to use here. It's not clear to me how these types are used, so it's less clear whether it's doable, but I assume we'll have a tokio runtime we can pass into the handler struct.
Yup, will address this in an upcoming "make everything async" PR in which I'll also switch the calls to BDK over to async. Should be noted though that this is not entirely trivial, since the tokio runtime won't be available when we initialize the event handler, so we'll have to pass it around after the fact, i.e., when start() gets called and the runtime is set up.
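For context, the pre-async behavior being replaced can be sketched with a plain thread (function name hypothetical). The async PR would instead spawn a task on the runtime and use tokio::time::sleep.

```rust
use std::thread;
use std::time::Duration;

// Sketch of deferring HTLC forwarding by the delay LDK requests.
// `defer_forward` is a hypothetical helper, not the actual ldk-lite API.
fn defer_forward<F: FnOnce() + Send + 'static>(min_wait_millis: u64, forward: F) {
    thread::spawn(move || {
        // LDK asks us to wait at least this long (ideally with some random
        // jitter added on top) before forwarding pending HTLCs.
        thread::sleep(Duration::from_millis(min_wait_millis));
        forward();
    });
}
```

With a tokio runtime handle available after start(), the thread::spawn/sleep pair would become runtime.spawn(async move { tokio::time::sleep(dur).await; ... }).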
And here's the upcoming PR: #26
Force-pushed from a9fd684 to a4922da.
```rust
//6u8 => {
//	// TODO OnChainPaymentReceived
//}
_ => Err(lightning::ln::msgs::DecodeError::InvalidValue),
```
Do you want to do the thing LDK/lightning does and have the ability to ignore some types for forwards compatibility?
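The pattern being referred to can be sketched as follows. The Event variants and DecodeError type here are hypothetical placeholders; the point is returning Ok(None) for unknown odd type bytes (safe to ignore) while failing on unknown even ones, mirroring the "it's OK to be odd" convention.

```rust
#[derive(Debug, PartialEq)]
enum Event {
    PaymentSuccessful,
    PaymentFailed,
}

#[derive(Debug)]
struct DecodeError;

// Hypothetical sketch of forward-compatible event decoding: a newer writer
// may introduce new odd-numbered event types that an older reader skips.
fn read_event(type_byte: u8) -> Result<Option<Event>, DecodeError> {
    match type_byte {
        0u8 => Ok(Some(Event::PaymentSuccessful)),
        1u8 => Ok(Some(Event::PaymentFailed)),
        t if t % 2 == 1 => Ok(None), // unknown odd type: ignore it
        _ => Err(DecodeError),       // unknown even type: hard failure
    }
}
```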
Actually, I'm currently reconsidering whether I want to re-use upstream serialization at all. I'm considering introducing a new abstraction in which every field/value would need an explicit TYPE/SERIALIZATION_TYPE/VERSION: u8 given, and which would also enforce that converting from one VERSION to another has to be explicitly implemented. I hope this way I can escape the mess that is rust-lightning's serialization logic, where we have to remember any caveats for backwards/forwards compat at all times when introducing changes. Any thoughts on that?
I dunno, all options are terrible. Some old rust-lightning structs still have a version:min_version two-byte prefix which goes "the version I am is X, the minimum version that can read this is Y", but that still requires the same level of awareness from devs. You could go the "no forwards-compatibility at all" route, which of course simplifies everything a ton, but then users can't run two copies of LDK on different devices and upgrade them out of lock-step, but maybe that's okay.
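For illustration, the version:min_version prefix scheme mentioned above could look like this (constants and error strings are hypothetical): a reader accepts data only if its own version is at least the writer's declared minimum.

```rust
// The version this hypothetical reader/writer implements.
const OUR_VERSION: u8 = 2;

fn write_version_prefix(out: &mut Vec<u8>, min_version: u8) {
    out.push(OUR_VERSION); // "the version I am is X"
    out.push(min_version); // "the minimum version that can read this is Y"
}

fn check_version_prefix(input: &[u8]) -> Result<&[u8], &'static str> {
    match input {
        // Accept if we meet the writer's minimum; hand back the payload.
        [_version, min_version, rest @ ..] if *min_version <= OUR_VERSION => Ok(rest),
        [_, _, ..] => Err("data was written by a newer, incompatible version"),
        _ => Err("truncated input"),
    }
}
```

As the comment notes, this makes refusal explicit but still relies on developers bumping min_version whenever a change breaks older readers.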
Force-pushed from 3c705d4 to a834254.
Force-pushed from cc83be6 to 799df1e.
Alright, would be great to keep moving forward to escape rebase-hell. 😅 Squashed commits now.
Force-pushed from aff85ae to 1566dcb.
```rust
	//}
}

// TODO: Figure out serialization more concretely - see issue #30
```
Also consider using impl_writeable_tlv_based_enum for enums.
src/event.rs
Outdated
```rust
	.notifier
	.wait_while(self.queue.lock().unwrap(), |queue| queue.0.is_empty())
	.unwrap();
locked_queue.0.front().unwrap().clone()
```
Should we store / return an Arc to avoid cloning?
Done with a45eb9e.
Now reverted this with 3b6ded0 for two reasons: a) I find using a plain Event preferable in the interface, but more importantly b) it turns out that UniFFI (see #25) doesn't support Arc'ed enum return values for the time being. It will itself wrap the Event in an Arc for the exposed interface, but sadly can't re-use our Arc in the case of an enum.
src/event.rs
Outdated
```rust
	}
}

impl<K: KVStorePersister> ReadableArgs<Arc<K>> for LdkLiteEventQueue<K> {
```
We'll want a Writeable implementation, too.
Why do we need Writeable for EventQueue? Readable/Writeable are already implemented for EventQueueSerWrapper, and the ReadableArgs implementation of EventQueue is merely a short form of reading the queue separately and passing it to something like EventQueue::new, which allows us to never expose EventQueueSerWrapper.
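The wrapper pattern being defended here can be sketched in miniature. This is a hypothetical, drastically simplified version (byte events, one-byte length), only meant to show that (de)serialization lives on a private newtype that never appears in the public API.

```rust
use std::collections::VecDeque;

// Private serialization newtype; the real EventQueueSerWrapper wraps the
// actual event queue contents and implements Readable/Writeable instead.
struct EventQueueSerWrapper(VecDeque<u8>);

impl EventQueueSerWrapper {
    fn write(&self) -> Vec<u8> {
        let mut out = vec![self.0.len() as u8]; // length prefix (sketch only)
        out.extend(self.0.iter().copied());
        out
    }

    fn read(input: &[u8]) -> Option<Self> {
        let (&len, rest) = input.split_first()?;
        if rest.len() < len as usize {
            return None;
        }
        // The caller would hand the decoded contents to something like
        // EventQueue::new, keeping this wrapper out of the public API.
        Some(EventQueueSerWrapper(rest[..len as usize].iter().copied().collect()))
    }
}
```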
src/event.rs
Outdated
```rust
pub(crate) fn next_event(&self) -> Arc<LdkLiteEvent> {
	let locked_queue = self
		.notifier
		.wait_while(self.queue.lock().unwrap(), |queue| queue.is_empty())
		.unwrap();
	Arc::clone(&locked_queue.front().unwrap())
}
```
Curious how this would work with shutdown. If a user has a loop calling next_event, do they need to wait for an event to come through to exit the loop? Would using wait_timeout_while be preferable?
Hm, yeah, that is a good question regarding how we imagine the overall API to be used (i.e., if we expect the user side to run entirely single-threaded or if there would be a background process blocking on this). I'm not the biggest fan of timeouts, but maybe we should expose multiple versions of this via the main API? One blocking, one blocking for a given timeout, and one returning a Future?
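The wait_timeout_while variant raised above could look roughly like this (a hypothetical free function standing in for a method on the queue): it returns None on timeout so a caller's loop can periodically check a shutdown flag instead of blocking forever.

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};
use std::time::Duration;

// Sketch only: timeout-aware counterpart to the blocking next_event.
fn next_event_timeout<E: Clone>(
    queue: &Mutex<VecDeque<E>>, notifier: &Condvar, timeout: Duration,
) -> Option<E> {
    let (locked_queue, wait_result) = notifier
        .wait_timeout_while(queue.lock().unwrap(), timeout, |q| q.is_empty())
        .unwrap();
    if wait_result.timed_out() {
        None // caller can check a shutdown flag here, then retry
    } else {
        locked_queue.front().cloned()
    }
}
```

A Future-returning variant would follow the same shape but park the task on an async notifier rather than a Condvar.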
Force-pushed from c2ae588 to 9d6324b.
Alright, after I had to make #11 async-based already, this now also includes the necessary changes to
Looks like the commits are a little mixed up. Might as well squash them.
Force-pushed from 1ff2855 to 417eb55.
Squashed and addressed latest feedback.
Pretty much LGTM if you want to squash
src/event.rs
Outdated
```rust
if locked_runtime.as_ref().is_none() {
	return;
}
```
Yeah, likely. Tagging @TheBlueMatt to confirm.
Force-pushed from 418dc62 to e0bfb64.
Squashed fixups without further changes.
Force-pushed from b2f3872 to e775a6b.
Based on #8. Dependent on #9. In this PR, we provide the event handling for LDK events and implement the structures and logic for emitting LdkLiteEvents.